Title: US5787394: State-dependent speaker clustering for speaker adaptation


Country: US (United States of America)

Pages: 12
 
Inventor: Bahl, Lalit Rai; Amawalk, NY
Gopalakrishnan, Ponani; Yorktown Heights, NY
Nahamoo, David; White Plains, NY
Padmanabhan, Mukund; Ossining, NY

Assignee: International Business Machines Corporation, Armonk, NY

Published / Filed: 1998-07-28 / 1995-12-13

Application Number: US1995000572223

IPC Code: Advanced: G10L 15/06;
IPC-7: G10L 5/06;

ECLA Code: G10L15/07; S10L15/063C;

U.S. Class: Current: 704/238; 704/231; 704/236; 704/239; 704/245; 704/E15.011;
Original: 704/238; 704/231; 704/236; 704/239; 704/245;

Field of Search: 395/2.45,2.52,2.54,2.62,2.63 704/236,238,243,245,253,254

Priority Number:
1995-12-13  US1995000572223

Abstract:     A system and method for adaptation of a speaker independent speech recognition system for use by a particular user. The system and method gather acoustic characterization data from a test speaker and compare the data with acoustic characterization data generated for a plurality of training speakers. A match score is computed between the test speaker's acoustic characterization for a particular acoustic subspace and each training speaker's acoustic characterization for the same acoustic subspace. The training speakers are ranked for the subspace according to their scores and a new acoustic model is generated for the test speaker based upon the test speaker's acoustic characterization data and the acoustic characterization data of the closest matching training speakers. The process is repeated for each acoustic subspace.
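Illustrative sketch (not from the patent itself): the per-subspace matching and ranking described in the abstract could be realized roughly as below, under the assumption that each speaker's acoustic characterization of a subspace is a diagonal-covariance Gaussian over feature vectors and that the match score is the average log-likelihood of the test speaker's adaptation frames under each training speaker's Gaussian. All function and variable names (gaussian_log_likelihood, rank_training_speakers, training_models) are hypothetical.

    import numpy as np

    def gaussian_log_likelihood(frames, mean, var):
        """Average per-frame log-likelihood under a diagonal-covariance Gaussian."""
        diff = frames - mean
        ll = -0.5 * (np.log(2.0 * np.pi * var) + diff ** 2 / var)
        return ll.sum(axis=1).mean()

    def rank_training_speakers(test_frames, training_models):
        """Rank training speakers for one acoustic subspace by match score.

        test_frames     : (N, D) adaptation frames aligned to this subspace.
        training_models : dict speaker_id -> (mean, var) for this subspace.
        Returns speaker ids ordered from closest match to farthest.
        """
        scores = {spk: gaussian_log_likelihood(test_frames, mean, var)
                  for spk, (mean, var) in training_models.items()}
        return sorted(scores, key=scores.get, reverse=True)

Repeating this ranking independently for every acoustic subspace yields the per-subspace lists of closest training speakers that the abstract refers to.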

Primary / Asst. Examiners: Hudspeth, David R.; Opsasnick, Michael N.


Family: None

First Claim (of 23):
We claim:     1. A method for adapting the parameters of a speech recognition system during a training process, to better recognize speech of a particular test speaker comprising the steps of:
  • calculating the acoustic characterization of a plurality of training speakers for all acoustic subspaces of an acoustic space, the acoustic characterizations being individually identifiable for each training speaker for each acoustic subspace;
  • calculating the acoustic characterization of a test speaker from adaptation data provided by said test speaker for acoustic subspaces of the acoustic space;
  • computing a match score between the test speaker's characterization for each acoustic subspace, and each training speaker's characterization for the same acoustic subspace;
  • ranking each of the training speakers in the acoustic subspace based upon the score; and
  • for each acoustic subspace, generating a re-estimated acoustic model for the particular acoustic subspace using individually identifiable data respectively derived from the one or more training speakers closest to the test speaker for that acoustic subspace, the re-estimated acoustic model for each acoustic subspace being used during a decoding process.
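Illustrative sketch continued (hypothetical, not the patent's prescribed procedure): the final generating step of the claim could be approximated by pooling the test speaker's adaptation frames for a subspace with the frames of the k closest-ranked training speakers and re-estimating a Gaussian from the pooled data. The choice k=5, the variance floor, and the function name reestimate_subspace_model are illustrative assumptions.

    import numpy as np

    def reestimate_subspace_model(test_frames, training_data, ranking, k=5):
        """Re-estimate one subspace's Gaussian from the test speaker's frames
        pooled with the frames of the k closest training speakers.

        test_frames   : (N, D) adaptation frames aligned to this subspace.
        training_data : dict speaker_id -> (M, D) frames for this subspace.
        ranking       : speaker ids ordered closest-first (see previous sketch).
        """
        pooled = np.vstack([test_frames] + [training_data[spk] for spk in ranking[:k]])
        mean = pooled.mean(axis=0)
        var = pooled.var(axis=0) + 1e-6   # small floor keeps variances positive
        return mean, var

The re-estimated per-subspace models would then replace the corresponding speaker-independent models during decoding.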



Forward References: 33 U.S. patents reference this patent

       
U.S. References: 33 forward references; 9 backward references (listed below)

Patent  Pub. Date  Inventor  Assignee  Title
US4817156  1989-03  Bahl et al.  International Business Machines Corporation  Rapidly training a speech recognizer to a subsequent speaker given training data of a reference speaker
US4852173  1989-07  Bahl et al.  International Business Machines Corporation  Design and construction of a binary-tree system for language modelling
US4922539  1990-05  Rajasekaran et al.  Texas Instruments Incorporated  Method of encoding speech signals involving the extraction of speech formant candidates in real time
US5033087  1991-07  Bahl et al.  International Business Machines Corp.  Method and apparatus for the automatic determination of phonological rules as for a continuous speech recognition system
US5241619  1993-08  Schwartz et al.  Bolt Beranek And Newman Inc.  Word dependent N-best search method
US5276766  1994-01  Bahl et al.  International Business Machines Corporation  Fast algorithm for deriving acoustic prototypes for automatic speech recognition
US5293584  1994-03  Brown et al.  International Business Machines Corporation  Speech recognition system for natural language translation
US5488652  1996-01  Bielby et al.  Northern Telecom Limited  Method and apparatus for training speech recognition algorithms for directory assistance applications
US5497447  1996-03  Bahl et al.  International Business Machines Corporation  Speech coding apparatus having acoustic prototype vectors generated by tying to elementary models and clustering around reference vectors
       
Foreign References: None

Other References:
  • "A tree Based Statistical Language Model for Natural Language Speech Recognition", Bahl et al, Jul. '89, IEEE Transactions on Acoustics, Speech, and Signal Processing.

