 The Delphion Integrated View

Title: US6253179: Method and apparatus for multi-environment speaker verification


Country: US (United States of America)

 
Inventor: Beigi, Homayoon S.; Yorktown Heights, NY
Chaudhari, Upendra V.; Elmsford, NY
Maes, Stephane H.; Danbury, CT
Sorensen, Jeffrey S.; Seymour, CT

Assignee: International Business Machines Corporation, Armonk, NY

Published / Filed: 2001-06-26 / 1999-01-29

Application Number: US1999000240346

IPC Code: Advanced: G10L 17/00;
IPC-7: G10L 15/06; G10L 15/20;

ECLA Code: G10L17/04; G10L17/20;

U.S. Class: Current: 704/243; 704/246; 704/E17.006; 704/E17.014;
Original: 704/243; 704/246;

Field of Search: 704/243,244,245,246,250,233,273

Priority Number:
1999-01-29  US1999000240346

Abstract: A method for unsupervised environmental normalization for speaker verification using hierarchical clustering is disclosed. Training data (speech samples) are taken from T enrolled (registered) speakers over any one of M channels, e.g., different microphones, communication links, etc. For each speaker, a speaker model is generated containing a collection of distributions of audio feature data derived from that speaker's speech sample. A hierarchical speaker model tree is created, e.g., by merging similar speaker models on a layer-by-layer basis. Each speaker is also grouped into a cohort of similar speakers. For each cohort, one or more complementary speaker models are generated by merging speaker models outside that cohort. When training data from a new speaker to be enrolled is received over a new channel, the speaker model tree as well as the complementary models are updated. Consequently, adaptation to data from new environments is possible by incorporating such data into the verification model whenever it is encountered.
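
As a rough illustration of the hierarchical clustering described in the abstract (a minimal sketch, not the patented algorithm), the following Python snippet builds a speaker-model tree by repeatedly merging the closest pair of models. It assumes each speaker model is reduced to a single diagonal Gaussian over feature vectors and uses a symmetric divergence as the distance; the names SpeakerModel, merge, and build_tree, and the choice of distance, are illustrative and not taken from the patent.

# Illustrative sketch only (not the patented algorithm): each speaker model is
# reduced to a single diagonal Gaussian over feature vectors, and the tree is
# grown bottom-up by repeatedly merging the closest pair of models.
import numpy as np

class SpeakerModel:
    def __init__(self, name, mean, var, count):
        self.name = name        # speaker or merged-node label
        self.mean = mean        # per-dimension feature mean
        self.var = var          # per-dimension feature variance
        self.count = count      # number of feature frames behind the model
        self.children = []      # models merged into this node

def from_features(name, feats):
    """Estimate a diagonal-Gaussian model from a (frames x dims) array."""
    return SpeakerModel(name, feats.mean(axis=0), feats.var(axis=0) + 1e-6, len(feats))

def distance(a, b):
    """Symmetric divergence between two diagonal Gaussians (one simple choice)."""
    d = 0.5 * (a.var / b.var + b.var / a.var - 2
               + (a.mean - b.mean) ** 2 * (1 / a.var + 1 / b.var))
    return d.sum()

def merge(a, b):
    """Moment-match two models into a single parent node."""
    n = a.count + b.count
    mean = (a.count * a.mean + b.count * b.mean) / n
    var = (a.count * (a.var + a.mean ** 2) + b.count * (b.var + b.mean ** 2)) / n - mean ** 2
    parent = SpeakerModel(f"({a.name}+{b.name})", mean, var + 1e-6, n)
    parent.children = [a, b]
    return parent

def build_tree(models):
    """Greedily merge the closest pair of models until a single root remains."""
    nodes = list(models)
    while len(nodes) > 1:
        i, j = min(((i, j) for i in range(len(nodes)) for j in range(i + 1, len(nodes))),
                   key=lambda ij: distance(nodes[ij[0]], nodes[ij[1]]))
        a, b = nodes[i], nodes[j]
        nodes.pop(j); nodes.pop(i)       # pop the higher index first
        nodes.append(merge(a, b))
    return nodes[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    leaves = [from_features(f"spk{k}", rng.normal(loc=k % 3, size=(200, 12)))
              for k in range(6)]
    root = build_tree(leaves)
    print("root covers", root.count, "frames from", len(leaves), "speakers")

The toy data simply makes some of the synthetic speakers resemble each other so the greedy merging has structure to find; a real system would keep a collection of distributions per speaker model, as the abstract describes.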

Attorney, Agent or Firm: F. Chau & Associates, LLP

Primary / Asst. Examiners: Korzuch, William R.; Lerner, Martin


Family: None

First Claim (of 19):
What is claimed is: 1. A computer-implemented method, comprising:
  • obtaining training data from each of a plurality T of sources constituting an enrolled population, over a plurality M of channels;
  • developing models for each of said T sources based on said training data, each model containing a collection of distributions;
  • generating a hierarchical model tree based on said models of said T sources, wherein at least some merged models within layers of said hierarchical model tree are computed via partitioning or grouping with respect to channel properties; and
  • obtaining training data from a new source over a new channel for addition to said enrolled population, developing a new model based thereupon and updating said hierarchical model tree with said new model.
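
The final step of the claim (enrolling a new source and updating the tree) can be pictured with the following self-contained Python sketch. It assumes each tree node is summarized by just a feature mean and a frame count, and it inserts the new speaker by descending toward the nearest child while updating the merged statistics along the path; the names Node, absorb, and enroll, and the update rule itself, are hypothetical simplifications rather than the patent's procedure.

# Illustrative sketch only: each node is summarized by a feature mean and a
# frame count; enrolling a new speaker descends toward the nearest child,
# updating merged statistics on the way, then attaches the speaker as a leaf.
import numpy as np

class Node:
    def __init__(self, mean, count, children=None, name=""):
        self.mean = mean
        self.count = count
        self.children = children or []
        self.name = name

def absorb(node, mean, count):
    """Fold a new model's statistics into an existing merged node."""
    total = node.count + count
    node.mean = (node.count * node.mean + count * mean) / total
    node.count = total

def enroll(root, new_mean, new_count, name):
    """Insert a newly enrolled speaker and update the tree along the path."""
    node = root
    while node.children:
        absorb(node, new_mean, new_count)
        node = min(node.children, key=lambda c: np.linalg.norm(c.mean - new_mean))
    # 'node' is now the closest existing leaf: turn it into a merged parent
    # holding both the old leaf and the newly enrolled speaker.
    old_leaf = Node(node.mean, node.count, name=node.name)
    new_leaf = Node(new_mean, new_count, name=name)
    absorb(node, new_mean, new_count)
    node.children = [old_leaf, new_leaf]
    node.name = f"({old_leaf.name}+{name})"
    return new_leaf

if __name__ == "__main__":
    # Toy tree: two enrolled speakers under one root.
    a = Node(np.zeros(12), 100, name="spkA")
    b = Node(np.full(12, 3.0), 100, name="spkB")
    root = Node((a.mean + b.mean) / 2, 200, children=[a, b], name="root")
    enroll(root, np.full(12, 2.5), 150, "spkNew")
    print("root now covers", root.count, "frames")

This toy version does not rebuild the complementary (cohort background) models that the abstract also mentions; only the statistics on the path to the new leaf are changed.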


Forward References: 17 U.S. patents reference this one

U.S. References (10 backward references):
  • US5687287 (1997-11), Gandhi et al., Lucent Technologies Inc., "Speaker verification method and apparatus using mixture decomposition discrimination"
  • US5806029 (1998-09), Buhrke et al., AT&T Corp, "Signal conditioned minimum error rate training for continuous speech recognition"
  • US5963906 (1999-10), Turin, AT&T Corp, "Speech recognition training"
  • US6006184 (1999-12), Yamada et al., NEC Corporation, "Tree structured cohort selection for speaker recognition system"
  • US6038528 (2000-03), Mammone et al., T-Netix, Inc., "Robust speech processing with affine transform replicated data"
  • US6058205 (2000-05), Bahl et al., International Business Machines Corporation, "System and method for partitioning the feature space of a classifier in a pattern classification system"
  • US6073096 (2000-06), Gao et al., International Business Machines Corporation, "Speaker adaptation system and method based on class-specific pre-clustering training speakers"
  • US6073101 (2000-06), Maes, International Business Machines Corporation, "Text independent speaker recognition for transparent command ambiguity resolution and continuous access control"
  • US6081660 (2000-06), Macleod et al., The Australian National University, "Method for forming a cohort for use in identification of an individual"
  • US6107935 (2000-08), Comerford et al., International Business Machines Corporation, "Systems and methods for access filtering employing relaxed recognition constraints"
       
Foreign References: None

Other References:
  • Rosenberg et al., "Speaker background models for connected digit password speaker verification," 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, May 1996, pp. 81-84.*
  • Li et al., "Normalized discriminant analysis with application to a hybrid speaker-verification system," 1996 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 2, May 1996, pp. 681-684.

