The Delphion Integrated View
Title: US5768474: Method and system for noise-robust speech processing with cochlea filters in an auditory model

Country: US United States of America

Pages: 10

Inventor: Neti, Chalapathy V.; Boca Raton, FL

Assignee: International Business Machines Corporation, Armonk, NY

Published / Filed: 1998-06-16 / 1995-12-29

Application Number: US1995000581288

IPC Code: Advanced: G10L 15/02; G10L 15/20;
IPC-7: G10L 5/06; G10L 9/00;

ECLA Code: G10L15/02; S10L15/20; T05K999/99;

U.S. Class: Current: 704/235; 704/202; 704/232; 704/233; 704/243; 704/E15.004;
Original: 395/002.44; 395/002.41; 395/002.42; 395/002.11; 395/002.52;

Field of Search: 395/2.41,2.44,2.11,2.42,2.52,2.53,2.54

Priority Number:
1995-12-29  US1995000581288

Abstract:     A method for noise-robust speech processing with cochlea filters within a computer system is disclosed. The invention produces feature vectors from a segment of speech that are more robust to variations in the environment due to additive noise. A first output is produced by convolving a speech signal input with spatially dependent impulse responses that resemble cochlea filters. The temporal and spatial transients of the first output are then enhanced by taking a time derivative and a spatial derivative, respectively, to produce a second output. Next, all negative values of the second output are replaced with zeros. A feature vector is then obtained from each frame of the second output by multiple-resolution extraction. Finally, the cochlea filter parameters are optimized by minimizing the difference between a feature vector generated from a relatively noise-free speech signal input and one generated from a noisy speech signal input.
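The processing chain in the abstract (filterbank convolution, transient enhancement, half-wave rectification) can be sketched in a few lines of numpy. This is a minimal illustration, not the patented implementation: the gammatone filter shape, logarithmic center-frequency spacing, and channel count are assumptions standing in for the patent's "spatially dependent impulse responses that resemble cochlea filters," and the final parameter-optimization step is omitted.

```python
import numpy as np

def cochlea_filterbank(num_channels, filter_len, fs):
    """Bank of spatially dependent impulse responses resembling cochlea
    filters.  The gammatone shape used here is an assumption; the patent
    only requires impulse responses that resemble cochlea filters."""
    t = np.arange(filter_len) / fs
    # Center frequencies spaced logarithmically along the "cochlea" axis.
    cfs = np.geomspace(100.0, 0.8 * fs / 2, num_channels)
    bank = np.stack([
        t**3 * np.exp(-2 * np.pi * 1.019 * (24.7 + 0.108 * cf) * t)
        * np.cos(2 * np.pi * cf * t)
        for cf in cfs
    ])
    return bank / np.abs(bank).sum(axis=1, keepdims=True), cfs

def noise_robust_features(speech, fs, num_channels=32, filter_len=256):
    bank, _ = cochlea_filterbank(num_channels, filter_len, fs)
    # First output: convolve the speech signal with each cochlea filter.
    first = np.stack([np.convolve(speech, h, mode="same") for h in bank])
    # Enhance the temporal transient (derivative along time, axis 1),
    # then the spatial transient (derivative across channels, axis 0).
    second = np.gradient(first, axis=1)
    third = np.gradient(second, axis=0)
    # Replace all negative values with zeros (half-wave rectification).
    # Feature extraction and filter-parameter optimization are omitted.
    return np.maximum(third, 0.0)
```

The rectified output is a channels-by-samples "cochleagram" from which the per-frame feature vectors would then be extracted.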

Attorney, Agent or Firm: Kashimba, Paul T. ; Dillon, Andrew J. ;

Primary / Asst. Examiners: MacDonald, Allen R.; Sax, Robert Louis

Maintenance Status: E1 Expired
CC Certificate of Correction issued


Designated Country: DE GB 

Family: 3 known family members

First Claim (of 15):
What is claimed is:     1. A method for noise-robust speech processing with cochlea filters within a computer system, said method comprising the steps of:
  • convolving a speech signal input with impulse responses resembling said cochlea filters to produce a first output;
  • enhancing a temporal transient of said first output to produce a second output;
  • enhancing a spatial transient of said second output to produce a third output;
  • replacing all negative values of said third output with zeros to produce a fourth output;
  • extracting a feature vector from each frame of said fourth output by multiple resolution, wherein said each frame has a specified length, wherein said extracting step further comprises
    • defining a plurality of channels for said each frame;
    • for each of said plurality of channels, selecting a plurality of time intervals according to a center frequency of that particular channel; and
  • optimizing a plurality of cochlea filter parameters by utilizing said feature vector from said each frame to determine said cochlea filters.
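The extraction sub-steps of the claim (define channels for each frame; for each channel, select time intervals according to its center frequency) can be illustrated with a short numpy sketch. The specific rule used here, making the number of intervals proportional to the ratio of a channel's center frequency to the lowest one, is a hypothetical choice for illustration; the claim does not fix how the intervals are selected.

```python
import numpy as np

def multires_extract(frame, center_freqs):
    """Multiple-resolution feature extraction for one frame of the
    rectified output (shape: channels x samples).  The proportional
    interval rule below is an illustrative assumption, not the claimed
    method's exact selection rule."""
    features = []
    for channel, cf in zip(frame, center_freqs):
        # Higher-frequency channels are summarized at finer time
        # resolution: more (hence shorter) averaging intervals.
        n_intervals = int(np.clip(np.round(cf / center_freqs[0]),
                                  1, len(channel)))
        features.extend(chunk.mean()
                        for chunk in np.array_split(channel, n_intervals))
    return np.asarray(features)
```

With four channels at 100, 200, 400, and 800 Hz, this rule yields 1 + 2 + 4 + 8 = 15 interval means per frame.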


Forward References: 7 U.S. patents reference this one

U.S. References: Forward references (7) | Backward references (6)

Patent     Pub. Date  Inventor          Assignee                                     Title
US4400590  1983-08    Michelson         The Regents of the University of California  Apparatus for multichannel cochlear implant hearing aid system
US4536844  1985-08    Lyon              Fairchild Camera and Instrument Corporation  Method and apparatus for simulating aural response information
US5271397  1993-12    Seligman et al.   Cochlear Pty. Ltd.                           Multi-peak speech processor
US5331222  1994-07    Lin et al.        University of Maryland                       Cochlear filter bank with switched-capacitor circuits
US5388182  1995-02    Benedetto et al.  Prometheus, Inc.                             Nonlinear method and apparatus for coding and decoding acoustic signals with data compression and noise suppression using cochlear filters, wavelet analysis, and irregular sampling reconstruction
US5402493  1995-03    Goldstein         Central Institute For The Deaf               Electronic simulator of non-linear and active cochlear spectrum analysis
Foreign References: None

Other Abstract Info: DERABS G1997-334894

Other References:
  • Yang, X., et al., "Auditory Representation of Acoustic Signals," IEEE Trans. Information Theory, vol. 38, no. 2, pp. 834-836, Mar. 1992.
  • Etemad, K., "Phoneme Recognition by Multi-Resolution and Non-Causal Context," Proc. IEEE-SP, pp. 343-352, Sep. 1993.
  • Liu, W., et al., "Multiresolution Speech Analysis with an Analog Cochlear Model," Proc. IEEE-SP, pp. 433-436, Oct. 1992.
  • Yang, X., "Auditory Representation of Acoustic Signals," IEEE Trans. Information Theory, vol. 38, no. 2, pp. 824-839, Mar. 1992. (16 pages) Cited by 3 patents.
  • Neti, C., "Neuromorphic Speech Processing for Noisy Environments," IEEE Intl. Conf. on Neural Networks, vol. 7, pp. 4425-4430, Jul. 1994.
