Format: DOCX, 51 pages, 3.53 MB
Application of MATLAB in Speech Recognition (Matlab在语音识别中的应用.docx)

1. A GUI-based audio acquisition and processing system

Note: this experiment recognizes the eight isolated characters "东、北、大、学、中、荷、学、院" (the characters of 东北大学中荷学院, Northeastern University's Sino-Dutch College).

First build the GUI: drag the required controls onto the layout, then double-click each control and edit its properties. The main ones are String and Tag (the Tag is what the callback functions are keyed on); properties such as Value and Style also deserve attention and should not be overlooked in practice. One point needs explaining: the buttons shown in the figure all sit inside a button group, i.e. they are child controls of the group. Callbacks are therefore added at the button-group level: right-click the frame surrounding the three buttons and choose View Callbacks → SelectionChangeFcn, and the group's callback appears in the main function:

function uipanel1_SelectionChangeFcn(hObject, eventdata, handles)

The first button, "Record", is used as the example; the code for "Play" and "Save" follows (the listings appear as screenshots in the original document). That is the complete acquisition code. After the program runs, the interface shown in the figure appears: clicking "Record" and finishing a recording displays the corresponding waveform, and clicking "Save" writes the sound out in .wav format. This completes audio acquisition.

2. Speech processing and recognition

2.1 Opening a file

Speech processing starts by opening a file with the .wav extension. The control used here is not part of the button group but a standalone push button; the "Open" button's callback is:

function pushbutton1_Callback(hObject, eventdata, handles)
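Both the save step in section 1 and the open step here come down to .wav file I/O. Outside MATLAB, the same round trip can be sketched with Python's standard wave module; the generated tone below is only a stand-in for real recorded samples, and the filename demo.wav is arbitrary:

```python
import math
import struct
import wave

def save_wav(path, samples, fs=8000):
    """Write floats in [-1, 1] to a 16-bit mono WAV file."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit PCM
        w.setframerate(fs)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

def load_wav(path):
    """Read a 16-bit mono WAV file back into floats in [-1, 1]."""
    with wave.open(path, "rb") as w:
        raw = w.readframes(w.getnframes())
        vals = struct.unpack("<%dh" % (len(raw) // 2), raw)
        return [v / 32767 for v in vals], w.getframerate()

# Stand-in for a one-second recording: a 440 Hz tone.
fs = 8000
tone = [0.5 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
save_wav("demo.wav", tone, fs)
samples, rate = load_wav("demo.wav")
```

In the MATLAB GUI the equivalent work is done by the recording object and the save/open callbacks; this sketch only shows the file format side of that step.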

Here pushbutton1 is the "Open" button's Tag. The file-opening code is added under this callback, and the result of running it is shown in the figure.

2.2 Preprocessing

The callback is:

function pushbutton2_Callback(hObject, eventdata, handles)

The result is shown in the figure.

2.3 Short-time energy

The short-time energy callback is:

function pushbutton3_Callback(hObject, eventdata, handles)

with its code below it (shown as a screenshot in the original document).

2.4 Endpoint detection

One caveat first: to avoid later callbacks being unable to use variables computed earlier, each later callback in fact repeats the earlier processing stages. This obviously makes the program long-winded, and it is a place worth refactoring later.

function pushbutton4_Callback(hObject, eventdata, handles)
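Short-time energy (section 2.3) and a simple threshold on it for endpoint detection (section 2.4) can be sketched in plain Python; the frame length, shift, and threshold below are illustrative values, not the ones used in the MATLAB callbacks:

```python
def short_time_energy(signal, frame_len, frame_inc):
    """Sum of squared samples in each (possibly overlapping) frame."""
    n = (len(signal) - frame_len) // frame_inc + 1
    return [sum(s * s for s in signal[i * frame_inc:i * frame_inc + frame_len])
            for i in range(n)]

def endpoints(energy, threshold):
    """Index of the first and last frame whose energy exceeds the threshold."""
    voiced = [i for i, e in enumerate(energy) if e > threshold]
    return (voiced[0], voiced[-1]) if voiced else (None, None)

# Silence, a burst of "speech", then silence again.
sig = [0.0] * 20 + [1.0, -1.0] * 10 + [0.0] * 20
energy = short_time_energy(sig, frame_len=10, frame_inc=5)
start, end = endpoints(energy, threshold=0.5)
```

A real detector (like the one behind pushbutton4_Callback) would combine energy with other features such as zero-crossing rate, but the frame-then-threshold structure is the same.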

2.5 Generating templates

The code this feature shares with the sections above is omitted; only the newly added code is given.

2.6 Speech recognition

The opened utterance is matched against a library of pre-recorded speech, using the DTW algorithm. Once recognition finishes, the recognized character is shown in the corresponding text box. The code, before/after screenshots of a run, and the overall GUI appear in the original document.

Summary

The experiment recognizes the characters "东、北、大、学、中、荷、学、院", provided the template recordings themselves are used as the test samples against the library; this already guarantees the accuracy, which shows that the algorithm is correct and only needs optimization. When live recordings are matched against the templates, however, high accuracy cannot be guaranteed, which shows that feature-parameter extraction is not yet good enough.
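The matching in section 2.6 relies on DTW. A minimal pure-Python version of the classic dynamic-programming recurrence (the standard algorithm, not the document's exact MATLAB implementation; the 1-D sequences below are illustrative stand-ins for MFCC feature vectors):

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance between two sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j]: cost of the best warping path aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],       # insertion
                                 D[i][j - 1],       # deletion
                                 D[i - 1][j - 1])   # match
    return D[n][m]

# A template matches a time-stretched copy of itself better than a different word.
template = [1, 2, 3, 2, 1]
stretched = [1, 1, 2, 2, 3, 3, 2, 2, 1, 1]
other = [5, 5, 5, 5, 5]
```

Isolated-word recognition then just picks the library template with the smallest DTW distance to the test utterance's feature sequence.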

The guiding principle of feature-parameter extraction is that within-class distances should be as small as possible and between-class distances as large as possible; this is a place for future improvement. The program also needs optimizing: generate a template library once, then recognize test utterances against that library, so that the library stands alone and does not have to be regenerated for every test, improving run-time. Continuous speech recognition could be attempted later, given the chance.

Attachments

These are all the code files. The mfcc.mat file is generated while the program runs; the test folder holds the recorded templates. There are six .m files, as follows:

1 WienerScalart96.m

function output=WienerScalart96(signal,fs,IS)
% output=WIENERSCALART96(signal,fs,IS)
% Wiener filter based on tracking a priori SNR using the Decision-Directed
% method, proposed by Scalart et al 96. In this method it is assumed that
% SNRpost=SNRprior+1. Based on this the Wiener filter can be adapted to a
% model like Ephraim's model in which we have a gain function which is a
% function of a priori SNR, and a priori SNR is being tracked using the
% Decision-Directed method.
% Author: Esfandiar Zavarehei
% Created: MAR-05

if (nargin<3 | isstruct(IS))
    IS=.25; %Initial Silence or Noise-Only part in seconds
end
W=fix(.025*fs); %Window length is 25 ms
SP=.4; %Shift percentage is 40% (10 ms)
wnd=hamming(W);

if (nargin>=3 & isstruct(IS))%This option is for compatibility with another programme
    W=IS.windowsize;
    SP=IS.shiftsize/W;
    %nfft=IS.nfft;
    wnd=IS.window;
    if isfield(IS,'IS')
        IS=IS.IS;
    else
        IS=.25;
    end
end
% .....UP TO HERE

pre_emph=0;
signal=filter([1 -pre_emph],1,signal);

NIS=fix((IS*fs-W)/(SP*W)+1); %number of initial silence segments
y=segment(signal,W,SP,wnd); % This function chops the signal into frames
Y=fft(y);
YPhase=angle(Y(1:fix(end/2)+1,:)); %Noisy speech phase
Y=abs(Y(1:fix(end/2)+1,:)); %Spectrogram
numberOfFrames=size(Y,2);
FreqResol=size(Y,1);

N=mean(Y(:,1:NIS)')'; %initial noise power spectrum mean
LambdaD=mean((Y(:,1:NIS)').^2)'; %initial noise power spectrum variance
alpha=.99; %used in smoothing xi (for Decision-Directed estimation of a priori SNR)
NoiseCounter=0;
NoiseLength=9; %This is a smoothing factor for the noise updating
G=ones(size(N)); %Initial gain used in calculation of the new xi
Gamma=G;

X=zeros(size(Y)); % Initialize X (memory allocation)

h=waitbar(0,'Wait...');

for i=1:numberOfFrames
    %VAD and Noise Estimation START
    if i<=NIS % If initial silence, ignore VAD
        SpeechFlag=0;
        NoiseCounter=100;
    else % Else do VAD
        [NoiseFlag, SpeechFlag, NoiseCounter, Dist]=vad(Y(:,i),N,NoiseCounter); %Magnitude Spectrum Distance VAD
    end

    if SpeechFlag==0 % If not speech, update noise parameters
        N=(NoiseLength*N+Y(:,i))/(NoiseLength+1); %Update and smooth noise mean
        LambdaD=(NoiseLength*LambdaD+(Y(:,i).^2))./(1+NoiseLength); %Update and smooth noise variance
    end
    %VAD and Noise Estimation END

    gammaNew=(Y(:,i).^2)./LambdaD; %A posteriori SNR
    xi=alpha*(G.^2).*Gamma+(1-alpha).*max(gammaNew-1,0); %Decision-Directed method for a priori SNR
    Gamma=gammaNew;
    G=(xi./(xi+1));

    X(:,i)=G.*Y(:,i); %Obtain the new cleaned value

    waitbar(i/numberOfFrames,h,num2str(fix(100*i/numberOfFrames)));
end

close(h);
output=OverlapAdd2(X,YPhase,W,SP*W); %Overlap-add synthesis of speech
output=filter(1,[1 -pre_emph],output); %Undo the effect of pre-emphasis

function ReconstructedSignal=OverlapAdd2(XNEW,yphase,windowLen,ShiftLen)
%Y=OverlapAdd(X,A,W,S);

%Y is the signal reconstructed from its spectrogram. X is a matrix
%with each column being the fft of a segment of signal. A is the phase
%angle of the spectrum, which should have the same dimension as X. If it is
%not given, the phase angle of X is used, which in the case of real values
%is zero (assuming that it is the magnitude). W is the window length of
%time-domain segments; if not given, the length is assumed to be twice as
%long as the fft window length. S is the shift length of the segmentation
%process (for example, in the case of non-overlapping signals it is equal
%to W, and in the case of 50% overlap it is equal to W/2; if not given,
%W/2 is used). Y is the reconstructed time-domain signal.
%Sep-04
%Esfandiar Zavarehei

if nargin<2
    yphase=angle(XNEW);
end
if nargin<3
    windowLen=size(XNEW,1)*2;
end
if nargin<4
    ShiftLen=windowLen/2;
end
if fix(ShiftLen)~=ShiftLen
    ShiftLen=fix(ShiftLen);
    disp('The shift length has to be an integer as it is the number of samples.')
    disp(['shift length is fixed to ' num2str(ShiftLen)])
end

[FreqRes FrameNum]=size(XNEW);

Spec=XNEW.*exp(j*yphase);

if mod(windowLen,2) %if FreqResol is odd
    Spec=[Spec;flipud(conj(Spec(2:end,:)))];
else
    Spec=[Spec;flipud(conj(Spec(2:end-1,:)))];
end
sig=zeros((FrameNum-1)*ShiftLen+windowLen,1);
weight=sig;
for i=1:FrameNum
    start=(i-1)*ShiftLen+1;
    spec=Spec(:,i);
    sig(start:start+windowLen-1)=sig(start:start+windowLen-1)+real(ifft(spec,windowLen));
end
ReconstructedSignal=sig;

function Seg=segment(signal,W,SP,Window)
% SEGMENT chops a signal to overlapping windowed segments

% A= SEGMENT(X,W,SP,WIN) returns a matrix whose columns are segmented
% and windowed frames of the input one-dimensional signal, X. W is the
% number of samples per window, default value W=256. SP is the shift
% percentage, default value SP=0.4. WIN is the window that is multiplied by
% each segment and its length should be W. The default window is a hamming
% window.
% 06-Sep-04
% Esfandiar Zavarehei

if nargin<3
    SP=.4;
end
if nargin<2
    W=256;
end
if nargin<4
    Window=hamming(W);
end
Window=Window(:); %make it a column vector

L=length(signal);
SP=fix(W.*SP);
N=fix((L-W)/SP+1); %number of segments

Index=(repmat(1:W,N,1)+repmat((0:(N-1))'*SP,1,W))';
hw=repmat(Window,1,N);
Seg=signal(Index).*hw;

function [NoiseFlag, SpeechFlag, NoiseCounter, Dist]=vad(signal,noise,NoiseCounter,NoiseMargin,Hangover)
%[NOISEFLAG, SPEECHFLAG, NOISECOUNTER, DIST]=vad(SIGNAL,NOISE,NOISECOUNTER,NOISEMARGIN,HANGOVER)
%Spectral Distance Voice Activity Detector

%SIGNAL is the current frame's magnitude spectrum, which is to be labeled
%noise or speech. NOISE is the noise magnitude spectrum template
%(estimation). NOISECOUNTER is the number of immediately previous noise
%frames. NOISEMARGIN (default 3) is the spectral distance threshold.
%HANGOVER (default 8) is the number of noise segments after which the
%SPEECHFLAG is reset (goes to zero). NOISEFLAG is set to one if the
%segment is labeled as noise. NOISECOUNTER returns the number of previous
%noise segments; this value is reset (to zero) whenever a speech segment
%is detected. DIST is the spectral distance.
%Saeed Vaseghi
%edited by Esfandiar Zavarehei
%Sep-04

if nargin<4
    NoiseMargin=3;
end
if nargin<5
    Hangover=8;
end
if nargin<3
    NoiseCounter=0;
end

FreqResol=length(signal);

SpectralDist= 20*(log10(signal)-log10(noise));
SpectralDist(find(SpectralDist<0))=0;

Dist=mean(SpectralDist);
if (Dist < NoiseMargin)
    NoiseFlag=1;
    NoiseCounter=NoiseCounter+1;
else
    NoiseFlag=0;
    NoiseCounter=0;
end

% Detect noise-only periods and reset the speech flag after the hangover
if (NoiseCounter > Hangover)
    SpeechFlag=0;
else
    SpeechFlag=1;
end

2 mfcc.m

function cc=mfcc(k)
%----------------------------------------------------------
% cc=mfcc(k)  computes the MFCC coefficients of speech k
%----------------------------------------------------------
% M is the number of filters, N the number of samples per frame
M=24; N=256;
% Normalized mel filterbank coefficients
bank=melbankm(M,N,22050,0,0.5,'m');
figure;
plot(linspace(0,N/2,129),bank);
title('Mel-Spaced Filterbank');
xlabel('Frequency Hz');
bank=full(bank);
bank=bank/max(bank(:));

% DCT coefficients, 12*24
for i=1:12
    j=0:23;
    dctcoef(i,:)=cos((2*j+1)*i*pi/(2*24));
end
% Normalized cepstral liftering window
w=1+6*sin(pi*[1:12]./12);
w=w/max(w);
% Pre-emphasis
AggrK=double(k);
AggrK=filter([1 -0.9375],1,AggrK);
% Framing
FrameK=enframe(AggrK,N,80);
% Windowing
for i=1:size(FrameK,1)
    FrameK(i,:)=FrameK(i,:).*hamming(N)';
end
FrameK=FrameK';
% Power spectrum
S=(abs(fft(FrameK))).^2;
disp('Power spectrum:')
figure;
plot(S);
axis([1,size(S,1),0,2]);
title('Power Spectrum (M=24, N=256)');
xlabel('Frame');
ylabel('Frequency Hz');
colorbar;
% Pass the power spectrum through the filterbank
P=bank*S(1:129,:);
% Take the logarithm, then the discrete cosine transform
D=dctcoef*log(P);
% Cepstral liftering window
for i=1:size(D,2)
    m(i,:)=(D(:,i).*w')';
end
% Difference (delta) coefficients
dtm=zeros(size(m));
for i=3:size(m,1)-2
    dtm(i,:)=-2*m(i-2,:)-m(i-1,:)+m(i+1,:)+2*m(i+2,:);
end
dtm=dtm/3;
% Concatenate the MFCC parameters and the first-order delta-MFCC parameters
cc=[m dtm];
% Drop the first and last two frames, whose first-order delta parameters are zero
cc=cc(3:size(m,1)-2,:);

3 getpoint.m

function [StartPoint,EndPoint]=getpoint(k,fs)
%GETPOINT Summary of this function goes here
%   Detailed explanation goes here

signal=WienerScalart96(k,fs); %Wiener de-noising
sigLength=length(signal); %signal length
t=(0:sigLength-1)/fs; %time axis of the signal
FrameLen = round(0.012/max(t)*sigLength); %length of each frame
FrameInc = round(FrameLen/3); %frame overlap, set to 1/3 of the frame length (between 1/3 and 1/2 is typical)
tmp=enframe(signal(1:end), FrameLen, FrameInc);
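The heart of WienerScalart96.m above is the decision-directed tracking of the a priori SNR and the Wiener gain xi/(xi+1). For a single frequency bin, the per-frame update can be sketched in Python (the frame powers and noise variance below are made-up numbers; the MATLAB code applies the same recurrence vectorised over all bins, with VAD-driven noise updating on top):

```python
def wiener_gains(powers, noise_var, alpha=0.99):
    """Track the a priori SNR with the decision-directed rule and
    return the Wiener gain G = xi / (xi + 1) for each frame."""
    G, gamma_prev = 1.0, 1.0           # initial gain and a posteriori SNR
    gains = []
    for p in powers:
        gamma_new = p / noise_var      # a posteriori SNR of this frame
        # Decision-directed a priori SNR estimate:
        xi = alpha * (G ** 2) * gamma_prev + (1 - alpha) * max(gamma_new - 1.0, 0.0)
        G = xi / (xi + 1.0)            # Wiener gain
        gamma_prev = gamma_new
        gains.append(G)
    return gains

# Two noise-level frames followed by two high-SNR frames.
gains = wiener_gains([1.0, 1.0, 100.0, 100.0], noise_var=1.0)
```

The smoothing constant alpha=0.99 matches the MATLAB listing; the gain stays small while the frame power sits at the noise floor and climbs toward 1 once the SNR is high.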
