CNN MATLAB Code for Large Images (0628)

A convolutional neural network processing module for large images

Contents

%======= cnnExercise.m
%======= cnnConvolve.m
%======= cnnPool.m
%======= data_process.m
%======= feedForwardAutoencoder.m
%======= displayColorNetwork.m
%======= softmaxCost.m
%======= softmaxPredict.m
%======= softmaxTrain.m

%======= cnnExercise.m

% This section is based on the "Working with Large Images" (convolution) exercise in the UFLDL tutorial.
% The image data and the modules shared across UFLDL exercises are omitted here; those files and data must be obtained separately from the UFLDL website.

%% CS294A/CS294W Convolutional Neural Networks Exercise

% Instructions
% This file contains code that helps you get started on the convolutional
% neural networks exercise. In this exercise, you will only need to modify
% cnnConvolve.m and cnnPool.m. You will not need to modify this file.

%%======================================================================

%% STEP 0: Initialization

% Here we initialize some parameters used for the exercise.

clear all;
clc;
close all;

imageDim = 64;        % image dimension; the large images are 64*64
imageChannels = 3;    % number of channels (RGB, so 3)
patchDim = 8;         % patch dimension; the sampled patches are 8*8
% numPatches = 50000; % number of patches sampled from the large images
numPatches = 1500;    % number of patches
visibleSize = patchDim * patchDim * imageChannels;  % number of input units = 8*8*3 = 192
outputSize = visibleSize;  % number of output units; the same as the number of input units
hiddenSize = 400;     % number of hidden units

% hiddenSize is the number of features to be extracted from the large images;
% it is much smaller than 64*64 = 4096.
% The number of features is also the number of convolution kernels; for a 2-D
% image, each kernel is a 2-D filter, so the hidden layer is sparse relative to
% the dimensionality of the large images.
% Each kernel's coefficients are the patchDim*patchDim filter weights of the
% network plus a bias value.
% We build a network with patchDim*patchDim*imageChannels input units, hiddenSize
% hidden units, and the same number of output units as input units, trained as a
% sparse autoencoder.
% Its training inputs are patchDim*patchDim patches sampled from the large images;
% its output images have the same size and pixel layout.
% After training, we obtain hiddenSize features of the large-image set, i.e. the
% convolution kernels (filters).
% Each kernel's coefficients are the connection weights (W) and bias (b) between
% one hidden unit and the input units; each 2-D kernel (filter) therefore has
% size patchDim*patchDim.

%======================================

% Each of the hiddenSize feature kernels is convolved with (i.e. used to filter)
% every large image, producing one feature map per kernel per image;
% these feature maps make the subsequent softmax classification easier.

%================================================

% Convolution implements the input-layer-to-hidden-layer computation of the deep
% network: it produces the feature map of a large image after filtering with the
% corresponding kernel. Each kernel has one output node and one bias value;
% convolving one kernel over all pixels of a large image yields that kernel's
% feature map at its output node.
% A convolutional network avoids the heavy input-to-hidden computation of a fully
% connected network: it uses a locally connected architecture, restricting the
% connections between hidden units and input units so that each hidden unit is
% connected only to a small contiguous region of the input image.

%================================================
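
% As an illustration of the filtering step described above, a minimal sketch of
% how one feature map could be computed with conv2 (assuming im is one 64*64*3
% image, featureNum indexes one kernel, patches were vectorized channel by
% channel via patch(:), and WT = W*ZCAWhite, bT = b - WT*meanPatch, the whitened
% weights and adjusted bias discussed under STEP 1; these variable names are
% illustrative, not from this exercise):
% convolvedImage = zeros(imageDim - patchDim + 1, imageDim - patchDim + 1);
% for channel = 1:imageChannels
%     offset = (channel - 1) * patchDim * patchDim;
%     feature = reshape(WT(featureNum, offset+1 : offset+patchDim*patchDim), patchDim, patchDim);
%     feature = rot90(feature, 2);   % flip the kernel so conv2 performs a correlation
%     convolvedImage = convolvedImage + conv2(im(:, :, channel), feature, 'valid');
% end
% convolvedImage = 1 ./ (1 + exp(-(convolvedImage + bT(featureNum))));   % sigmoid activation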

epsilon = 0.1;   % epsilon for ZCA whitening
poolDim = 19;    % dimension of pooling region
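
% As a quick check on these settings (a rough calculation, assuming the
% non-overlapping mean pooling used in this exercise): each 64*64 image and each
% kernel give a feature map of imageDim - patchDim + 1 = 57 pixels per side, and
% pooling over 19*19 regions leaves floor(57/19) = 3 values per side, i.e. a
% 3*3 pooled map per kernel and per image.
% convolvedDim = imageDim - patchDim + 1;        % 57
% pooledDim    = floor(convolvedDim / poolDim);  % 3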

%%======================================================================

%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
% features from color patches. If you have completed the linear decoder
% exercise, use the features that you have obtained from that exercise,
% loading them into optTheta. Recall that we have to keep around the
% parameters used in whitening (i.e., the ZCA whitening matrix and the
% meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:

optTheta = zeros(2*hiddenSize*visibleSize + hiddenSize + visibleSize, 1);
% hiddenSize is the number of hidden-layer biases; visibleSize is the number of output-layer biases
ZCAWhite = zeros(visibleSize, visibleSize);  % ZCA whitening matrix; whitening decorrelates the patch pixels and equalizes their variances
meanPatch = zeros(visibleSize, 1);           % the mean patch (per-pixel mean over all sampled patches)

% FeaturesStruct = load('STL10Features.mat');  % is this the only way to read it?
% optTheta = FeaturesStruct.optTheta;
% ZCAWhite = FeaturesStruct.ZCAWhite;
% meanPatch = FeaturesStruct.meanPatch;
load STL10Features.mat

%%--------------------------------------------------------------------

% Display and check to see that the features look good

W = reshape(optTheta(1:visibleSize*hiddenSize), hiddenSize, visibleSize);
% visibleSize*hiddenSize is the number of connection weights from the input layer to the hidden layer
b = optTheta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize);
% hiddenSize is the number of hidden-layer biases
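
% The two index ranges above follow the usual UFLDL stacking of autoencoder
% parameters, optTheta = [W1(:); W2(:); b1; b2] (an assumption inferred from the
% offsets used here, not spelled out in this file). Unpacking all four blocks
% would look like:
% W1 = reshape(optTheta(1 : hiddenSize*visibleSize), hiddenSize, visibleSize);                            % encoder weights (the W above)
% W2 = reshape(optTheta(hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize), visibleSize, hiddenSize);   % decoder weights
% b1 = optTheta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize);                        % hidden-layer biases (the b above)
% b2 = optTheta(2*hiddenSize*visibleSize+hiddenSize+1 : end);                                             % output-layer biases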

% sizeW = size(W)
% sizeb = size(b)

figure(1)

displayColorNetwork((W*ZCAWhite)');

% Since W holds the coefficients of the kernels that filter the large images,
% multiplying W by ZCAWhite folds the whitening step into the filtering itself.
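
% Concretely, since each patch is preprocessed as ZCAWhite*(patch - meanPatch),
% the hidden activation sigmoid(W*ZCAWhite*(x - meanPatch) + b) can be rewritten
% with precomputed quantities, so filtering and whitening become a single matrix
% (a small sketch; the names WT and bT are illustrative, not from this file):
% WT = W * ZCAWhite;          % combined filtering-plus-whitening weights
% bT = b - WT * meanPatch;    % bias adjusted for the subtracted mean patch
% % for any raw patch x:  sigmoid(WT*x + bT) equals sigmoid(W*(ZCAWhite*(x - meanPatch)) + b)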

%%======================================================================

%% STEP 2: Implement and test convolution and pooling
% In this step, you will implement convolution and pooling, and test them
% on a small part of the data set to ensure that you have implemented
% these two functions correctly. In the next step, you will actually
% convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
% Implement convolution in the function cnnConvolve in cnnConvolve.m
% Note that we have to preprocess the images in the exact same way
% we preprocessed the patches before we can obtain the feature activations.

% Imag = load('stlTrainSubset.mat');  % loads numTrainImages, trainImages, trainLabels
% numimg = 10;
% numTrainImages = numimg;
% trainImages = Imag.trainImages(:, :, :, 1:numimg);
% trainLabels = Imag.trainLabels(1:numimg);

%
% figure(2)
% imshow(Imag.trainImages(:, :, :, 20))

%%%%% Use only the first 8 images for testing
% convImages = trainImages(:, :, :, 1:8);

%%% NOTE: Implement cnnConvolve in cnnConvolve.m first!
% convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);
% patchDim is the size of the sampled patches; hiddenSize is the number of features

%%% STEP 2b: Checking your convolution
%% To ensure that you have convolved the features correctly, we have
%% provided some code to compare the results of your convolution with
%% activations from the sparse autoencoder

%%% For 1000 random points

% for i = 1:1000
%     featureNum = randi([1, hiddenSize]);
%     imageNum = randi([1, 8]);
%     imageRow = randi([1, imageDim - patchDim + 1]);
%     imageCol = randi([1, imageDim - patchDim + 1]);
%
%     patch = convImages(imageRow:imageRow+patchDim-1, imageCol:imageCol+patchDim-1, :, imageNum);
%     % all RGB values of one small 8*8 patch;
%     % imageRow:imageRow+patchDim-1 selects the rows, imageCol:imageCol+patchDim-1 selects the columns
%     patch = patch(:);
%     patch = patch - meanPatch;   % meanPatch holds the mean over all sampled patches of each of the 8*8*3 pixels
%     patch = ZCAWhite * patch;
%     features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);
%     disp('feedForwardAutoencoder results are:')
%     size_feedForwardAutoencoder = size(features)
%     pause
%

%     %%% Compare the features obtained from the linearDecoder exercise with the features obtained by convolution
%     if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
%         fprintf('Convolved feature does not match activation from autoencoder\n');
%         fprintf('Feature Number    : %d\n', featureNum);
%         fprintf('Image Number      : %d\n', imageNum);
%         fprintf('Image Row         : %d\n', imageRow);
%         fprintf('Image Column      : %d\n', imageCol);
%         fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
%         fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
%         error('Convolved feature does not match activation from autoencoder');
%     end
% end

%% disp('Congratulations! Your convolution code passed the test.');

%%% STEP 2c: Implement pooling
%% Implement pooling in the function cnnPool in cnnPool.m
%% NOTE: Implement cnnPool in cnnPool.m first!

% size_convolvedFeatures = size(convolvedFeatures)
% pooledFeatures = cnnPool(poolDim, convolvedFeatures);
% size(pooledFeatures)    = [numFeatures, numImages, rows after pooling, columns after pooling]
% poolDim = 19;
% size(convolvedFeatures) = [numFeatures, numImages, rows after convolution, columns after convolution]
% size_pooledFeatures = size(pooledFeatures)
% pause
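
% As a sketch of what non-overlapping mean pooling over poolDim*poolDim regions
% amounts to, consistent with the size comments above (an illustration, not
% necessarily the original cnnPool.m implementation):
% [numFeatures, numImages, convolvedDim, ~] = size(convolvedFeatures);
% numPooled = floor(convolvedDim / poolDim);
% pooledFeatures = zeros(numFeatures, numImages, numPooled, numPooled);
% for r = 1:numPooled
%     for c = 1:numPooled
%         region = convolvedFeatures(:, :, (r-1)*poolDim+1 : r*poolDim, (c-1)*poolDim+1 : c*poolDim);
%         pooledFeatures(:, :, r, c) = mean(mean(region, 4), 3);   % average each poolDim*poolDim window
%     end
% end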

%%%%% STEP 2d: Checking your pooling
%%% To ensure that you have implemented pooling, we will use your pooling
%%% function to pool over a test matrix and check the results.
%
% testMatrix = reshape(1:64, 8, 8);

% expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
%                   mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8)))];

%%
% testMatrix = reshape(testMatrix, 1, 1, 8, 8);
%% pooledFeatures = squeeze(cnnPool(4, testMatrix));
%%
% if ~isequal(pooledFeatures, expectedMatrix)
%     disp('Pooling incorrect');
%     disp('Expected');
%     disp(expectedMatrix);
%     disp('Got');
%     disp(pooledFeatures);
% else
%     disp('Congratulations! Your pooling code passed the test.');
% end

%%======================================================================

%% STEP 3: Convolve and pool with the dataset
% In this step, you will convolve each of the features you learned with
% the full large images to obtain the convolved features. You will then
% pool the convolved features to obtain the pooled features for
% classification.
% Because the convolved features matrix is very large, we will do the
% convolution and pooling 50 features at a time to avoid running out of
% memory. Reduce this number if necessary.

stepSize = 50;
% The 400 learned kernels (features) are split into groups of 50: one group is
% convolved with the large images, then the next group, and so on.
% Processing the kernels in groups reduces the demand on computer resources:
% with less memory available, put fewer kernels in each group; with more memory,
% use larger groups. For convenience, stepSize is usually chosen so that it
% divides the total number of kernels (400) exactly (a sketch of this grouping
% loop is given below, after the pooled-feature arrays are allocated).
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

% TrainNum = 80;
% Trainimg = load('stlTrainSubset.mat');  % loads numTrainImages, trainImages, trainLabels
% numTrainImages = TrainNum;
% trainImages = Trainimg.trainImages(:, :, :, 1:numTrainImages);
% trainLabels = Trainimg.trainLabels(1:numTrainImages);

load stlTrainSubset.mat   % loads numTrainImages, trainImages, trainLabels
% trainImages has size [64, 64, 3, 100], where 100 is the number of images (it can also be 2000)
% Testimg = load('stlTestSubset.mat')  % loads numTestImages, testImages, testLabels
load stlTestSubset.mat

% numTestImages = 40;
% testImages = Trainimg.trainImages(:, :, :, numTrainImages+1 : numTrainImages+numTestImages);
% testLabels = Trainimg.trainLabels(numTrainImages+1 : numTrainImages+numTestImages);
% clear Trainimg;

% dimensions of the pooled features for the training and test images

pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim));
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim));
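
% A sketch of the grouping described in STEP 3: the loop below fills
% pooledFeaturesTrain; pooledFeaturesTest would be handled the same way with
% testImages (an illustration of the technique, not necessarily the exercise's
% exact code):
% for convPart = 1:(hiddenSize / stepSize)
%     featureStart = (convPart - 1) * stepSize + 1;
%     featureEnd   = convPart * stepSize;
%     Wt = W(featureStart:featureEnd, :);    % this group's stepSize kernels
%     bt = b(featureStart:featureEnd);
%     convolvedFeaturesThis = cnnConvolve(patchDim, stepSize, trainImages, Wt, bt, ZCAWhite, meanPatch);
%     pooledFeaturesThis    = cnnPool(poolDim, convolvedFeaturesThis);
%     pooledFeaturesTrain(featureStart:featureEnd, :, :, :) = pooledFeaturesThis;
%     clear convolvedFeaturesThis pooledFeaturesThis;       % free memory before the next group
% end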
