
CNN MATLAB Code for Large Images (0628)

Convolutional neural network processing modules for large images. Contents:

%= cnnExercise.m
%= cnnConvolve.m
%= cnnPool.m
%= data_process.m
%= feedForwardAutoencoder.m
%= displayColorNetwork.m
%= softmaxCost.m
%= softmaxPredict.m
%= softmaxTrain.m

%= cnnExercise.m
% This part is taken from the "Working with Large Images" exercise of the
% UFLDL tutorial. The image data and the modules shared across UFLDL
% exercises are omitted here; those files must be obtained separately
% from the UFLDL site.

% CS294A/CS294W Convolutional Neural Networks Exercise
%
%  Instructions
%  ------------
%  This file contains code that helps you get started on the
%  convolutional neural networks exercise. In this exercise, you will
%  only need to modify cnnConvolve.m and cnnPool.m. You will not need
%  to modify this file.

%%======================================================================
%% STEP 0: Initialization
%  Here we initialize some parameters used for the exercise.

clear all; clc; close all;

imageDim = 64;          % image dimension; the large images are 64*64
imageChannels = 3;      % number of channels (rgb, so 3)
patchDim = 8;           % patch dimension; the sampled patches are 8*8
% numPatches = 50000;   % number of patches sampled from the large images
numPatches = 1500;      % number of patches

visibleSize = patchDim * patchDim * imageChannels;  % number of input units = 8*8*3
outputSize = visibleSize;   % number of output units (same as the input layer)
hiddenSize = 400;           % number of hidden units, i.e. the number of features
                            % to extract from the large images; this number is
                            % smaller than 64*64 = 4096

% The number of features equals the number of convolution kernels; for a
% 2-D image a kernel is a 2-D filter of size patchDim*patchDim, so the
% hidden layer is sparse relative to the dimensionality of the large
% images. The kernel coefficients appear in the network as the weights
% and bias of a patchDim*patchDim 2-D filter.
% We build a network with patchDim*patchDim(*imageChannels) input units,
% hiddenSize hidden units, and as many output units as input units, and
% train it with a sparse autoencoder. Its inputs are patchDim*patchDim
% patches sampled from the large images; the output layer reproduces
% images of the same size and pixels. Training yields hiddenSize features
% of the large-image set, i.e. the convolution kernels (filters): each
% kernel's coefficients are the connection weights (W) and bias (b)
% between the input units and one hidden unit, and each 2-D kernel has
% size patchDim*patchDim.
%%======================================================================
% Each of the hiddenSize feature kernels is convolved with every large
% image, i.e. the image is filtered by each kernel to obtain one feature
% map per filter, which makes subsequent softmax classification easier.
%%======================================================================
% Convolution implements the input-to-hidden computation of the deep
% network, producing the feature map of a large image under the
% corresponding kernel. Each kernel has one output node and one bias;
% convolving one kernel over all pixels of a large image makes its output
% node emit the corresponding feature map. A convolutional network avoids
% the heavy input-to-hidden computation of a fully connected network: by
% using a locally connected structure it restricts the connections
% between hidden units and input units, so each hidden unit is connected
% only to a small neighbouring region of the input image.
%%======================================================================

epsilon = 0.1;          % epsilon for ZCA whitening
poolDim = 19;           % dimension of the pooling region
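A quick sanity check on the arithmetic implied by the parameters above (plain Python rather than MATLAB, since it is only a side calculation and not part of cnnExercise.m): the patch input size, the feature-map size after a "valid" convolution, and the pooled grid size.

```python
# Shape arithmetic for the parameters defined above.
imageDim, imageChannels, patchDim = 64, 3, 8
hiddenSize, poolDim = 400, 19

visibleSize = patchDim * patchDim * imageChannels  # input units per patch
convDim = imageDim - patchDim + 1                  # feature-map side after valid convolution
pooledDim = convDim // poolDim                     # floor((imageDim - patchDim + 1) / poolDim)

print(visibleSize, convDim, pooledDim)  # 192 57 3
```

So each of the 400 kernels turns a 64*64 image into a 57*57 feature map, which pooling reduces to a 3*3 grid.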

%%======================================================================
%% STEP 1: Train a sparse autoencoder (with a linear decoder) to learn
%  features from color patches. If you have completed the linear decoder
%  exercise, use the features that you have obtained from that exercise,
%  loading them into optTheta. Recall that we have to keep around the
%  parameters used in whitening (i.e., the ZCA whitening matrix and the
%  meanPatch)

% --------------------------- YOUR CODE HERE --------------------------
% Train the sparse autoencoder and fill the following variables with
% the optimal parameters:

optTheta = zeros(2*hiddenSize*visibleSize + hiddenSize + visibleSize, 1);
% hiddenSize is the number of hidden-layer biases; visibleSize is the
% number of output-layer biases
ZCAWhite = zeros(visibleSize, visibleSize);   % ZCA whitening matrix
meanPatch = zeros(visibleSize, 1);   % mean over all pixel values of the patches

% FeaturesStruct = load('STL10Features.mat');  % is this the only way to read it?
% optTheta = FeaturesStruct.optTheta;
% ZCAWhite = FeaturesStruct.ZCAWhite;
% meanPatch = FeaturesStruct.meanPatch;
load STL10Features.mat
% --------------------------------------------------------------------

% Display and check to see that the features look good
W = reshape(optTheta(1:visibleSize * hiddenSize), hiddenSize, visibleSize);
% visibleSize*hiddenSize is the number of input-to-hidden connection weights
b = optTheta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize);
% hiddenSize is the number of hidden-layer biases
% sizeW = size(W)
% sizeb = size(b)
figure(1)
displayColorNetwork( (W*ZCAWhite)');
% W holds the coefficients of the convolution kernels that filter the
% large images; multiplying W by ZCAWhite folds the whitening step into
% the filtering of the large images.
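The identity behind that last comment can be checked with toy numbers. The sketch below (plain Python, with hypothetical 2*2 matrices standing in for W, ZCAWhite, and meanPatch) verifies that W*(ZCAWhite*(x - meanPatch)) + b equals (W*ZCAWhite)*x + (b - W*ZCAWhite*meanPatch), which is why the pre-multiplied filters W*ZCAWhite can be convolved directly with raw patches, given a correspondingly adjusted bias.

```python
# Toy check that W @ (Z @ (x - m)) + b == (W @ Z) @ x + (b - (W @ Z) @ m).
# All numbers are hypothetical; in the exercise, WT = W*ZCAWhite plays this role.
def matvec(A, v):
    # Multiply a matrix (list of rows) by a vector.
    return [sum(a * x for a, x in zip(row, v)) for row in A]

W = [[1.0, 2.0], [3.0, -1.0]]   # feature weights (hiddenSize x visibleSize)
Z = [[0.5, 0.1], [0.1, 0.5]]    # ZCA whitening matrix (symmetric, as ZCA is)
b = [0.2, -0.3]                 # hidden biases
m = [4.0, 6.0]                  # meanPatch
x = [7.0, 9.0]                  # a raw (unwhitened) patch

# Path 1: preprocess the patch, then apply W and b.
white = matvec(Z, [xi - mi for xi, mi in zip(x, m)])
left = [wi + bi for wi, bi in zip(matvec(W, white), b)]

# Path 2: fold the whitening into the filter and adjust the bias.
WZ = [matvec(list(zip(*Z)), row) for row in W]       # W @ Z (row_i times Z)
b_adj = [bi - wm for bi, wm in zip(b, matvec(WZ, m))]
right = [wi + bi for wi, bi in zip(matvec(WZ, x), b_adj)]

print(all(abs(l - r) < 1e-9 for l, r in zip(left, right)))  # True
```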

%%======================================================================
%% STEP 2: Implement and test convolution and pooling
%  In this step, you will implement convolution and pooling, and test them
%  on a small part of the data set to ensure that you have implemented
%  these two functions correctly. In the next step, you will actually
%  convolve and pool the features with the STL10 images.

%% STEP 2a: Implement convolution
%  Implement convolution in the function cnnConvolve in cnnConvolve.m
%  Note that we have to preprocess the images in the exact same way
%  we preprocessed the patches before we can obtain the feature
%  activations.

% Imag = load('stlTrainSubset.mat');  % loads numTrainImages, trainImages, trainLabels
% numimg = 10;
% numTrainImages = numimg;
% trainImages = Imag.trainImages(:,:,:,1:numimg);
% trainLabels = Imag.trainLabels(1:numimg);
%
% figure(2)
% imshow(Imag.trainImages(:,:,:,20))

%% Use only the first 8 images for testing
% convImages = trainImages(:, :, :, 1:8);

% NOTE: Implement cnnConvolve in cnnConvolve.m first!
% convolvedFeatures = cnnConvolve(patchDim, hiddenSize, convImages, W, b, ZCAWhite, meanPatch);
% patchDim is the size of the sampled patches; hiddenSize is the number of features
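cnnConvolve.m itself is not included in this excerpt. As a sketch of the core operation the call above performs, here is a toy single-channel "valid" filtering followed by the sigmoid (plain Python; the real code also handles the three colour channels and the whitening, and uses MATLAB's conv2, which flips the kernel before sliding it, whereas the toy below slides the filter unflipped):

```python
import math

def valid_convolve(image, kernel, bias):
    # Slide the kernel over every valid position of a square image and
    # return sigmoid activations: one toy feature map.
    n, k = len(image), len(kernel)
    out = []
    for r in range(n - k + 1):
        row = []
        for c in range(n - k + 1):
            s = sum(image[r + i][c + j] * kernel[i][j]
                    for i in range(k) for j in range(k))
            row.append(1.0 / (1.0 + math.exp(-(s + bias))))
        out.append(row)
    return out

# Toy 4x4 image and 2x2 kernel -> a 3x3 feature map,
# mirroring (imageDim - patchDim + 1) in each direction.
img = [[1, 0, 0, 1],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [1, 0, 0, 1]]
ker = [[1, 0],
       [0, 1]]
fmap = valid_convolve(img, ker, 0.0)
print(len(fmap), len(fmap[0]))  # 3 3
```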

%% STEP 2b: Checking your convolution
%  To ensure that you have convolved the features correctly, we have
%  provided some code to compare the results of your convolution with
%  activations from the sparse autoencoder

% For 1000 random points
% for i = 1:1000
%     featureNum = randi([1, hiddenSize]);
%     imageNum = randi([1, 8]);
%     imageRow = randi([1, imageDim - patchDim + 1]);
%     imageCol = randi([1, imageDim - patchDim + 1]);
%
%     patch = convImages(imageRow:imageRow + patchDim - 1, imageCol:imageCol + patchDim - 1, :, imageNum);
%     % all RGB values of one 8*8 patch; imageRow:imageRow+patchDim-1 are
%     % the rows and imageCol:imageCol+patchDim-1 the columns
%     patch = patch(:);
%     patch = patch - meanPatch;   % meanPatch is the mean of the 8*8*3 pixels of the sampled patches
%     patch = ZCAWhite * patch;
%
%     features = feedForwardAutoencoder(optTheta, hiddenSize, visibleSize, patch);
%     disp('feedForwardAutoencoder results are:')
%     size_feedForwardAutoencoder = size(features)
%     pause
%
%     % compare the features obtained in the linear decoder exercise with
%     % the features obtained by convolution
%     if abs(features(featureNum, 1) - convolvedFeatures(featureNum, imageNum, imageRow, imageCol)) > 1e-9
%         fprintf('Convolved feature does not match activation from autoencoder\n');
%         fprintf('Feature Number    : %d\n', featureNum);
%         fprintf('Image Number      : %d\n', imageNum);
%         fprintf('Image Row         : %d\n', imageRow);
%         fprintf('Image Column      : %d\n', imageCol);
%         fprintf('Convolved feature : %0.5f\n', convolvedFeatures(featureNum, imageNum, imageRow, imageCol));
%         fprintf('Sparse AE feature : %0.5f\n', features(featureNum, 1));
%         error('Convolved feature does not match activation from autoencoder');
%     end
% end
%
% disp('Congratulations! Your convolution code passed the test.');

%% STEP 2c: Implement pooling
%  Implement pooling in the function cnnPool in cnnPool.m

% NOTE: Implement cnnPool in cnnPool.m first!
% size_convolvedFeatures = size(convolvedFeatures)
% pooledFeatures = cnnPool(poolDim, convolvedFeatures);
% size(pooledFeatures) = [numFeatures, numLargeImages, pooledRows, pooledCols]
% poolDim = 19;
% size(convolvedFeatures) = [numFeatures, numLargeImages, convolvedRows, convolvedCols]
% size_pooledFeatures = size(pooledFeatures)
% pause

%% STEP 2d: Checking your pooling
%  To ensure that you have implemented pooling, we will use your pooling
%  function to pool over a test matrix and check the results.

% testMatrix = reshape(1:64, 8, 8);
% expectedMatrix = [mean(mean(testMatrix(1:4, 1:4))) mean(mean(testMatrix(1:4, 5:8))); ...
%                   mean(mean(testMatrix(5:8, 1:4))) mean(mean(testMatrix(5:8, 5:8))); ];
%
% testMatrix = reshape(testMatrix, 1, 1, 8, 8);
% pooledFeatures = squeeze(cnnPool(4, testMatrix));
%
% if ~isequal(pooledFeatures, expectedMatrix)
%     disp('Pooling incorrect');
%     disp('Expected');
%     disp(expectedMatrix);
%     disp('Got');
%     disp(pooledFeatures);
% else
%     disp('Congratulations! Your pooling code passed the test.');
% end
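The same check can be mirrored outside MATLAB. A plain-Python sketch of mean pooling over disjoint poolDim*poolDim blocks, applied to the same test matrix (built column-major, as MATLAB's reshape(1:64, 8, 8) fills it) with a pooling dimension of 4:

```python
def mean_pool(mat, pool_dim):
    # Average each disjoint pool_dim x pool_dim block of a square matrix.
    n = len(mat)
    out = []
    for r in range(0, n, pool_dim):
        row = []
        for c in range(0, n, pool_dim):
            block = [mat[i][j] for i in range(r, r + pool_dim)
                               for j in range(c, c + pool_dim)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# reshape(1:64, 8, 8) fills column-major: entry (i, j) = i + 8*j + 1 (0-based i, j)
test = [[i + 8 * j + 1 for j in range(8)] for i in range(8)]
print(mean_pool(test, 4))  # [[14.5, 46.5], [18.5, 50.5]]
```

The printed values match the expectedMatrix computed by the MATLAB check above.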

%%======================================================================
%% STEP 3: Convolve and pool with the dataset
%  In this step, you will convolve each of the features you learned with
%  the full large images to obtain the convolved features. You will then
%  pool the convolved features to obtain the pooled features for
%  classification.
%
%  Because the convolved features matrix is very large, we will do the
%  convolution and pooling 50 features at a time to avoid running out of
%  memory. Reduce this number if necessary.

stepSize = 50;
% The 400 kernels (features) extracted by the network are split into
% groups of 50: one group is convolved with the large images before the
% next group is processed. Working in groups reduces the demand on
% computer resources; use fewer kernels per group on machines with little
% memory and more per group when resources allow. For convenience,
% stepSize is usually chosen so that it divides the total number of
% kernels (400) exactly.
assert(mod(hiddenSize, stepSize) == 0, 'stepSize should divide hiddenSize');

% TrainNum = 80;
% Trainimg = load('stlTrainSubset.mat');  % loads numTrainImages, trainImages, trainLabels
% numTrainImages = TrainNum;
% trainImages = Trainimg.trainImages(:,:,:,1:numTrainImages);
% trainLabels = Trainimg.trainLabels(1:numTrainImages);
load stlTrainSubset.mat   % loads numTrainImages, trainImages, trainLabels
% trainImages has size 64 x 64 x 3 x 100, where 100 is the number of
% images (it may also be 2000)

% Testimg = load('stlTestSubset.mat')  % loads numTestImages, testImages, testLabels
load stlTestSubset.mat
% numTestImages = 40;
% testImages = Trainimg.trainImages(:,:,:,numTrainImages+1:numTrainImages+numTestImages);
% testLabels = Trainimg.trainLabels(numTrainImages+1:numTrainImages+numTestImages);
% clear Trainimg;

% dimensions of the pooled training and test images
pooledFeaturesTrain = zeros(hiddenSize, numTrainImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
pooledFeaturesTest = zeros(hiddenSize, numTestImages, ...
    floor((imageDim - patchDim + 1) / poolDim), ...
    floor((imageDim - patchDim + 1) / poolDim) );
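The grouping that stepSize controls can be sketched as index slices (plain Python; the actual convolution and pooling happen in cnnConvolve.m and cnnPool.m, so only the bookkeeping is shown):

```python
hiddenSize, stepSize = 400, 50
assert hiddenSize % stepSize == 0, 'stepSize should divide hiddenSize'

# Process the 400 kernels 50 at a time: each pass would hand this slice of
# W and b to cnnConvolve/cnnPool before moving on to the next slice.
groups = [(start, start + stepSize) for start in range(0, hiddenSize, stepSize)]

print(len(groups))   # 8 passes
print(groups[0])     # (0, 50)
print(groups[-1])    # (350, 400)
```

Halving stepSize doubles the number of passes but also halves the size of the convolved-features array held in memory at once, which is the trade-off the comments above describe.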
