ARTIFICIAL NEURAL NETWORK FOR LOAD FORECASTING IN SMART GRID

HAO-TIAN ZHANG, FANG-YUAN XU, LONG ZHOU
Energy System Group, City University London, Northampton Square, London, UK
E-MAIL: , , long.zhou.

Abstract:

Developing the smart grid is an irresistible trend in the improvement of electric power systems: it applies a large number of new technologies in power generation, transmission, distribution and utilization to optimize the power configuration and save energy. As one of the key links in making a grid smarter, load forecasting plays a significant role in power system planning and operation. Many approaches, such as expert systems, grey system theory and artificial neural networks (ANNs), are employed in load forecasting. This paper illustrates the application of an ANN to load forecasting based on the practical situation in Ontario Province, Canada.

Keywords:

Load forecast; artificial neural network; back propagation training; Matlab

1. Introduction

Load forecasting is vitally beneficial to the power system industry in many aspects. As an essential part of the smart grid, highly accurate load forecasting is required to give
the exact information about power purchasing and generation in the electricity market, to prevent energy from being wasted or abused, and to keep the electricity price within a reasonable range. Factors such as seasonal differences, climate changes, weekends and holidays, disasters and political events, operation scenarios of the power plants, and faults occurring on the network all lead to changes in load demand and generation.

Since 1990, artificial neural networks (ANNs) have been studied for application to load forecasting. "ANNs are massively parallel networks of simple processing elements designed to emulate the functions and structure of the brain to solve very complex problems." Owing to these characteristics, ANNs are among the most competent methods for practical tasks such as load forecasting. This paper concerns the behaviour of artificial neural networks in load forecasting. An analysis of the factors affecting load demand in Ontario, Canada is made in order to give an effective way of forecasting the load in Ontario.

2. Back Propagation Network

2.1. Background

Because of their outstanding statistical and modelling capabilities, ANNs can deal with non-linear and complex problems in terms of classification or forecasting. As the problem is defined here, the relationship between the input and the target is non-linear and very complicated, so an ANN is an appropriate method for forecasting the load. To apply an ANN to load forecasting, a network type has to be selected, such as feed-forward back propagation, layer recurrent, or feed-forward time-delay. To date, back propagation is widely used in neural networks; it is a feed-forward network with continuously valued functions and supervised learning. It can match the input data to the corresponding output in such a way as to approximate a function that achieves the expected goal on new data of the same kind as the input.

2.2. Architecture of the back propagation algorithm

Figure 1 shows a single neuron
model of the back propagation algorithm. Generally, the output is a function of the sum of the bias and the weights multiplied by the inputs. The activation function can be of any kind, although each kind generates a different output. Because it is a feed-forward network, in general at least one hidden layer before the output layer is needed. A three-layer network is selected as the architecture, because this kind of architecture can approximate any function with a finite number of discontinuities. The three-layer architecture is shown in Figure 2 below.

Figure 1. Neuron model of the back propagation algorithm

Figure 2. Architecture of the three-layer feed-forward network

Basically, three activation functions are applied in the back propagation algorithm, namely the log-sigmoid, the tan-sigmoid, and the linear transfer function. The output range of each function is illustrated in Figure 3 below.

Figure 3. Activation functions applied in back propagation: (a) log-sigmoid (b) tan-sigmoid (c) linear function

2.3. Training function selection

The training algorithms employed are based on the back propagation approach, and each function is integrated in the Matlab neural network
toolbox.

TABLE. TRAINING FUNCTIONS IN MATLAB'S NN TOOLBOX

Function name | Algorithm
trainb        | Batch training with weight & bias learning rules
trainbfg      | BFGS quasi-Newton backpropagation
trainbr       | Bayesian regularization
trainc        | Cyclical order incremental training w/learning functions
traincgb      | Powell-Beale conjugate gradient backpropagation
traincgf      | Fletcher-Powell conjugate gradient backpropagation
traincgp      | Polak-Ribiere conjugate gradient backpropagation
traingd       | Gradient descent backpropagation
traingdm      | Gradient descent with momentum backpropagation
traingda      | Gradient descent with adaptive lr backpropagation
traingdx      | Gradient descent w/momentum & adaptive lr backpropagation
trainlm       | Levenberg-Marquardt backpropagation
trainoss      | One step secant backpropagation
trainr        | Random order incremental training w/learning functions
trainrp       | Resilient backpropagation (Rprop)
trains        | Sequential order incremental training w/learning functions
trainscg      | Scaled conjugate gradient backpropagation

3. Training Procedures

3.1. Background analysis

The neural network training is based on the load demand and weather conditions in Ontario Province, Canada, which is located in the south of Canada. According to the weather conditions, the region of Ontario can be divided into three parts: southwest, central and east, and north. The population is gathered around the southeastern part of the province, which includes two of the largest cities of Canada, Toronto and Ottawa.

3.2. Data Acquisition

The required training data can be divided into two parts: input vectors and output targets. For load forecasting, the input vectors include all the information on the factors affecting load demand, such as weather information, holidays or working days, faults occurring in the network and so on. The output targets are the real-time load scenarios, i.e. the demand present at the same time as the input vectors change.

Owing to conditional restrictions, this study considers only the weather information and a logical flag for weekdays versus weekends as the factors affecting the load status. The factors affecting the load considered in this paper are listed below:

(1) Temperature (°C)
(2) Dew point temperature (°C)
(3) Relative humidity (%)
(4) Wind speed (km/h)
(5) Wind direction (10)
(6) Visibility (km)
(7) Atmospheric pressure (kPa)
(8) Logical adjustment of weekday or weekend

According to
the information gathered above, the weather information recorded in Toronto is chosen to represent the whole of Ontario Province for data acquisition. The data was gathered hourly from the historical weather records kept by the weather stations. Load demand data also needs to be gathered hourly and correspondingly. In this paper, two years of weather data and load data are collected to train and test the created network.

3.3. Data Normalization

To prevent the simulated neurons from being driven too far into saturation, all of the gathered data needs to be normalized after acquisition.
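The paper performs this step in Matlab; as a language-neutral sketch of the per-unit scaling used here (the column data below is hypothetical), the following NumPy snippet divides each feature column by its maximum absolute value so that every normalized value falls in [-1, +1].

```python
import numpy as np

def normalize_per_unit(data: np.ndarray):
    """Scale each column (feature) by its maximum absolute value.

    Returns the normalized data and the per-column scale factors,
    which are needed later to de-normalize the network's output.
    """
    scale = np.max(np.abs(data), axis=0).astype(float)
    scale[scale == 0] = 1.0          # guard against all-zero columns
    return data / scale, scale

# Hypothetical hourly rows: [temperature, dew point, humidity, wind speed]
raw = np.array([
    [12.0,  8.0, 70.0, 20.0],
    [-5.0, -9.0, 55.0, 35.0],
    [25.0, 18.0, 90.0,  5.0],
])
norm, scale = normalize_per_unit(raw)
assert np.all(np.abs(norm) <= 1.0)   # every value now lies in [-1, +1]

# The weekday/weekend flag needs no scaling: weekday -> 1, weekend -> 0
```

The scale factors are kept so that forecast outputs can be mapped back to physical units after simulation.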

Like a per-unit system, each input and target value is divided by the maximum absolute value of the corresponding factor. Each normalized value then lies in the range between -1 and +1, so that the ANN can handle the data easily. In addition, weekdays are represented as 1 and weekends as 0.

3.4. Neural network creation

The toolbox in Matlab is used for training and simulating the neural network. The layout of the neural network consists of the number of neurons and layers, the connectivity of the layers, the activation functions, the error goal and so on. The framework and parameters of the network are set according to the practical situation, and the architecture of the ANN can be selected to achieve an optimized result. Matlab is one of the best simulation tools, providing visual windows. The three-layer architecture shown in Figure 2 above has been chosen for the simulation; it is adequate to approximate an arbitrary function if the hidden layer has sufficient nodes. Because the practical input values lie between -1 and +1, the transfer function of the first layer is set to tansig, a hyperbolic tangent sigmoid transfer function. The transfer function of the output layer is set to a linear function, which calculates the layer's output from its net input. The linear output transfer function has one advantage: because linear output neurons allow the output to take on any value, there is no difficulty in finding the differences between output and target.

The next step is the selection of the neurons and the training function. Generally, trainbr and trainlm are the best choices among all the training functions in the Matlab toolbox. Trainlm (Levenberg-Marquardt algorithm) is the fastest training algorithm for networks of moderate size. However, a significant problem is that it needs to store certain matrices which can be large for some problems. When the training set is large, the trainlm algorithm reduces the memory used and always computes the approximate Hessian matrix with n x n dimensions. Another drawback of trainlm is that over-fitting occurs when the number of neurons is too large; basically, the number of neurons should not be too large when the trainlm algorithm is employed in the network. Trainbr (Bayesian regularization)
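The architecture described above can be sketched outside the Matlab toolbox as well. The following NumPy example is a minimal illustration, not the paper's code: it keeps the three-layer structure with a tansig (tanh) hidden layer and a linear output layer, but replaces trainlm's Levenberg-Marquardt update with plain gradient descent on a toy target for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data, already normalized to [-1, +1] as in Section 3.3
X = np.linspace(-1, 1, 64).reshape(-1, 1)      # inputs, shape (N, 1)
T = np.sin(np.pi * X)                           # targets in [-1, +1]

# Three-layer feed-forward network: tanh hidden layer, linear output layer
H = 10                                          # hidden neurons
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

lr = 0.1
for _ in range(20000):
    # Forward pass: a = tansig(X W1 + b1), y = purelin(a W2 + b2)
    a = np.tanh(X @ W1 + b1)
    y = a @ W2 + b2
    err = y - T                                 # (N, 1)

    # Backward pass: gradients of the mean squared error
    n = X.shape[0]
    gW2 = a.T @ err / n
    gb2 = err.mean(axis=0)
    da = (err @ W2.T) * (1 - a**2)              # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ da / n
    gb1 = da.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - T) ** 2))
```

In the paper's Matlab setting, this is the same structure the toolbox builds with a tansig hidden layer and a purelin output layer; trainlm then replaces the plain gradient step used here with Levenberg-Marquardt updates, which converge much faster for networks of moderate size.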
