
非线性函数中英文对照外文翻译文献.docx (foreign-literature translation on nonlinear functions, Chinese-English parallel text)

A new constructing auxiliary function method for global optimization

Y.-J. Wang, J.-S. Zhang, Mathematical and Computer Modelling 47 (2008) 1396-1410

Nonlinear function optimization problems which possess many local minimizers in their search spaces are widespread in applications such as engineering design, molecular biology, and neural network training. Although the existing traditional methods, such as the steepest descent method, Newton's method, quasi-Newton methods, the trust region method, and the conjugate gradient method, converge rapidly and can find solutions with high precision for continuously differentiable functions, they rely heavily on the initial point, and the quality of the final global solution is hard to guarantee. The existing difficulty in global optimization prevents many subjects from developing further. Therefore, global optimization generally becomes a challenging computational task for researchers.

Generally speaking, the difficulty in designing a global optimization algorithm is due to two reasons: one is how to determine that the obtained minimum is a global one (when the global minimum is not known in advance), and the other is how to jump from the obtained minimum to a better one. For the first problem, a stopping rule named the Bayesian termination condition has been reported. Many recently proposed algorithms aim at dealing with the second problem. Generally, these methods can be classed into two main categories, namely (i) deterministic methods and (ii) stochastic methods. The stochastic methods are based on biology or statistical physics and jump from the local minimum by using a probability-based approach. These methods include the genetic algorithm (GA), the simulated annealing method (SA), and the particle swarm optimization method (PSO). Although these methods have their uses, they often converge slowly, and finding a solution with higher precision is time consuming. They are, however, easier to implement and to apply to combinatorial optimization problems. Deterministic methods such as the filled function method, the tunneling method, etc., converge more rapidly and can often find a solution with a higher precision. These methods often rely on modifying the objective function into a function with "fewer" or "lower" local minimizers than the original objective function, and then design algorithms to minimize the modified function so as to escape from the found local minimum to a better one.

Among the referenced deterministic algorithms, the diffusion equation method, the effective energy method, and the integral transform scheme approximate the coarse structure of the original objective function by a set of smoothed functions with "fewer" minimizers. These methods modify the objective function via integration of the original objective function. Such integrations are too expensive to implement, and the final solution of the auxiliary function has to be traced back to the minimum of the original objective function, whereas the traced result may not be the true global minimum of the problem. The terminal repeller unconstrained sub-energy tunneling method, and the method hybridizing the gradient descent algorithm with dynamic tunneling, modify the objective function based on the stability theory of dynamic systems. These methods have to integrate a dynamic system, and the corresponding computation is time consuming, especially as the dimension of the objective function increases, since their better point is found by searching along each coordinate until termination.

The stretching function technique is an auxiliary function method which uses the information obtained in previous searches to stretch the objective function and help the algorithm escape from the local minimum more effectively. This technique has been incorporated into the PSO to improve its success rate of finding global minima. However, this hybrid algorithm is constructed on a stochastic method, which converges slowly and applies more easily to problems of lower dimension. The filled function method is another auxiliary function method; it modifies the objective function into a filled function and then finds better local minima gradually by optimizing the filled functions constructed on the obtained minima. The filled function method provides us with a good idea: to use local optimization techniques to solve global optimization problems. If the difficulty in estimating the parameters can be resolved and the designed filled functions can be applied to higher-dimensional functions, the filled function approaches in the literature will be promising. The tunneling method modifies the objective function so as to ensure that the next starting point, which has the same function value as the obtained minimum, lies away from the obtained one, and thus the probability of finding the global minimum is increased. A sequential conversation method (SCM) transforms the objective function into one which has no local minima or stationary points in the region where the function values are higher than the ones already obtained, except for the prefixed values. This method seems promising if the unwanted effect caused by the prefixed point can be excluded.

No matter whether the stretching function method, the already designed filled function methods, or the tunneling algorithm are used, they often rely on several key parameters which are difficult to estimate in advance in applications, such as the length of the intervals where the minimizers exist and the lower or upper bounds of the derivative of the objective function. Therefore, an auxiliary function method that is effective in theory can be difficult to implement in practice owing to the uncertainty of the parameters. An example with a one-dimensional function is shown as follows.

[Fig. 1. A filled function (left plot) and a stretching function (right plot) constructed at x = 4.60095 of the function defined in (3). A "Mexican hat" effect appears in the two plots.]

Figs. 1 and 2 illustrate that a "Mexican hat" effect appears in the auxiliary function methods (the filled function method and the stretching function method) at one local point x = 4.60095. The unwanted effect, namely that of introducing new local minima, is caused by improper parameter setting. The newly introduced local minima increase the complexity of the original problem and affect the global search of the algorithm. Therefore, an effective and efficient auxiliary function method whose parameters are easy to adjust is worth investigating.

Based on this, in this paper we give a simple two-stage function transformation method which converts the original objective function f(x) into an auxiliary function with rapidly descending convergence and a high ability to gradually find better solutions in more promising regions. The idea is very similar to that of the filled function method. Specifically, we first find a local minimum of the original objective function. Then the stretching function technique and an analogue of the filled function method are employed to execute a consecutive two-stage transformation of the objective function. The constructed function is always descending in the region where the original objective function values are higher than the one obtained in the first step, while a stationary point must exist in the better region. Next, we minimize the auxiliary function to find one of its stationary points (a better point of f(x) than the local minimizer obtained before), which then serves as the starting point for the next local optimization. We repeat the procedure until termination. In the new method the parameters are easy to set; for example, two constants can be prefixed for them, because the properties of the auxiliary function do not depend on varying these parameters, although two parameters are introduced into the auxiliary function. Numerical experiments on a set of standard test problems with dimensions up to 50, and comparisons with other methods, show that the new algorithm is more efficient.
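The escape-and-restart loop that auxiliary function methods share can be sketched in a few lines. The sketch below is our own illustration, not the paper's construction: the test function, the crude fixed-step local search, and the simplified one-stage stretching transformation (with hand-picked, assumed parameters gamma1, gamma2, mu) all stand in for the two-stage method described above.

```python
import math

def f(x):
    # Illustrative one-dimensional multimodal objective;
    # NOT the paper's test function (3).
    return x * x / 20.0 + 3.0 * math.sin(x)

def local_minimize(fun, x0, step=0.01, iters=20000):
    # Crude fixed-step descent standing in for any local optimizer.
    x = x0
    for _ in range(iters):
        if fun(x + step) < fun(x):
            x += step
        elif fun(x - step) < fun(x):
            x -= step
        else:
            break
    return x

def stretched(fun, x_bar, gamma1=1000.0, gamma2=1.0, mu=1e-6):
    # Simplified stretching transformation built at the found minimizer
    # x_bar: values below f(x_bar) are left untouched, values above it
    # are raised steeply, so a restarted search is pushed toward better
    # regions.  gamma1/gamma2/mu are assumed, hand-picked parameters.
    f_bar = fun(x_bar)
    def h(x):
        fx = fun(x)
        if fx <= f_bar:
            return fx                     # unchanged below current level
        g = fx + gamma1 * abs(x - x_bar)  # first stage: distance penalty
        return g + gamma2 / math.tanh(mu * (g - f_bar))  # second stage
    return h

def escape_step(fun, x_bar, lo=-20.0, hi=20.0, n=4001):
    # Minimize the auxiliary function on a coarse grid, then polish the
    # best grid point on the ORIGINAL objective.
    h = stretched(fun, x_bar)
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return local_minimize(fun, min(xs, key=h))

x1 = local_minimize(f, 9.0)   # a poor local minimizer near x = 10.6
x2 = escape_step(f, x1)       # a better minimizer near x = -1.5
print(x1, f(x1))
print(x2, f(x2))
```

In the methods surveyed above this escape step is iterated until no better stationary point is found; what distinguishes them is how the auxiliary function h is built and how sensitive it is to the chosen parameters, which is precisely the "Mexican hat" issue the paper addresses.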
