
Human exploration and development of space using XML database Space Wide Web Space Wide Web by adapt.docx

The TLB (Translation Lookaside Buffer) miss service has traditionally been concealed from operating systems, but some newer RISC architectures manage the TLB in software. Since software-managed TLBs provide flexibility to an operating system in page translation, they are considered an important factor in the design of microprocessors for open-system environments. However, software-managed TLBs suffer a larger miss penalty than hardware-managed TLBs, since they incur extra context-switching overhead that hardware-managed TLBs avoid. This paper introduces a new technique for reducing the miss penalty of software-managed TLBs by prefetching the necessary TLB entries before they are used; the technique is not inherently limited to specific applications. The key of this scheme is to perform prefetch operations that update TLB entries before their first access, so that TLB misses can be avoided. Using trace-driven simulation and a quantitative analysis, the proposed scheme is evaluated in terms of miss rate and total miss penalty. Our results show that the proposed scheme reduces the TLB miss rate by 6% to 77%, depending on TLB characteristics and page size. In addition, we find that reducing the miss rate through prefetching also reduces the total miss penalty and bus traffic in software-managed TLBs.

Most Prolog machines have been based on specialized architectures. Our goal is to start with a general-purpose architecture and determine a minimal set of extensions for high-performance Prolog execution. We have developed both the architecture and an optimizing compiler simultaneously, drawing on the results of previous implementations. We find that most Prolog-specific operations can be done satisfactorily in software; however, there is a crucial set of features that the architecture must support to achieve the best Prolog performance. In this paper, the costs and benefits of special architectural features and instructions are analyzed. In addition, we study the relationship between the strength of compiler optimization and the benefit of specialized hardware. We demonstrate that our base architecture can be extended to include explicit support for Prolog with a modest increase in chip area (13%), and yet attain a significant performance benefit (60-70%). Experiments using optimized code that approximates the output of future optimizing compilers indicate that special hardware support can still provide a performance benefit of 30-35%. The microprocessor described here, the VLSI-BAM, has been fabricated and incorporated into a working test system.

It is well known that software maintenance and evolution are expensive activities, in terms of both invested time and money. Reverse-engineering activities support the extraction of abstractions and views from a target system that help engineers maintain, evolve, and eventually re-engineer it. Two important tasks pursued by reverse engineering are design pattern detection and software architecture reconstruction. Their main objectives are the identification of the design patterns that have been used in the implementation of a system, and the generation of views at different levels of abstraction that let practitioners focus on the overall architecture of the system without worrying about the programming details with which it has been implemented. In this context we propose an Eclipse plug-in called MARPLE (Metrics and Architecture Reconstruction Plug-in for Eclipse), which supports both design pattern detection and software architecture reconstruction through basic elements and metrics that are mechanically extracted from the source code. The development of this platform is mainly based on the Eclipse framework and its plug-ins, as well as on different Java libraries for data access and for graph management and visualization. In this paper we focus our attention on the design pattern detection process.

Access to sufficient resources is a barrier to scientific progress for many researchers facing large computational problems. Gaining access to large-scale resources (i.e., university-wide or federally supported computer centers) can be difficult, given their limited availability, particular architectures, and request/review/approval cycles. Simultaneously, researchers often find themselves with access to workstations and older clusters overlooked by their owners in favor of newer hardware. Software to tie these resources into a coherent Grid, however, has been problematic. Here, we describe our experiences building a Grid computing system to conduct a large-scale simulation study using "borrowed" computing resources distributed over a wide area. Using standard software components, we have produced a Grid computing system capable of coupling several hundred processors spanning multiple continents and administrative domains. We believe that this system fills an important niche between a closely coupled local system and a heavyweight, highly customized wide-area system.

Article Outline
1. Introduction
2. Scientific context
3. Implementation
   3.1. System constraints
   3.2. General design of the grid system
   3.3. System requirements
      3.3.1. Operating system
      3.3.2. Client
      3.3.3. Server
      3.3.4. Account
   3.4. System processes
      3.4.1. User-level processes
      3.4.2. grid_client processes
      3.4.3. Project processes: executed once per invocation by a grid_client process
   3.5. Basic features
      3.5.1. Client-server communications
      3.5.2. Authentication
      3.5.3. Architecture-specific binaries
      3.5.4. Client-side security
      3.5.5. Server-side security
      3.5.6. System monitoring
      3.5.7. Error handling
4. Performance considerations
5. Future work
   5.1. Secure communications
   5.2. SQL transaction support
   5.3. A little language
   5.4. Validity checking
6. Conclusions
Acknowledgements
References
Vitae

This paper describes the architecture of the first implementation of the In-VIGO grid-computing system. The architecture is designed to support computational tools for engineering and science research In Virtual Information Grid Organizations (as opposed to in vivo or in vitro experimental research). A novel aspect of In-VIGO is the extensive use of virtualization technology, emerging standards for grid computing, and other Internet middleware. In the context of In-VIGO, virtualization denotes the ability of resources to support multiplexing, manifolding, and polymorphism (i.e., to simultaneously appear as multiple resources with possibly different functionalities). Virtualization technologies are available or emerging for all the resources needed to construct virtual grids, which would ideally inherit the above-mentioned properties. In particular, these technologies enable the creation of dynamic pools of virtual resources that can be aggregated on demand for application-specific, user-specific grid computing. This change in paradigm, from building grids out of physical resources to constructing virtual grids, has many advantages but also requires new thinking on how to architect, manage, and optimize the necessary middleware. This paper reviews the motivation for the In-VIGO approach, discusses the technologies used, describes an early architecture for In-VIGO that represents a first step towards the end goal of building virtual information grids, and reports on first experiences with the In-VIGO software under development.

Article Outline
1. Introduction
2. The In-VIGO concept
3. Virtualization in In-VIGO
   3.1. Virtual data and the virtual file system
   3.2. Virtual machines
   3.3. Virtual applications
   3.4. Virtual networks
   3.5. Virtual user interfaces
4. The architecture of In-VIGO
   4.1. The virtual application
   4.2. The virtual file system
   4.3. The resource manager
   4.4. The user interface manager
   4.5. The global information system
   4.6. The user manager
5. Implementation
6. Conclusions
Acknowledgements
References
Vitae

Leveraging cost matrix structure for hardware implementation of stereo disparity computation using dynamic programming (Original Research Article, Computer Vision and Image Understanding)

Article Outline
1. Introduction
2. Related works
   2.1. Design pattern detection
   2.2. Software architecture reconstruction
   2.3. Concluding remarks
3. An overview of MARPLE
   3.1. The information detector engine module
   3.2. The Joiner module
   3.3. The classifier module
   3.4. The software architecture reconstruction module
   3.5. Distributed MARPLE
4. Experimental results for DPD
   4.1. Results for the information detector engine module
   4.2. Results for the Joiner module
   4.3. Results for the classifier module
      4.3.1. Comparison with other tools
      4.3.2. Results on other design patterns
   4.4. Results for the SAR module
5. Conclusions and future works
Acknowledgements
References

A tool for design pattern detection and software architecture reconstruction (Original Research Article, Information Sciences)

A package of Linux scripts for the parallelization of Monte Carlo simulations (Original Research Article, Computer Physics Communications)

Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme for MC simulations that is free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Se
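The prefetching idea in the TLB abstract above can be illustrated with a small trace-driven simulation. This is a hedged sketch, not the paper's simulator: the fully associative LRU TLB model, the sequential trace, and the naive next-page prefetch heuristic are all assumptions made for illustration.

```python
from collections import OrderedDict

class TLB:
    """Tiny fully associative TLB model with LRU replacement (illustrative only)."""
    def __init__(self, entries):
        self.entries = entries
        self.map = OrderedDict()   # virtual page -> translation (dummy here)
        self.misses = 0

    def touch(self, page, is_prefetch=False):
        if page in self.map:
            self.map.move_to_end(page)      # refresh LRU position
            return
        if not is_prefetch:
            self.misses += 1                # only demand misses pay the penalty
        if len(self.map) >= self.entries:
            self.map.popitem(last=False)    # evict the least recently used entry
        self.map[page] = page               # install a fake translation

def run(trace, size, prefetch=False):
    tlb = TLB(size)
    for page in trace:
        if prefetch:
            tlb.touch(page + 1, is_prefetch=True)  # guess the next page early
        tlb.touch(page)
    return tlb.misses

# A sequential scan is the best case for next-page prefetching: without it,
# a cyclic scan of 64 pages through an 8-entry LRU TLB misses on every access.
trace = list(range(64)) * 4
base = run(trace, size=8)
pref = run(trace, size=8, prefetch=True)
print(base, pref)  # → 256 4
```

The contrast exaggerates the paper's 6-77% reductions because the trace is perfectly predictable; irregular access patterns would benefit far less from this particular heuristic.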
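The Prolog abstract above argues that most Prolog-specific operations can be done in software, at the cost of explicit tag tests that special hardware could hide. A hypothetical sketch of one such operation, dereferencing a chain of bound variables over tagged cells (the `(tag, value)` representation and heap layout are invented for illustration, not the VLSI-BAM's format):

```python
# Tagged heap cells: a 2-tuple (tag, value). An unbound variable is a
# reference cell that points to itself; a bound one points to its value.
REF, INT = "ref", "int"

def deref(heap, cell):
    """Follow reference cells until a non-reference (or unbound var) is reached."""
    tag, val = cell
    while tag == REF and heap[val] != cell:   # self-reference = unbound
        cell = heap[val]                      # software tag test on every hop
        tag, val = cell
    return cell

heap = [(INT, 42), (REF, 0), (REF, 1)]   # cell 2 -> cell 1 -> the integer 42
print(deref(heap, heap[2]))  # → ('int', 42)
```

Each loop iteration performs the kind of tag check and conditional branch that dedicated tag-checking hardware would fold into the memory access itself, which is where the paper's measured hardware benefit comes from.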
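MARPLE, per its abstract above, classifies design-pattern candidates from basic elements and metrics mechanically extracted from source code. The rule below is a much-simplified stand-in for that idea: it flags Singleton candidates from three boolean facts per class. The fact names and the rule are assumptions for illustration, not MARPLE's actual detectors, and the facts are hand-written rather than extracted from an AST.

```python
# Each class is summarized by "basic elements" a real tool would extract
# mechanically from the source code; here they are hand-written stand-ins.
facts = {
    "Logger":   {"private_ctor": True,  "static_self_field": True,  "static_accessor": True},
    "Request":  {"private_ctor": False, "static_self_field": False, "static_accessor": False},
    "Registry": {"private_ctor": True,  "static_self_field": True,  "static_accessor": False},
}

def singleton_candidates(facts):
    """Flag classes whose extracted elements match a naive Singleton rule."""
    return sorted(
        name for name, f in facts.items()
        if f["private_ctor"] and f["static_self_field"] and f["static_accessor"]
    )

print(singleton_candidates(facts))  # → ['Logger']
```

A real detector would also have to tolerate pattern variants (Registry above is a near-miss), which is why MARPLE feeds extracted elements to a classifier module rather than a fixed rule.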
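The In-VIGO abstract above defines virtualization as multiplexing, manifolding, and polymorphism. A minimal sketch of the multiplexing part, aggregating virtual resources on demand from a pool of physical hosts; the names `PhysicalMachine`, `Pool`, and the first-fit placement are invented for illustration and are not In-VIGO APIs:

```python
import itertools

class PhysicalMachine:
    """One physical host that can back several virtual resources."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.used = name, capacity, 0

class Pool:
    """Aggregates virtual machines on demand from a pool of physical hosts."""
    def __init__(self, hosts):
        self.hosts = hosts
        self._ids = itertools.count(1)

    def allocate(self, cpus):
        # Multiplexing: place the virtual resource on any host with room,
        # so one physical machine appears as several virtual ones.
        for h in self.hosts:
            if h.capacity - h.used >= cpus:
                h.used += cpus
                return f"vm{next(self._ids)}@{h.name}"
        raise RuntimeError("no capacity")

pool = Pool([PhysicalMachine("hostA", 4), PhysicalMachine("hostB", 2)])
vms = [pool.allocate(2) for _ in range(3)]
print(vms)  # → ['vm1@hostA', 'vm2@hostA', 'vm3@hostB']
```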
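The clonEasy abstract above describes a master that farms identical "clone" runs of an MC simulation out to several CPUs and combines their results. The sketch below mimics that scheme in-process rather than over a network of Linux machines; the pi-estimation job, the seed-per-clone convention, and the simple averaging are illustrative assumptions, not clonEasy's actual interface.

```python
import math
import random

def clone_run(seed, samples):
    """One independent 'clone': estimate pi by sampling the unit square."""
    rng = random.Random(seed)       # a distinct seed gives an independent stream
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return 4.0 * hits / samples

def master(seeds, samples_per_clone):
    """The 'master' launches the clones and averages their estimates."""
    estimates = [clone_run(s, samples_per_clone) for s in seeds]
    return sum(estimates) / len(estimates)

pi_hat = master(seeds=[11, 22, 33, 44], samples_per_clone=25_000)
print(abs(pi_hat - math.pi) < 0.05)  # well within the expected statistical error
```

Because the clones never communicate during a run, the scheme needs no source-code changes beyond seeding, which is exactly the drawback-free property the clonEasy abstract emphasizes.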
