Human exploration and development of space using XML database Space Wide Web

TLB (Translation Lookaside Buffer) miss handling has traditionally been concealed from operating systems, but some newer RISC architectures manage the TLB in software. Because software-managed TLBs give the operating system flexibility in page translation, they are considered an important factor in the design of microprocessors for open-system environments. However, software-managed TLBs suffer from a larger miss penalty than hardware-managed TLBs, since they incur extra context-switching overhead that hardware-managed TLBs avoid. This paper introduces a new technique for reducing the miss penalty of software-managed TLBs by prefetching the necessary TLB entries before they are used. The technique is not inherently limited to specific applications. The key to the scheme is to perform prefetch operations that update TLB entries before their first accesses, so that TLB misses are avoided. Using trace-driven simulation and a quantitative analysis, the proposed scheme is evaluated in terms of miss rate and total miss penalty. Our results show that the proposed scheme reduces the TLB miss rate by 6% to 77%, depending on TLB characteristics and page sizes. In addition, we find that reducing the miss rate through prefetching also reduces the total miss penalty and bus traffic in software-managed TLBs.

Most Prolog machines have been based on specialized architectures. Our goal is
to start with a general-purpose architecture and determine a minimal set of extensions for high-performance Prolog execution. We have developed the architecture and the optimizing compiler simultaneously, drawing on the results of previous implementations. We find that most Prolog-specific operations can be done satisfactorily in software; however, there is a crucial set of features that the architecture must support to achieve the best Prolog performance. In this paper, the costs and benefits of special architectural features and instructions are analyzed. In addition, we study the relationship between the strength of compiler optimization and the benefit of specialized hardware. We demonstrate that our base architecture can be extended to include explicit support for Prolog with a modest increase in chip area (13%), and yet attain a significant performance benefit (60-70%). Experiments using optimized code that approximates the output of future optimizing compilers indicate that special hardware support can still provide a performance benefit of 30-35%. The microprocessor described here, the VLSI-BAM, has been fabricated and incorporated into a working test system.

It is well known that
software maintenance and evolution are expensive activities, in terms of both invested time and money. Reverse engineering activities support the extraction of abstractions and views from a target system that help engineers maintain, evolve and eventually re-engineer it. Two important tasks pursued by reverse engineering are design pattern detection and software architecture reconstruction. Their main objectives are the identification of the design patterns that have been used in the implementation of a system, and the generation of views at different levels of abstraction that let practitioners focus on the overall architecture of the system without worrying about the programming details it is implemented with. In this context we propose an Eclipse plug-in called MARPLE (Metrics and Architecture Reconstruction Plug-in for Eclipse), which supports both design pattern detection and software architecture reconstruction through basic elements and metrics that are mechanically extracted from the source code. The development of this platform is mainly based on the Eclipse framework and plug-ins, as well as on various Java libraries for data access and for graph management and visualization. In this paper we focus on the design pattern detection process.

Access to sufficient resources is a barrier to scientific progress for many researchers facing large computational problems. Gaining access to large-scale resources (i.e., university-wide or federally supported computer centers) can be difficult, given their limited availability, particular architectures, and request/review/approval cycles. At the same time, researchers often find themselves with access to workstations and older
clusters overlooked by their owners in favor of newer hardware. Software to tie these resources into a coherent Grid, however, has been problematic. Here, we describe our experiences building a Grid computing system to conduct a large-scale simulation study using "borrowed" computing resources distributed over a wide area. Using standard software components, we have produced a Grid computing system capable of coupling several hundred processors spanning multiple continents and administrative domains. We believe that this system fills an important niche between a closely coupled local system and a heavyweight, highly customized wide-area system.

Article Outline
1. Introduction
2. Scientific context
3. Implementation
  3.1. System constraints
  3.2. General design of the grid system
  3.3. System requirements
    3.3.1. Operating system
    3.3.2. Client
    3.3.3. Server
    3.3.4. Account
  3.4. System processes
    3.4.1. User-level processes
    3.4.2. grid_client processes
    3.4.3. Project processes: executed once per invocation by a grid_client process
  3.5. Basic features
    3.5.1. Client-server communications
    3.5.2. Authentication
    3.5.3. Architecture-specific binaries
    3.5.4. Client-side security
    3.5.5. Server-side security
    3.5.6. System monitoring
    3.5.7. Error handling
4. Performance considerations
5. Future work
  5.1. Secure communications
  5.2. SQL transaction support
  5.3. A little language
  5.4. Validity checking
6. Conclusions
Acknowledgements
References
Vitae

This paper describes the architecture of the first implementation of the In-VIGO grid-computing system. The architecture is designed to support computational tools for engineering and science research In Virtual Information Grid Organizations (as opposed to in vivo or in vitro experimental research). A novel aspect of In-VIGO is the extensive use of virtualization technology, emerging standards for grid computing, and other Internet middleware. In the context of In-VIGO, virtualization denotes the ability of resources to support multiplexing, manifolding and polymorphism (i.e., to simultaneously appear as multiple resources, possibly with different functionalities). Virtualization technologies are available or emerging for all the resources needed to construct virtual grids, which would ideally inherit the above-mentioned properties. In particular, these technologies enable the creation of dynamic pools of virtual resources that can be aggregated on demand for application-specific, user-specific grid computing. This change in paradigm, from building grids out of physical resources to constructing virtual grids, has many advantages, but it also requires new thinking about how to architect, manage and optimize the necessary middleware. This paper reviews the motivation for
the In-VIGO approach, discusses the technologies used, describes an early architecture for In-VIGO that represents a first step towards the end goal of building virtual information grids, and reports on first experiences with the In-VIGO software under development.

Article Outline
1. Introduction
2. The In-VIGO concept
3. Virtualization in In-VIGO
  3.1. Virtual data and the virtual file system
  3.2. Virtual machines
  3.3. Virtual applications
  3.4. Virtual networks
  3.5. Virtual user interfaces
4. The architecture of In-VIGO
  4.1. The virtual application
  4.2. The virtual file system
  4.3. The resource manager
  4.4. The user interface manager
  4.5. The global information system
  4.6. The user manager
5. Implementation
6. Conclusions
Acknowledgements
References
Vitae

Leveraging cost matrix structure for hardware implementation of stereo disparity computation using dynamic programming (Original Research Article, Computer Vision and Image Understanding)

A tool for design pattern detection and software architecture reconstruction (Original Research Article, Information Sciences)

Article Outline
1. Introduction
2. Related works
  2.1. Design pattern detection
  2.2. Software architecture reconstruction
  2.3. Concluding remarks
3. An overview on MARPLE
  3.1. The information detector engine module
  3.2. The Joiner module
  3.3. The classifier module
  3.4. The software architecture reconstruction module
  3.5. Distributed MARPLE
4. Experimental results for DPD
  4.1. Results for the information detector engine module
  4.2. Results for the Joiner module
  4.3. Results for the classifier module
    4.3.1. Comparison with other tools
    4.3.2. Results on other design patterns
  4.4. Results for the SAR module
5. Conclusions and future works
Acknowledgements
References

package of Linux scripts for the parallelization of Monte Carlo simulations (Original Research Article, Computer Physics Communications)

Despite the fact that fast computers are nowadays available at low cost, there are many situations where obtaining a reasonably low statistical uncertainty in a Monte Carlo (MC) simulation involves a prohibitively large amount of time. This limitation can be overcome by having recourse to parallel computing. Most tools designed to facilitate this approach require modification of the source code and the installation of additional software, which may be inconvenient for some users. We present a set of tools, named clonEasy, that implement a parallelization scheme for MC simulations free from these drawbacks. In clonEasy, which is designed to run under Linux, a set of "clone" CPUs is governed by a "master" computer by taking advantage of the capabilities of the Se
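Although the clonEasy abstract is cut off, the scheme it describes can be sketched in a few lines: each "clone" runs an independent MC simulation and a "master" combines the results. The sketch below is purely illustrative and is not part of clonEasy; it uses Python's multiprocessing on a single machine instead of the Linux scripts the paper describes, the function names and pi-estimation workload are invented for the example, and it assumes the standard practice of giving each clone its own random seed so the runs are statistically independent.

```python
# Illustrative sketch only -- not clonEasy itself. Each "clone" runs an
# independent Monte Carlo estimate with a private random seed; the
# "master" distributes seeds and averages the results.
import random
from multiprocessing import Pool

def clone_run(args):
    """One clone: estimate pi from n_samples using its own seeded RNG."""
    seed, n_samples = args
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))
    return 4.0 * hits / n_samples

def master(n_clones=4, n_samples=100_000):
    """The master: hand out distinct seeds, then average the estimates."""
    tasks = [(seed, n_samples) for seed in range(n_clones)]
    with Pool(processes=n_clones) as pool:
        estimates = pool.map(clone_run, tasks)
    # Independent, equally sized runs: a plain average is unbiased, and
    # wall-clock time scales down with the number of clones.
    return sum(estimates) / n_clones

if __name__ == "__main__":
    print(round(master(), 2))
```

Because the seeds are fixed per clone, the combined result is reproducible, and the clones need not share anything beyond their seed and sample count; it is this independence that lets such runs be scattered across borrowed, widely distributed machines.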