Computer hardware development

A Java-based data acquisition system for nuclear physics
Original Research Article
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

Jam is a Java-based, user-friendly data acquisition and analysis system developed for CAMAC-based nuclear physics experiments. The system is menu-driven and has been designed to minimize the expertise needed to perform the essential tasks necessary to collect and sort data. The front-end hardware is VME based and includes an MVME167 running VxWorks, which is networked to a Sun workstation. The sorting, display, and control routines are all written in Java, and the front-end code is written in C. On a Sparc 5 workstation, with events of 10 parameters, 15 histograms, and 10 gate checks, the system can collect and sort data at event rates of up to 1 kHz. By sorting only a fraction of the events while storing all of them, it can be run at the front-end limit of 10 kHz. Java's promise of platform independence has proved realistic, and Jam has been used without modification to sort offline on multiple platforms. Jam has a modular design that allows it to be easily modified; for example, it provides an interface that lets users write their own fitting routines. This article discusses the system's design and performance, as well as some advantages and disadvantages of using Java.

Article Outline
1. Introduction
2. Design
2.1. User interface
2.2. Online data acquisition and offline sorting
2.3. Analysis tools
2.4. Writing a sort routine
3. System description
3.1. Hardware requirements
3.2. Code for data acquisition
4. Discussion
4.1. The disadvantages of using Jam
4.2. The advantages of using Jam
4.3. Summary
Acknowledgements
References

The ALICE TPC, a large 3-dimensional tracking device
with fast readout for ultra-high multiplicity events
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment

The design, construction, and commissioning of the ALICE Time-Projection Chamber (TPC) is described. It is the main device for pattern recognition, tracking, and identification of charged particles in the ALICE experiment at the CERN LHC. The TPC is cylindrical in shape with a volume close to 90 m³ and is operated in a 0.5 T solenoidal magnetic field parallel to its axis. In this paper we describe in detail the design considerations for this detector for operation in the extreme multiplicity environment of central Pb-Pb collisions at LHC energy. The implementation of the resulting requirements into hardware (field cage, read-out chambers, electronics), infrastructure (gas and cooling system, laser-calibration system), and software led to many technical innovations, which are described along with a presentation of all the major components of the detector, as currently realized. We also report on the performance achieved after completion of the first round of stand-alone calibration runs and demonstrate results close to those specified in the TPC Technical Design Report.

Article Outline
1. Introduction
2. Field cage
2.1. Vessels
2.2. Central electrode
2.3. Rods
2.3.1. Resistor rods
2.3.2. High-voltage cable rod
2.3.3. Laser rods
2.3.4. Gas rods
2.4. Strips
2.5. Skirts
2.6. Endplates
2.7. I-bars
3. Readout chambers
3.1. Design considerations
3.2. Mechanical structure
3.2.1. Wires
3.2.2. Wire planes
3.2.3. Anode-wire grid
3.2.4. Cathode-wire grid
3.2.5. Gating-wire grid
3.2.6. Cover and edge geometry
3.2.7. Pad plane, connectors and flexible cables
3.2.8. Pad plane capacitance measurements
3.2.9. Al-body
3.3. Tests with prototype chambers
3.3.1. Description of production steps
3.3.2. Quality assurance and tests
3.4. Chamber mounting and pre-commissioning
4. Front-end electronics and readout
4.1. General specifications
4.1.1. System overview
4.2. PASA
4.3. ALTRO
4.3.1. Circuit description
4.3.2. Physical implementation
4.4. Front-end card (FEC)
4.4.1. Circuit description
4.4.2. Physical implementation
4.5. RCU
4.5.1. RCU motherboard
4.5.2. DCS board
4.6. Trigger subsystem
4.7. Radiation tolerance
4.7.1. SEU
4.7.2. SEL
4.8. Testing procedure
5. Cooling and temperature stabilization system
5.1. Overview
5.2. The necessity for uniform temperatures
5.2.1. Heat load and computational fluid dynamics calculations
5.3. Principle of underpressure cooling
5.4. TPC cooling plants
5.4.1. Cooling circuits
5.5. Cooling strategy
5.6. Commissioning of the cooling system
5.6.1. Test with mock-up sectors
5.6.2. Startup procedures and operation
5.6.3. Cavitation problem
5.7. Temperature monitoring system
5.7.1. Temperature profile and homogenization
6. Gas and gas system
6.1. Gas choice
6.1.1. Implications of the gas choice
6.2. Description of the gas system
6.2.1. Configuration
6.2.2. On-detector distribution
6.2.3. Filling
6.2.4. Running
6.2.5. Back-up system
6.2.6. Analysis
7. Laser system
7.1. Requirements
7.2. System overview
7.3. Optical system
7.3.1. UV lasers
7.3.2. Laser beam transport system
7.3.3. Micromirrors and laser rods
7.4. Laser beam characteristics and alignment
7.4.1. Narrow beam characteristics
7.4.2. Narrow beam layout
7.4.3. Spatial precision and stability
7.4.4. Construction and surveys
7.4.5. Online and offline alignment
7.5. Operational aspects
7.5.1. Beam monitoring and steering
7.5.2. Trigger and synchronization
8. Infrastructure and services
8.1. Moving the TPC
8.2. Service support wheel
8.3. Low-voltage distribution
8.4. Chamber HV system
8.5. Gate pulser
8.6. Calibration pulser
9. Detector control system (DCS)
9.1. Overview
9.1.1. Hardware architecture
9.1.2. Software architecture
9.1.3. System implementation
9.1.4. Interfaces to devices
9.1.5. Interlock
9.2. Electronics control
9.2.1. Front-end monitoring
9.2.2. Front-end configuration and control
9.3. Interfaces to experiment control and offline
10. Commissioning and calibration
10.1. Calibration requirements
10.2. Commissioning
10.2.1. Commissioning phases
10.2.2. Data sets
10.3. Electronics calibration
10.3.1. Pedestal and noise determination
10.3.2. Tail-cancellation filter parameter extraction
10.4. Gain calibration
10.4.1. Krypton calibration
10.5. Drift-time calibration
10.5.1. Shaping variations in the FEE
10.5.2. Drift velocity
11. Performance
11.1. Space-point resolution
11.2. Momentum resolution
11.3. Particle identification performance
12. Conclusions
Acknowledgements
References

Past, present and future of data acquisition systems in high energy physics experiments
Original Research Article
Microprocessors and Microsystems

Data Acquisition (DAQ) systems for large high-energy physics (HEP) experiments in the eighties were designed to handle data rates of megabytes per second. The next generation of HEP experiments at CERN (European Laboratory for High Energy Physics) is being designed around the new Large Hadron Collider (LHC) project and will have to cope with gigabyte-per-second data flows. As a consequence, LHC experiments will require challenging new equipment for detector readout, event filtering, event building, and storage. The Fastbus- and VME-based tree architectures of the eighties run out of steam when applied to the LHC's requirements. New concepts and architectures from the nineties have replaced rack-mounted backplane buses with high-speed point-to-point links, abandoned centralized event building, and instead use switched networks and parallel architectures. Following these trends, and in the context of DAQ and trigger systems for LHC experiments, this paper summarizes the earlier architectures and presents the new concepts for DAQ.

Article Outline
1. Introduction
2. Instrumentation buses for HEP in the 1960s and 1970s
3. DAQ and trigger systems in the 1980s
4. New trends in the 1990s
5. Trends in DAQ systems for the 21st century
6. Conclusions
References
Vitae

A software package for the configuration of hardware devices following a generic model
Original Research Article
Computer Physics Communications

This paper describes a software package, developed in C++ under the Linux environment, that is intended for automatic hardware configuration on VME or PCI buses. Based on a generic model, users specify the configuration procedures and data in configuration files. The actual hardware configuration is performed by the software package, accessed through a simple C++ interface. The model is well suited for storage of configuration data in XML files or databases. The package is now being used in the local data acquisition system of the Electromagnetic Calorimeter of the CMS experiment at CERN.

Program summary
Title of program: Generic Configurator
Catalogue identifier: ADUK
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUK
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed and others on which it has been tested: Intel Pentium IV PC
Installations: ECAL Data Acquisition of the CMS experiment at CERN
Operating systems or monitors under which the program has been tested: Linux 2.4.2
Programming language used: C++
Memory required to execute with typical data: depends on the complexity of the module configuration; test runs require less than 500 KB
Number of bits in a word: 32
Number of processors used: 1
Distribution format: tar gzip file
Number of bytes in distributed program, including test data, etc.: 234542
Number of lines in distributed program, including test data, etc.: 17365
Nature of physical problem: generalization of the hardware device configuration procedure on VME or PCI buses.
Method of solution: the developed package uses a generic configuration model that allows users to configure VME and PCI devices. The hardware configuration parameters a