CERN, the European Organization for Nuclear Research, operates the particle accelerators needed for high-energy physics research. Currently, CERN operates a network of six accelerators. Each accelerator increases the energy of particle beams before delivering them to the various experiments or to the next, more powerful accelerator in the chain. The Large Hadron Collider (LHC) is the largest of these six accelerators. Located 100 meters underground, the LHC consists of a 27-km ring of superconducting magnets that steer the particle beams, as well as accelerating structures that boost the energy of the particles in the beams. Inside the LHC, two particle beams travelling in opposite directions at close to the speed of light are made to collide. This is comparable to firing two needles at each other from 10 kilometers apart with such precision that they meet in the middle. The machinery required to perform these kinds of experiments weighs tens of thousands of tons and must be capable of withstanding conditions similar to those that prevailed when the universe first began, such as high levels of radioactivity and extreme temperatures.

Dirk @ CERN

There are currently four major experiments underway at the LHC. One of these – the LHCb experiment – is the main focus of this case study. It is installed in a huge underground cavern built around one of the collision points of the LHC beams. The purpose of LHCb is to investigate the subtle differences between matter and antimatter. This is done by studying a particle called the "beauty quark" (hence the "b" in LHCb). Like the other experiments, LHCb is designed to examine what happens when particles travelling at close to the speed of light collide. At the point of impact, many new particles are generated. Some of these particles are very unstable and only exist for a fraction of a second before decaying into lighter particles. For more information on the physics theory behind all this, see the FAQ from CERN.

LHCb and Data Management

The LHC generates up to 600 million particle collisions per second (out of 40 million beam crossings per second, i.e. 40 MHz). A digitized summary of each collision is recorded as a "collision event." At the time of writing, CERN stores approximately 30 petabytes of data each year. This data needs to be analyzed by the physicists in order to determine whether the collisions have yielded any evidence to support their theories. The LHCb experiment's more than 1,000,000 sensors generate colossal amounts of data. It will be some time yet before it is possible to store and analyze all of the analog data produced by this vast army of sensors. According to Sverre Jarp, recently retired CTO of CERN openlab: "Since it is impossible to manage and retain all of the massive amounts of analog data created by the LHC, we've had to devise a strategy for efficiently digitizing, compressing, filtering, and distributing this data for further analysis. Our solution to this problem is the Worldwide LHC Computing Grid (WLCG). The WLCG is a massive grid of low-cost computers used for pre-processing LHC data. We then use the World Wide Web – also invented at CERN – to give more than 8,000 physicists near real-time access to the data for post-processing."
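
To get a sense of why filtering is unavoidable, the following back-of-envelope sketch uses only figures quoted in this case study, plus two simplifying assumptions that are ours alone: year-round operation and a 1 MB event size (the lower end of the range quoted later by Massimo Lamanna).

```python
# Back-of-envelope estimate of why filtering is unavoidable.
# Figures are taken from this case study; the year-round operation and
# 1 MB event size are simplifying assumptions for illustration only.

COLLISIONS_PER_SECOND = 600e6      # up to 600 million collisions/s
EVENT_SIZE_BYTES = 1e6             # ~1 MB per digitized collision event (assumed lower bound)
SECONDS_PER_YEAR = 3600 * 24 * 365
STORED_PER_YEAR_PB = 30            # ~30 PB actually stored per year at CERN

raw_rate_bytes_per_s = COLLISIONS_PER_SECOND * EVENT_SIZE_BYTES
raw_volume_pb_per_year = raw_rate_bytes_per_s * SECONDS_PER_YEAR / 1e15

print(f"Hypothetical unfiltered rate:   {raw_rate_bytes_per_s / 1e12:.0f} TB/s")
print(f"Hypothetical unfiltered volume: {raw_volume_pb_per_year:.2e} PB/year")
print(f"Required reduction factor:      {raw_volume_pb_per_year / STORED_PER_YEAR_PB:.1e}x")
```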

As Andrzej Nowak, who worked closely with Sverre Jarp during his long-standing tenure at CERN openlabs, explains: "Our challenge is to find one event in 10 trillion. Since we can't retain all of the analog data created by these 10 trillion events, we have to filter some of the data out. This upsets our physicists, because they are afraid we will throw away the one golden grain that will make it all worthwhile. So we need two things: Firstly, massive scale to ensure that we can keep as much data as possible. Secondly, intelligent filtering to ensure that we are really only throwing away the white noise, and keeping all the good data."

CERN Overview

In order to address these challenges, the LHCb team has set up a multi-tiered approach to managing the data produced by the experiment (for an overview, see the figure above; a short sketch of the resulting data "funnel" follows the list):

  • [A] Over 1,000,000 sensors are deployed inside the detector in the main LHCb cavern, right after the point of the primary particle collision. Once the collision has taken place, magnets are used to disperse the secondary particles so that the sensors can capture the now more evenly distributed particles. Given the nature of the experiment, this area is highly radioactive while experiments are in progress. The sensors themselves are organized into different groups. For example, the main tracker contains sensors that help reconstruct the trajectories and measure the momentum of charged particles. The electromagnetic and hadronic calorimeters measure the energy of electrons, photons, and hadrons. These measurements are then used at trigger level to identify particles with so-called "large transverse momentum" (see [C]).
  • [B] In order to protect the computer equipment from the high levels of radiation, the analog data from the sensors is transferred through a massive concrete wall via a glass-fiber network. The data transfer rate corresponds to 1 MHz. The first tier of this system for processing analog data comprises a grid of FPGA units (Field-Programmable Gate Arrays) tasked with running high-performance, real-time data compression algorithms on the incoming analog data.
  • [C] The compressed sensor readings are then processed by a large grid of adjacent general-purpose servers (>1,500 servers) to create a preliminary "reconstruction" of the event. This processing tier uses detailed information about the sensors' 3D positions in order to correlate the data from the different sensors. If this analysis yields a so-called "trigger", a snapshot is created. For each trigger event of this type, the event data (i.e. the snapshot of all sensor data at that point in time) is written to a file, which is then transferred via LAN to the external data center (i.e. above ground). On completion of this step, the data rate drops from 1 MHz to 5 kHz.
  • [D] The main CERN Data Center (shared with the other experiments) corresponds to Tier 0 of the system and consists of a grid of off-the-shelf computers for processing the data, as well as a farm of tape robots for permanent data storage. The input processing capacity of the data center for the LHCb is 300 MB per second. In addition to storage management, the main task of this data center is to ensure the "safe" offline reconstruction of events, which is something we will explain in more detail in the next section.
  • [E] From Tier 0/CERN, the data is distributed to twelve Tier-1 sites in different countries. It is transferred via dedicated 10-Gbit/s network connections, which enable the creation of new copies of the experimental data.
  • [F] About 200 Tier-2 sites receive selected data for further analysis and produce detector simulation data (used to optimize and calibrate the detector and the analysis). Most of these sites are universities or other scientific institutions.
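
The sketch below is a compact way of showing the data "funnel" described in the list: the stage names and rates are the ones quoted above, and the code simply prints how drastically each tier reduces the event rate before anything reaches permanent storage.

```python
# Rough model of the LHCb data "funnel": each tier reduces the event rate
# before data reaches permanent storage. Stage names and rates are the ones
# quoted in the list above; everything else is illustrative.

stages = [
    ("Beam crossings (detector)",       40_000_000.0),  # 40 MHz
    ("After readout to trigger farm",    1_000_000.0),  # 1 MHz
    ("After software trigger (Tier 0)",      5_000.0),  # 5 kHz
]

previous = None
for name, rate in stages:
    if previous is None:
        print(f"{name:<35} {rate:>12,.0f} Hz")
    else:
        print(f"{name:<35} {rate:>12,.0f} Hz  (reduction: x{previous / rate:,.0f})")
    previous = rate
```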

The multi-tiered approach chosen by CERN for the management of its data has proven very efficient. CERN only has to provide 15% of the overall capacity required to process the data from the different experiments, with no need for a supercomputer. According to Massimo Lamanna, Section Leader of the Data Storage Services (DSS) group: "The data from each collision is relatively small – on the order of 1 to 10 MB. The challenge is the enormous number of collisions, which requires the collection of tens of petabytes every year. Because each collision is completely independent, we are able to distribute the reconstruction process across multiple nodes of the CERN computer center and the WLCG grid. Surprisingly enough, the calculations we perform can be done very effectively using large farms of standard PC (x86) servers with 2 sockets, SATA drives, standard Ethernet networking, and Linux operating systems. This approach has proved to be by far the most cost-effective solution to our problem."
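
The pattern Lamanna describes is the classic "embarrassingly parallel" workload: because events are independent, they can be processed on any worker in any order. The sketch below illustrates this with Python's standard multiprocessing module; reconstruct() is a hypothetical placeholder, not CERN's actual reconstruction code.

```python
# Minimal sketch of the "embarrassingly parallel" pattern: independent
# collision events are reconstructed on any worker, in any order.
from multiprocessing import Pool

def reconstruct(event):
    # Placeholder: in reality this step turns raw sensor snapshots into
    # particle trajectories and energy deposits.
    return {"event_id": event["event_id"], "n_tracks": len(event["hits"]) // 3}

if __name__ == "__main__":
    events = [{"event_id": i, "hits": list(range(30))} for i in range(1000)]
    with Pool() as pool:
        results = pool.map(reconstruct, events)  # placement and order do not matter
    print(f"Reconstructed {len(results)} independent events")
```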

In order to increase processing capacity and ensure business continuity, CERN has recently opened a second data center in Budapest, Hungary. This data center acts as an extension of the main computer center at the Geneva site and uses the same hardware and software architecture as the original data center. The Budapest site adds roughly 45 petabytes of disk storage and brings the combined total to around 100,000 processing cores.

LHCb and Physics Data Analysis

Without going into the physics theory underlying the LHCb experiment in too much detail, we will take a brief look at data analysis from a logical perspective – as we will see later, there is a lot we can learn here, even as non-physicists. Each collision creates numerous secondary particles that need to be detected. These secondary particles move through three-dimensional space, each with its own trajectory, speed, and so on. The raw analog data delivered by the sensors is essentially a collection of points through which the particles pass. Because the amount of raw analog data is much too large to be processed by standard IT systems, a combination of FPGAs and microcontrollers works together to create "triggers", which initiate the generation of readouts, i.e. a collection of all sensor data existing at a given point in time. These readouts are transferred to the data center, where they are stored and undergo basic formatting operations in order to create snapshots of the status of the detector.
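
Conceptually, a trigger is just a condition evaluated on the current sensor readings; only when it fires is a full snapshot of the detector kept. The sketch below makes that idea concrete; the energy-sum condition and its threshold are illustrative assumptions, not the actual LHCb trigger logic.

```python
# Conceptual trigger/readout sketch: evaluate a trigger condition on the
# (already digitized and compressed) sensor data and keep a full snapshot
# only when it fires. Threshold and condition are illustrative assumptions.

def trigger_fires(sensor_values, energy_threshold=50.0):
    """Fire when the total deposited energy exceeds a threshold."""
    return sum(sensor_values.values()) > energy_threshold

def process_crossing(sensor_values, storage):
    if trigger_fires(sensor_values):
        # "Snapshot" = all sensor data existing at this point in time.
        storage.append(dict(sensor_values))

storage = []
process_crossing({"ecal_0": 12.0, "ecal_1": 48.5, "tracker_7": 3.1}, storage)
process_crossing({"ecal_0": 0.4, "ecal_1": 1.2, "tracker_7": 0.0}, storage)
print(f"{len(storage)} snapshot(s) kept")
```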

Physical Data Reconstruction at CERN

One of the main processor-intensive tasks performed at the Tier-0 data center is the reconstruction process. Individual hits (detected by individual sensors in three-dimensional space) are correlated and grouped to form trajectories, i.e. the path of a charged particle through the detector. Energy deposits from charged and neutral particles are detected via calorimeter reconstruction, which helps to determine exact energy deposit levels, location, and direction. Combined reconstruction looks at multiple particle trajectories in order to identify related electrons and photons, for example. The final result is a complete reconstruction of particle ensembles. These particle ensembles are then made available to physicists at Tier-1 and Tier-2 centers. The general goal of the physics analysis performed at Tier-1 and Tier-2 level is to identify new particles or phenomena, and to perform consistency checks on underlying theories. Scientists working at this level often start with a simulation of the phenomenon they are examining. The output of the Tier-0 reconstruction phase is then further refined and compared with predictions from the simulation. When combined, simulation and Tier-0 experiment data often comprise tens of petabytes of data.
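
To make the "hits to trajectories" step a little more tangible, here is a deliberately simplified sketch that fits a straight line through a handful of 3D hits. Real LHCb reconstruction deals with curved tracks in a magnetic field, detector alignment, and pattern recognition across millions of hits; this only shows the conceptual core of turning correlated hits into a trajectory.

```python
# Toy illustration of track reconstruction: hits assumed to belong to the
# same charged particle are fitted with a straight line. Real tracks curve
# in the magnetic field; this is only the conceptual "hits -> trajectory" step.
import numpy as np

def fit_straight_track(hits):
    """Fit x(z) and y(z) as linear functions of z using least squares."""
    hits = np.asarray(hits, dtype=float)       # shape (n_hits, 3): columns x, y, z
    z = hits[:, 2]
    design = np.vstack([z, np.ones_like(z)]).T
    (ax, bx), _, _, _ = np.linalg.lstsq(design, hits[:, 0], rcond=None)
    (ay, by), _, _, _ = np.linalg.lstsq(design, hits[:, 1], rcond=None)
    return (ax, bx), (ay, by)                  # slopes and intercepts in x and y

hits = [(1.0, 0.5, 10.0), (2.1, 1.1, 20.0), (2.9, 1.4, 30.0), (4.0, 2.0, 40.0)]
print(fit_straight_track(hits))
```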

AI at CERN

The High Energy Physics community has long been using Machine Learning methods to select relevant events against the overwhelming background produced at colliders. Today, many experiments at CERN are integrating Deep Learning into their environments, e.g. for data quality assurance, real-time selection of relevant collision events, simulation, and data analysis. This is a constantly evolving space. CERN is closely engaging with the ML and DL community, e.g. via Kaggle competitions. CERN openlab is involved in a large number of Deep Learning and AI projects.

One interesting example is a project that uses DL to improve the efficiency of event filtering at LHC experiments. The data pipeline for this project is based on Apache Spark and Analytics Zoo:

  • Data Ingestion: Read physics data, feature engineering
  • Feature Preparation: Prepare input for DL network
  • Model Development: Specify the model topology and tune it on a small data set
  • Training: Train the most suitable model

For data ingestion, events are represented as a matrix; for every particle, 19 features such as momentum, position, energy, charge, and particle type are initially recorded:

features = [ 'Energy', 'Px', 'Py', 'Pz', 'Pt', 'Eta', 'Phi', 'vtxX', 'vtxY', 'vtxZ', 'ChPFIso', 'GammaPFIso', 'NeuPFIso', 'isChHad', 'isNeuHad', 'isGamma', 'isEle', 'isMu', 'Charge' ]

The feature engineering process uses domain-specific knowledge to add 14 High Level Features (HLF) to the 19 features recorded in the experiment. A sorting metric is used to create a sequence of particles, which can then be fed to a classifier.
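
As a rough illustration of this representation, the sketch below builds a per-event particle matrix from the 19 features listed above and orders the rows with a sorting metric. The choice of Pt (transverse momentum) as that metric is an assumption made here for illustration; the project uses its own domain-specific ordering.

```python
# Sketch of the event representation: one row per particle, the 19 recorded
# features as columns, rows ordered by a sorting metric. Using Pt as the
# metric is an illustrative assumption, not the project's actual choice.
import numpy as np

features = ['Energy', 'Px', 'Py', 'Pz', 'Pt', 'Eta', 'Phi', 'vtxX', 'vtxY',
            'vtxZ', 'ChPFIso', 'GammaPFIso', 'NeuPFIso', 'isChHad',
            'isNeuHad', 'isGamma', 'isEle', 'isMu', 'Charge']

def build_event_matrix(particles, sort_feature='Pt'):
    """particles: list of dicts mapping feature name -> value."""
    matrix = np.array([[p[f] for f in features] for p in particles])
    order = np.argsort(-matrix[:, features.index(sort_feature)])  # descending
    return matrix[order]                     # shape: (n_particles, 19)
```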

The feature preparation process converts all features into a format that can be consumed by the network. This is done in PySpark using Spark SQL and Spark ML.
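
A minimal PySpark sketch of this idea is shown below: the features are assembled into a single vector column and scaled before being handed to the network. The column names and the choice of MinMaxScaler are assumptions for illustration, not the project's exact pipeline.

```python
# Minimal PySpark feature-preparation sketch: assemble feature columns into
# a vector and scale them. Column names and MinMaxScaler are illustrative
# assumptions, not the project's actual pipeline.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, MinMaxScaler

spark = SparkSession.builder.appName("feature-preparation").getOrCreate()

# Hypothetical DataFrame with a few per-event features and a label.
df = spark.createDataFrame(
    [(25.3, 1.2, 0.7, 1), (8.1, 0.3, 2.4, 0)],
    ["missing_et", "lepton_pt", "lepton_eta", "label"],
)

assembler = VectorAssembler(
    inputCols=["missing_et", "lepton_pt", "lepton_eta"], outputCol="features_raw")
scaler = MinMaxScaler(inputCol="features_raw", outputCol="features")

prepared = Pipeline(stages=[assembler, scaler]).fit(df).transform(df)
prepared.select("features", "label").show(truncate=False)
```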

A number of different models were investigated, including a fully connected feed-forward DNN as well as a DNN with a recurrent layer. Hyper-parameter tuning is then performed for the chosen network topology. Finally, the model is instantiated via the APIs provided by Analytics Zoo.
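
For orientation, here is a Keras-style sketch of a small fully connected classifier over the 14 High Level Features. The layer sizes and the three-class softmax output are illustrative assumptions; the project itself builds its models through Analytics Zoo's Keras-compatible API, which follows the same structure.

```python
# Keras-style sketch of a fully connected feed-forward classifier over the
# 14 High Level Features. Layer sizes and the three output classes are
# illustrative assumptions, not the project's exact topology.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(14,)),                      # 14 High Level Features
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(20, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # e.g. three event topologies
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```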

This is only one example of the many different AI-related projects at CERN. Follow the CERN Courier for more.

LHCb's Asset Integration Architecture

The figure below provides an overview of the Asset Integration Architecture (AIA) of the LHCb experiment. For the purposes of our case study, the primary asset is the detector; we will not be looking at the collider itself. The detector is supported by three main systems:

  • Data Acquisition System (DAQ): The DAQ is a central part of the detector and is responsible for reducing the massive amounts of analog data to a manageable digital volume. Its devices are mainly radiation-hard sensors and chips. The Local Asset Control for the DAQ consists of a farm of FPGAs and microcontrollers that compress and filter the sensor readings, as we've seen above.
  • Detector Control System (DCS): The DCS controls and monitors the operation of the LHCb detector. It is mainly based on standard industry components; for example, a standard SCADA system is used for the DCS.
  • Detector Safety System and Infrastructure (DSS): The DSS runs critical infrastructure such as cooling water and gas supplies, using standard PLCs to manage these mainly industrial components.
CERN Asset Integration Architecture

With the exception of the DAQ, the LHCb is constructed like a normal industrial machine (admittedly a very big and complex one). Both the DCS and DSS use standard industry components. However, the device layer can only use radiation-hard components, while the DAQ makes the LHCb one of the best existing examples of "Big Data" at work.

Lessons Learned and Outlook

This case study provides quite a few lessons that we believe can be applied to many IoT and Big Data projects in other domains.

  • Use of multi-tiered architectures for big data capture and processing: The entire data processing cycle is based on multiple, highly specialized tiers that together act like a big funnel, with each tier performing specialized data filtering and enrichment functions. Tier 0 performs the basic data reconstruction and distribution tasks, while Tier 1 and Tier 2 focus on different types of physics analysis. CERN has succeeded in outsourcing 85% of its data processing resources to Tiers 1 and 2. And even between the detector itself and the Tier-0 data center, the data passes through multiple layers and steps, from high-performance compression, to initial filtering, through to reformatting and reconstruction. Each tier and layer performs an optimized function.
  • Use of specialized hardware for high-performance analog data processing: The LHCb uses a grid of high-performance FPGAs (Field-Programmable Gate Arrays) for the real-time processing of analog data and its conversion into compressed digital data. The benefit of this FPGA grid is that it can be re-programmed at the software level using a specialized Hardware Description Language (HDL). In many cases, FPGAs are used as the "first frontier" in the real-time processing of analog data, before the data is passed on to a "second frontier" of more flexible, but less strictly real-time, microcontrollers. The "hard" real-time processing that occurs at this level is also a prerequisite for the use of time as a dimension in the correlation of data, which is something we will cover in the next point.
  • Correlation of sensor data: Correlating data from different sensors is a key prerequisite for making sense of the "big picture." At CERN LHCb, this is done using the 3D positions of all sensors in combination with timing information. Conceptually, tracking the movements of particles through a three-dimensional detector is not so different from tracking the movements of customers through a shopping center, for example (except for scale and speed, of course).
  • Leverage grid computing: Because data structures and analytics patterns allow the division of processing tasks into different chunks, it is possible to use grids of inexpensive, standard hardware instead of more expensive, high-performance or even super computers.

And finally, another important lesson relates to the need to bring advanced filter logic as close to the data source as possible. As Niko Neufeld, LHCb Data Acquisition specialist, explains: "The goal should be to enable the trigger system to perform a full data reconstruction close to the sensors. This should help us to dramatically improve the way we decide which data is to be kept as close as possible to the data source, thus minimizing the chances of critical data slipping through the net as a result of the initial data filtering step in the DAQ."

This Case Study was researched and written by Dirk Slama and first published in Enterprise IoT; O'Reilly Media 2015; Dirk Slama, Frank Puhlmann, Jim Morrish, Rishi M. Bhatnagar. License: CC BY 3.0. It was edited in 2020 by Dirk Slama to better highlight the AI-related aspects.