Community

ATLAS

The ATLAS Experiment is one of the four major experiments at the Large Hadron Collider (LHC). Approaching one Exabyte of data on disk and tape in 2024, ATLAS has always been one of the largest scientific data producers in the world. A well-integrated, resilient, and efficient data management solution is crucial to overall experiment performance, and by the end of LHC Run-1 in 2012 ATLAS was in need of a better one. ATLAS therefore invested heavily in the design and development of a new system, called Rucio, to ensure scalability for future LHC upgrades, allow expressive policies for our data flow needs, connect with our distributed computing infrastructure, automate operational tasks as much as possible, integrate new technologies from the storage and network areas, and combine all of this within a modular and extensible architecture.

When Rucio was first put into production in 2014, the improvements it brought to ATLAS data management were substantial and attracted a lot of interest from the wider science community. By now, Rucio manages all ATLAS data, both centrally produced and user-generated, totalling over a billion files distributed across more than 120 scientific data centers. Rucio orchestrates daily transfers of tens of Petabytes, ensures optimal usage of our network infrastructure, and provides seamless integration between scientific storage, supercomputers, and commercial clouds, as well as various interfaces that make the daily life of scientists more comfortable.

Rucio has also matured into an open-source community project, and we are extremely happy about its continued success. The idea of a common data management system, in use by many communities with similar needs, is a guiding principle for us, and we will continue to invest in our shared future.

CMS

The CMS Collaboration brings together members of the particle physics community from across the globe in a quest to advance humanity’s knowledge of the very basic laws of our Universe. CMS has over 4000 particle physicists, engineers, computer scientists, technicians, and students from around 240 institutes and universities in more than 50 countries.

The collaboration operates and collects data from the Compact Muon Solenoid, one of two general-purpose particle detectors at CERN’s Large Hadron Collider. Data collected by CMS are distributed to CMS institutions in over forty countries for physics analysis.

In 2018, CMS embarked on a process to select a new data management solution. The previous solution was over a decade old, difficult to maintain, and would not easily adapt to the data rates and transfer technologies of the HL-LHC era. As a result of this selection process, CMS decided to adopt Rucio, which at the time was used by one major experiment and a couple of smaller ones.

This choice has been a good one for CMS, allowing the collaboration to retire the data management service it previously operated at each of more than 50 data sites, to scale easily to new data transfer rates, and to adopt new transfer technologies as needed. CMS aggregate data rates managed by Rucio regularly top 40 GB/s and have been shown to reach 100 GB/s.

Get in touch

We are always happy to chat. You can drop us an email and we will reply as quickly as possible.