In High Energy Physics (HEP) experiments, such as those conducted at CERN, there is a constant demand for ever more powerful data acquisition (DAQ) systems. These systems typically comprise a network of computers, high-end FPGAs, and, in certain instances, even GPUs.
Despite their substantial capabilities, these multi-million-euro facilities often lie dormant when experiments are not running, for example during technical stops.
We propose a framework designed to reclaim these idle clusters for machine-learning workloads, such as Monte Carlo generation, and for further data analysis.
This approach aims to maximize the utilization of the resources within these facilities. We will present the results of running an example model, used to parameterize the LHCb tracking resolution, on a DAQ FPGA, and outline our plans for moving towards heterogeneous computing.
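As an illustration only (the abstract does not specify the toolchain), deploying a small neural network onto a DAQ FPGA might follow a workflow like the hls4ml sketch below. The model architecture, input features, and FPGA part are placeholders, not the actual LHCb setup.

```python
# Hypothetical sketch: compiling a small regression model for FPGA inference
# with hls4ml. All names, sizes, and the FPGA part are illustrative
# assumptions, not the actual LHCb tracking-resolution model.
import numpy as np
from tensorflow import keras
import hls4ml

# Toy model standing in for a tracking-resolution parameterization:
# maps track kinematics (e.g. p, pT, eta) to a predicted resolution.
model = keras.Sequential([
    keras.layers.Input(shape=(3,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

# Convert the Keras model to an HLS project targeting an FPGA part.
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="hls_resolution_model",
    part="xcvu9p-flga2104-2-e",  # placeholder Xilinx part
)

# Bit-accurate C simulation on the host before any synthesis.
hls_model.compile()
x = np.random.rand(8, 3).astype(np.float32)
print(hls_model.predict(x))
# hls_model.build(csim=False)  # full synthesis; requires Vivado/Vitis HLS
```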