SystemC simulation of the future SAMPA ASIC for use in the ALICE Experiment in Run 3
The ALICE experiment at CERN is upgrading most of its equipment. One of its sub-detectors, the TPC, requires new readout electronics because of an increase in the volume of data gathered from the detector. This means that new custom electronic chips need to be developed. Producing many different prototypes to test different specifications for the chips can be costly, which motivates finding a different and cheaper way to test the electronics. One way to achieve this is to create a computer model of the electronic system and run simulations on it. This thesis evaluates the possibility of creating a model that is accurate enough to give realistic results, and by extension of testing different parts of the electronic system.

The SAMPA ASIC is one of the new chips being developed for the readout electronics. This chip is the focus of this thesis; the goal is to identify the necessary size of its FIFO buffers. The SAMPA will receive a huge amount of data from the TPC detector, which means that it needs a compression scheme in order to deal with it. This thesis compares the performance of, and gives some insight into, two different compression schemes: Zero Suppression and Huffman encoding.

There are many tools for creating a computer simulation, one of which is the SystemC framework. SystemC is a C++ library that can be used to design a computer model and run simulations on it. Using this framework, a model of the readout electronics was created, and different types of data were passed through it. The results gathered from the simulations were studied and evaluated, determining their validity and discussing their impact on the development of the readout electronics.

The thesis shows that it is possible to create an accurate representation of the electronic system that gives realistic results. The results found regarding the size of the SAMPA chip's FIFO buffers indicate that the initial numbers would be too small and should be increased.
Regarding the comparison between the two compression schemes, it was found that their results depend strongly on the amount and shape of the input data. Huffman encoding works better than Zero Suppression at higher data volumes, but relies more on the shape of the data, making it more unpredictable.
Master's thesis in Software Engineering