Link to Curriculum Vitae: (PDF)
Over more than 15 years in scientific computing, I have gained extensive managerial experience across large organizational structures, including line management in the Computational Science and AI Directorate (CSAID) at Fermilab, international collaborations such as CMS, and major U.S. operations programs.
Since August 2024, I have served as the Deputy Associate Lab Director (interim) for CSAID at Fermilab. In addition, since November 2024, I have held the roles of Acting Deputy Division Director for the Scientific Computing Systems and Services Division and Department Head for the Facility Evolution Department.
From March 2019 to December 2024, I served as the U.S. CMS Software and Computing Operations Program Manager. This program, jointly funded by the National Science Foundation (NSF) and the Department of Energy (DOE), provides the software and computing infrastructure that enables U.S. CMS researchers to maintain leadership in CMS physics. Under my leadership, the program’s annual budget grew from $15M in 2019 to nearly $20M in 2024, supporting more than 60 FTEs of scientists and technical staff, as well as computing hardware, across Fermilab and seven U.S. university sites.
I am actively engaged in recruiting and sustaining a vibrant, high-performing workforce. I support this goal by building career pipelines through internship programs and by mentoring junior technical staff, early-career scientists, and postdoctoral researchers.
08/2024 - present | Deputy Associate Lab Director for CSAID (interim)
11/2024 - present | Acting Deputy Division Director SCSS
11/2024 - present | Facility Evolution Department Head
09/2022 - 07/2024 | Department Head, Computing Resource Evolution Strategy Department
09/2022 - 07/2024 | CMS Software & Computing Coordinator in the Scientific Computing Division
10/2019 - 08/2022 | Associate Head of the Scientific Computing Division for CMS
10/2016 - 09/2019 | Deputy Head of the Scientific Services Quadrant
10/2014 - 09/2016 | Assistant Head of the Scientific Computing Division for Science Operations and Workflows
10/2014 - 12/2014 | Interim Department Head of the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant
10/2013 - 09/2014 | Deputy Department Head of the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant
09/2012 - 09/2014 | CMS Operations Group Leader in the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant
03/2019 - 12/2024 | U.S. CMS Software and Computing Operations Program Manager
10/2016 - 02/2019 | U.S. CMS Software and Computing Operations Deputy Program Manager
10/2016 - 02/2019 | L2 Manager for Software and Support in the U.S. CMS Software and Computing Operations Program Execution Team
03/2014 - 09/2016 | L2 Manager for Operations in the Software and Computing Operations Program Execution Team
09/2015 - 09/2019 | CMS Offline & Computing Project Focus Area Lead for Infrastructure and Resources
10/2014 - 08/2015 | Member of the CMS Offline & Computing Project Management Board
01/2012 - 09/2014 | CMS Offline & Computing Project Computing Operations L2 Manager
07/2009 - 12/2011 | CMS Offline & Computing Project Data Operations L2 Manager
01/2007 - 07/2009 | CMS Offline & Computing Project Release Validation Manager
Since August 2024, I have served as the interim Deputy Associate Lab Director for the Computational Science and AI Directorate (CSAID) at Fermilab. I lead strategic and tactical initiatives to support Fermilab’s scientific program with software and computing solutions. Since November 2024, I have also held the roles of Acting Deputy Division Director for the Scientific Computing Systems and Services Division and Department Head for the Facility Evolution Department. This department is responsible for maintaining and advancing the Computing Resources Evolution STrategy (CREST) process, which outlines a ten-year strategy for the evolution of Fermilab’s computing infrastructure based on current and future experiment needs, particularly those of DUNE and HL-LHC.
From September 2022 to July 2024, during the formation of CSAID from the former Scientific Computing Division, I served as the CMS Software & Computing Coordinator within the Directorate, based on my role as U.S. CMS Software and Computing Operations Program Manager. I acted as the primary contact for all CMS software and computing efforts in the Division and contributed to budget planning, personnel management, and major strategic decisions. Concurrently, I led the Computing Resource Evolution Strategy Department, developing a comprehensive strategy for evolving Fermilab’s computing to meet current and anticipated experiment requirements.
From October 2019 to August 2022, I served in a similar role under the title Associate Head of the Scientific Computing Division for CMS.
Between October 2016 and September 2019, I was Deputy Head of the Scientific Services Quadrant, the computing infrastructure arm of the Scientific Computing Division. In this role, I oversaw development of infrastructure software components for data and workload management across Fermilab’s scientific program—including neutrino, muon, and astro-particle experiments as well as CMS.
From September 2014 to September 2016, I served as Assistant Division Head for Science Operations and Workflows within the Scientific Computing Division. I was responsible for delivering scientific computing services to all Fermilab experiments, including High Energy Physics experiments (e.g., CMS), neutrino physics (e.g., NOvA, MINERvA), intensity frontier (e.g., Mu2e, Muon g−2), and astro-particle physics (e.g., DES). As a senior management team member, I contributed to strategic planning to evolve infrastructure and operational practices. Notably, I developed a new storage strategy that streamlined operation and access to more than 30 PB of disk space dedicated to all Fermilab experiments except CMS. I was also responsible for maintaining the laboratory’s computing strategy as part of the broader Laboratory Strategy Effort and regularly reported to Fermilab leadership.
From March 2019 to December 2024, I served as the U.S. CMS Software and Computing Operations Program Manager. The mission of this program is to provide software and computing infrastructure that enables U.S. CMS researchers to maintain leadership in physics within CMS, and to supply computing resources to the Worldwide LHC Computing Grid (WLCG) as the U.S. contribution to CMS computing. The program is jointly funded by the U.S. Department of Energy (DOE) and the National Science Foundation (NSF).
Under my leadership, the program’s annual budget increased from $15M in 2019 to nearly $20M in 2024. I led a team of more than 60 FTEs, comprising scientists and technical staff from 20 U.S. institutions and national laboratories. I was involved in hiring to staff new initiatives and support succession planning, directly at Fermilab and indirectly through university principal investigators. I provided technical direction and oversaw the funding and operations of the U.S. CMS regional facility at Fermilab and seven university facilities (Caltech, University of Florida, MIT, University of Nebraska–Lincoln, Purdue University, UC San Diego, and University of Wisconsin–Madison).
The program supports personnel responsible for site administration, software and infrastructure maintenance, and strategic R&D projects. These include the CMS software framework (CMSSW), workflow management, distributed alignment and calibration access software, and contributions to community projects such as ROOT and Rucio.
In 2019, I began integrating formal project management into the program, introducing a regularly updated risk assessment process modeled after Fermilab’s operational risk registry. This enabled timely and accurate communication with program management and funding agencies via reports, regular calls, biannual in-person meetings, and biennial external reviews.
From 2015 to 2019, the CMS collaboration appointed me Focus Area Lead for Services and Infrastructure within the CMS Software and Computing project. I coordinated global efforts on submission infrastructure, explored innovative use of commercial clouds and supercomputers, and managed the development of core infrastructure services such as data and workflow management systems.
In 2009, I was appointed Lead of the CMS Data Operations Project, a role that leveraged my expertise in physics analysis and computing. I was responsible for the timely delivery of all data and Monte Carlo samples used in CMS analyses—a critical factor in the experiment’s success.
In 2012, CMS expanded my responsibilities by appointing me Lead of the CMS Computing Operations Project. This role included oversight of more than 70 computing centers worldwide and all central computing services for CMS. I led over 60 scientists and engineers, ensuring readiness of computing facilities, monitoring central workflows and analysis, and managing global data transfers.
Following the 2012 discovery of the Higgs boson, the CMS collaboration awarded me the CMS Young Researcher Prize in recognition of my significant contributions to the discovery through computing. This prize is awarded annually to ten members of the collaboration.
My approach to building and sustaining high-performing teams of professionals and scientists centers on cultivating a respectful, inclusive, and transparent workplace culture, underpinned by clear guidance and expectation management. In large organizations, structured goal-setting and performance review processes are essential for fostering motivation and alignment.
I am deeply committed to mentoring the next generation of professionals and scientists. Recognizing that recruitment starts well before the formal hiring process, I established the U.S. CMS Software & Computing Operations Program Internship Program during the early LHC years. This initiative invited junior physicists and computing engineers to spend 1–2 years at Fermilab, contributing directly to CMS’s global computing infrastructure. Many alumni of the program have continued their careers in academia and industry, including hires I made at Fermilab and other institutions; others have gone on to positions at the Pittsburgh Supercomputing Center and Google, or to graduate studies at institutions such as Caltech, ETH Zurich, and the University of Cambridge.
More recently, I have collaborated with my teams to provide mentorship and project opportunities for students in the U.S. CMS Summer Undergraduate Research Internship Program (PURSUE), helping to develop the future workforce for scientific software and computing.
Leveraging my extensive expertise in scientific software and computing infrastructure, I contribute to global community efforts to shape the future of software and computing for science, with a particular focus on the High-Luminosity LHC (HL-LHC). My leadership centers on providing the strategic vision needed to enable groundbreaking scientific discoveries.
For HL-LHC, I co-authored a strategic plan for the U.S. CMS Software & Computing Operations Program (arXiv:2312.00772) that outlines four grand challenges that must be addressed.
To implement this plan, I established a Research Initiative within the U.S. CMS Software & Computing Operations Program. This initiative provides partial funding for postdoctoral researchers to explore novel and forward-looking solutions to these four grand challenges. The strategic plan also builds on prior contributions to broader community planning exercises, such as the Roadmap for HEP Software and Computing R&D for the 2020s.
In 2024, I was appointed co-lead of the CMS Collaboration Board Sub-Group for Offline & Computing for HL-LHC. I proposed the creation of this sub-group to introduce structured coordination and effort planning within the Offline & Computing area—an innovation for CMS, where such formal coordination had previously only existed in detector projects through institutional boards.
At Fermilab, I created the Computing Resources Evolution STrategy (CREST) process, which defines a ten-year strategic plan for evolving the lab’s computing resources. This plan considers both current experimental needs and anticipated demands from DUNE and HL-LHC.
As part of the Department of Energy’s Center for Computational Excellence (HEP-CCE), I co-authored a proposal in 2020 to lead four sub-projects addressing key challenges in adapting HEP computing for heterogeneous architectures:
I was appointed technical lead of the Portable Parallelization Strategies (PPS) sub-project, which has since completed its funding cycle.
I am nationally and internationally recognized for my leadership. I was selected to co-lead the Computational Frontier for the Snowmass 2021 process, the U.S. particle physics community planning exercise. I also serve on the editorial boards of the journal “Computing and Software for Big Science” and the European Physical Journal (EPJ C).
Starting in 2029, the HL-LHC will generate many times the data volume of current LHC runs, and the collisions and corresponding simulations will be significantly more complex. I have played an integral role in the community planning process, with my contributions documented, for example, in the Roadmap for HEP Software and Computing R&D for the 2020s. I also co-edited the report of the HEP Software Foundation Community White Paper Working Group on Data Analysis and Interpretation. In recognition of my expertise, I was asked in 2020 to co-lead the Computational Frontier of the Snowmass 2021 process. When Snowmass was delayed by a year, I stepped down in 2022 to focus on higher-priority commitments, overseeing the process until a successor was appointed.
In 2023, the CMS collaboration initiated the creation of a Conceptual Design Report (CDR) for Offline & Computing, documenting the current state of HL-LHC software and computing preparations. The CDR is scheduled for publication by the end of 2025, and I was invited to co-lead its core chapter on the computing model, which includes resource-need projections and guidance on the processes and workflows governing HL-LHC software and computing. The capacity planning and computing model will integrate traditional and emerging computing infrastructures into a seamless, globally integrated system.
In February 2024, the CMS Collaboration Board appointed me co-lead of the Sub-Group for Offline & Computing for HL-LHC. This sub-group is a CMS first, providing a dedicated forum to discuss and coordinate effort needs within the Offline & Computing Coordination area. Unlike detector projects, which have a projectized structure with defined institute contributions for construction and operations, Offline & Computing lacks such a structure apart from hardware contributions through WLCG. This sub-group introduces structured coordination and planning into CMS Offline & Computing efforts.
To guide HL-LHC R&D, I co-authored a strategic plan for the U.S. CMS Software & Computing Operations Program outlining four grand challenges. The plan is updated annually by the operations program management team and is documented in the proceedings of CHEP 2023 (arXiv:2312.00772).
Under my leadership, the U.S. CMS Software & Computing Operations program established a Research Initiative to partially fund postdoctoral researchers exploring innovative solutions to the four grand challenges. This initiative has been highly successful in engaging new collaboration members and broadening the scope of potential HL-LHC R&D solutions.
In March 2023, I created Fermilab’s Computing Resources Evolution STrategy (CREST) process. Its goal is to document a strategic plan for the evolution of Fermilab’s computing resources that addresses current experiment requirements and anticipates the future needs of DUNE and HL-LHC. CREST enables staff in the Scientific Computing Systems and Services Division (SCSS) within CSAID to collaboratively develop and own a ten-year computing resource strategy. I led the development of the first version of this plan, which underwent broad discussion within Fermilab’s scientific community and is intended to be updated annually to reflect changes in scientific priorities and the technological landscape.
In January 2020, the DOE Center for Computational Excellence (HEP-CCE) was funded as a three-year project to enable HEP experiments such as ATLAS, CMS, DUNE, LSST, and DESI to efficiently utilize HPC facilities at leadership-class centers, including Argonne, Oak Ridge, and NERSC at Lawrence Berkeley National Laboratory. I co-authored the proposal, which comprises four sub-projects aimed at adapting HEP software and workflows to the heterogeneous architectures of these facilities.
I served as technical lead of the PPS sub-project and as point of contact for CMS. Alongside my postdoctoral researcher Martin Kwok and staff from DOE labs including BNL, LBNL, and Fermilab, I helped investigate portability solutions for HEP software, enabling CMS to make informed decisions on portability approaches for LHC Run 3.
I am nationally and internationally recognized for my leadership in software and computing through membership in journal editorial boards, conference committees, and mentoring programs, as well as through service on a number of committees.
I possess deep expertise in planning, developing, maintaining, and operating distributed computing infrastructures that provide access to several hundred thousand computing cores and hundreds of petabytes of disk storage. I am proficient in efficiently storing and retrieving data from permanent tape storage. I am intimately familiar with both High-Throughput Computing (HTC) and High-Performance Computing (HPC), including scientific grid sites, academic and commercial clouds, as well as the largest supercomputers at HPC centers in the U.S. and worldwide. This infrastructure supports scientific software composed of millions of lines of C++ and Python code, essential for extracting physics results. I am an expert in object-oriented software development, statistical data analysis methods, Monte Carlo simulation techniques, and various optimization and machine learning approaches.
The technical components of my work are tightly linked to scientific research, enabling the analysis of particle physics detector data and simulations as the foundation for producing physics results. My active engagement in High Energy Physics (HEP) research allows me to guide the scientific community in leveraging the latest computing innovations, effectively bridging the domains of science and scientific computing.
My current R&D projects include:
Distributed data-intensive computing depends on high-speed wide-area network connectivity to move data efficiently for processing workflows or end-user analysis. I actively invest in networking R&D, participating in the DOE ASCR ESnet requirements review and leading efforts with my team on key networking topics. Of particular interest are dynamically managed network paths. The ESnet SENSE/Rucio project focuses on enabling the data management system Rucio to dynamically create network paths between storage endpoints, thereby guaranteeing quality of service, improving predictability, and reducing contention in organized data flows.
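The control flow behind this integration is conceptually simple, as the minimal Python sketch below illustrates. All names in the sketch are hypothetical placeholders rather than the actual Rucio or SENSE interfaces; it only captures the idea of reserving a guaranteed-bandwidth path between two storage endpoints before an organized bulk transfer and releasing it afterwards.

```python
# Schematic illustration only: the functions below are hypothetical
# placeholders, not the real Rucio or SENSE APIs.
from dataclasses import dataclass


@dataclass
class PathReservation:
    src: str          # source storage endpoint
    dst: str          # destination storage endpoint
    gbps: float       # requested bandwidth guarantee
    active: bool = False


def request_path(src: str, dst: str, gbps: float) -> PathReservation:
    """Stand-in for asking the network orchestrator for a dedicated,
    quality-of-service-guaranteed path between two storage endpoints."""
    return PathReservation(src, dst, gbps, active=True)


def release_path(reservation: PathReservation) -> None:
    """Stand-in for tearing down the reserved path after the transfer."""
    reservation.active = False


def transfer_with_managed_path(files, src, dst, gbps=100.0):
    """Reserve a path, run the organized data flow over it, release it."""
    reservation = request_path(src, dst, gbps)
    try:
        for f in files:
            # The data management system would move each file over the
            # reserved path here; omitted in this sketch.
            pass
    finally:
        release_path(reservation)


transfer_with_managed_path(["toy_dataset_block"], "site-A-storage", "site-B-storage")
```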
In collaboration with ESnet researchers, I am working to characterize network flows at site borders by analyzing packet headers. We employ advanced AI techniques to distinguish between streaming, file transfers, and other flow types. This capability will enable deployment of managed network paths via SENSE without requiring instrumentation at the application layer.
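As a purely illustrative sketch of this kind of flow classification (not the production pipeline), the example below assumes that packet headers have already been aggregated into simple per-flow features, such as durations, byte counts, and packet-size statistics, and trains a standard classifier to separate flow types such as streaming and bulk file transfers.

```python
# Illustrative only: toy features standing in for per-flow statistics
# aggregated from packet headers; the actual project uses richer features
# and models developed with ESnet.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data set: 3 hypothetical flow classes (e.g. streaming, bulk file
# transfer, other) described by 6 numeric per-flow features.
X, y = make_classification(
    n_samples=3000, n_features=6, n_informative=4,
    n_classes=3, random_state=0,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("toy hold-out accuracy:", clf.score(X_test, y_test))
```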
U.S. High Performance Computing centers, especially DOE leadership-class facilities at Argonne, Oak Ridge, and NERSC, offer exceptional computational capabilities that open new frontiers in scientific research. These centers are optimized for computationally intensive workflows, prioritizing performance per watt. Many sites now use GPUs to maximize performance while minimizing power consumption. For High Energy Physics (HEP), transitioning experimental software and infrastructure to leverage GPUs is a pivotal challenge, arguably as significant as the historic shift from Fortran to object-oriented C++. I consider this transition essential for future scientific success.
I actively contribute to both software and infrastructure efforts supporting this transition. The HEP-CCE project, which I conceptualized, advances portability libraries enabling developers to write algorithms once and compile or execute them seamlessly across diverse CPU and GPU architectures, thereby reducing software development overhead. Under my guidance, the U.S. CMS Software & Computing Operations Program supports R&D on GPU-based tracking software and, through its R&D initiative, on many additional algorithms undergoing GPU adaptation. Through the Fermilab HEPCloud project, I am facilitating access to numerous HPC centers for CMS and the Fermilab community, contingent on allocation approvals by the respective experiments. Combined with appropriate software, this approach aims to enable the seamless integration of HPC resources into future experiment workflows.
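The portability idea itself can be illustrated compactly. The sketch below is a conceptual Python analogue (NumPy with an optional CuPy backend), not HEP-CCE code, which targets C++ portability layers such as those evaluated in the PPS sub-project: the same kernel is written once and runs on either CPU or GPU arrays.

```python
# Conceptual analogue of "write once, run on CPU or GPU": NumPy and CuPy
# expose the same array API, so the same kernel runs on either backend.
import numpy as np

try:
    import cupy as cp  # optional GPU backend, used only if installed
    xp = cp
except ImportError:
    xp = np  # fall back to CPU-only NumPy


def transverse_momentum(px, py):
    """Backend-agnostic kernel: identical code for NumPy and CuPy arrays."""
    return xp.sqrt(px * px + py * py)


# Toy input standing in for per-particle momentum components.
px = xp.asarray(np.random.normal(size=1_000_000))
py = xp.asarray(np.random.normal(size=1_000_000))

pt = transverse_momentum(px, py)
print("mean toy pT:", float(pt.mean()))
```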
Another focus of my research in computing infrastructure is exploring whether High Energy Physics analysis can be conducted more efficiently using tools developed and widely adopted in industry. Rather than relying exclusively on the ROOT toolkit, which was developed entirely within the HEP community, I am investigating industry-standard platforms like Apache Spark and similar technologies. To this end, I initiated a collaborative research group including scientists from Fermilab, CERN, and the universities of Princeton, Padova, and Vanderbilt. The CMS Big Data Project worked closely with industry partners, most recently concluding a project with Intel in January 2019 under the auspices of CERN openlab. To enable this collaboration, Fermilab joined CERN openlab, and I coordinated the DOE approval process with support from the Fermilab Office of Partnership and Technology Transfer and Legal Office.
I also managed a Laboratory Directed Research and Development (LDRD) project to develop innovative Big Data delivery technology for array-based analysis code: the Striped Data Server for Scalable Parallel Data Analysis. Completed successfully in January 2019, this project produced a working prototype. These efforts have inspired numerous researchers and software experts, spawning projects such as the columnar data tools user front-end COFFEA and analysis support funded through IRIS-HEP.
These developments culminated in the analysis facility concept, built around the Python-based analysis ecosystem and graph scheduling tools like Dask, which is now central to the U.S. CMS Software & Computing Operations Program’s strategy for the HL-LHC.
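To illustrate the columnar analysis style these tools enable, the minimal sketch below uses toy NumPy arrays (real analyses use COFFEA and awkward-array on experiment data and scale out with schedulers such as Dask): event selections become boolean masks over whole columns rather than per-event loops, which is what lets the approach map naturally onto vectorized and distributed back ends.

```python
# Minimal columnar-analysis sketch on toy NumPy arrays; real analyses use
# COFFEA/awkward-array on experiment data and scale out with Dask.
import numpy as np

rng = np.random.default_rng(7)
n_events = 1_000_000

# Toy per-event columns instead of an explicit event loop (values made up).
muon_pt = rng.exponential(scale=30.0, size=n_events)   # GeV
muon_eta = rng.normal(scale=1.5, size=n_events)
met = rng.exponential(scale=40.0, size=n_events)        # GeV

# Selections are boolean masks over whole columns.
selection = (muon_pt > 25.0) & (np.abs(muon_eta) < 2.4) & (met > 50.0)

# One vectorized pass produces the final histogram.
counts, edges = np.histogram(met[selection], bins=50, range=(0.0, 500.0))
print("selected events:", int(selection.sum()))
```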
I am not alone in advancing these technologies; junior scientists in particular are drawn to them, as they provide transferable skills valuable in both academia and industry.
I am a particle physicist at heart, originally motivated by conducting leading-edge research into New Physics Beyond the Standard Model, as well as precision measurements within the Standard Model.
I have many years of experience analyzing high-energy collisions from various particle colliders using a wide range of techniques. I have published numerous papers in leading journals and am currently a member of the CMS collaboration operating one of the four detectors at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The CMS collaboration is a global endeavor, comprising more than 3000 physicists from over 50 countries, including over 1000 students.
In my past LHC research, I have led searches for evidence of physics beyond the Standard Model using top quarks and contributed to investigations of Supersymmetry and Dark Matter. Among my most notable publications is the Observation of the Higgs Boson in 2012, where my work in scientific computing played a significant role.
2001 - 2005 | University of Hamburg, Doctor of Natural Sciences, Hamburg, Germany
Thesis title | Measurement of beauty quark cross sections in photoproduction with the ZEUS experiment at the electron proton collider HERA (thesis)
Advisors | Prof. Dr. Robert Klanner, Dr. Achim Geiser

1996 - 2001 | University of Hamburg, Diploma in Physics, Hamburg, Germany
Thesis title | Development of the trigger algorithm for the MONOLITH experiment (thesis)
Advisors | Prof. Dr. Robert Klanner, Dr. Achim Geiser
09/2019 - present | Senior Scientist | Fermi National Accelerator Laboratory (Fermilab)
09/2014 - 08/2019 | Scientist | Fermi National Accelerator Laboratory (Fermilab)
06/2009 - 08/2014 | Application Physicist I | Fermi National Accelerator Laboratory (Fermilab)
06/2005 - 05/2009 | Research Associate | Fermi National Accelerator Laboratory (Fermilab)
09/2001 - 02/2005 | Doctoral Candidate | Deutsches Elektronen-Synchrotron (DESY)
03/2011 - 02/2016 | Jacob Linacre - Fermilab Postdoc in CMS
- Searches for SM and BSM physics with top quarks
- 01/2014 - 12/2015: CMS Physics top quark properties subgroup convener
- 03/2011 - 12/2013: CMS Tier-1 production and processing L3 manager
- Now staff at RAL in the UK

06/2015 - 06/2021 | Matteo Cremonesi - Fermilab Postdoc in CMS
- Dark matter searches
- 09/2015 - 09/2019: CMS L3 manager for processing and production
- 12/2016 - 01/2019: Co-lead of CERN openlab/Intel Big Data project
- 01/2019 - present: Co-lead of the Coffea project using Python industry tools for CMS analysis
- Now faculty at Carnegie Mellon University

10/2018 - 04/2024 | Nick Smith - Fermilab Postdoc in CMS
- Higgs precision measurements in b-quark and other final states
- 03/2019 - 12/2019: CMS L3 manager for data management
- 01/2020 - 01/2022: CMS L2 manager for Computing Operations
- 09/2021 - 09/2023: CMS Higgs Combination Sub-Group coordinator
- 01/2022 - 01/2023: U.S. CMS Storage R&D project: Ceph
- Now staff at Fermilab

11/2020 - 09/2023 | Martin Kwok - Fermilab Postdoc in CMS
- HEP-CCE Portable Parallelization Strategies
- Now staff at U Nebraska-Lincoln
I joined the CMS collaboration at the LHC in 2005, focusing my research on searching for New Physics Beyond the Standard Model of Particle Physics, as well as performing precision Standard Model measurements.
I was a founding member of an analysis group composed of researchers from Fermilab, UC San Diego, and UC Santa Barbara, concentrating on final states involving leptons. This approach proved successful: following early publications such as a measurement of the top quark cross section, the group shifted its focus toward new physics and beyond Standard Model processes. We led the WW to dilepton analysis in the CMS Higgs discovery paper and conducted searches for Supersymmetry (SUSY) in same-sign and opposite-sign dilepton channels, as well as single lepton channels. The group continues its efforts in SUSY searches in lepton final states and Standard Model measurements.
I have supervised several Fermilab postdoctoral researchers in pursuit of these research goals:
Together with Jacob Linacre, I focused on exploiting the dilepton signature to search for the pair production of a heavy top-like quark (t’). We continued investigating top quark properties through angular distributions in the dilepton final state. Notably, we were the first to measure the top pair charge asymmetry at the LHC using the dilepton final state to further explore deviations previously observed at the Tevatron. Our work includes LHC Run 1 publications on top pair spin correlations and top quark polarization for the 7 TeV and 8 TeV datasets, as well as top pair charge asymmetry measurements for the 7 TeV and 8 TeV datasets.
From 2015 to 2021, I supervised Fermilab Postdoc Matteo Cremonesi, who established a new dark matter analysis effort at the Fermilab LHC Physics Center (LPC). This work involved searching for dark matter particles in various channels. The first publication presented the search for dark matter in events with energetic, hadronically decaying top quarks and missing transverse momentum using the 13 TeV 2016 dataset from LHC Run 2. The second focused on the search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in the same dataset. Subsequently, the group expanded to other mono-object channels at the LPC.
From 2018 to 2024, I supervised Fermilab Postdoc Nick Smith, who joined the LPC’s Higgs physics efforts. He contributed to the analysis of Higgs decay channels into two bottom quarks and two charm quarks and is currently focusing on effective field theory (EFT)-based analyses.
A. Tumasyan et al., A portrait of the Higgs boson by the CMS experiment ten years after the discovery, Nature. 607 (2022) 60–68, doi:10.1038/s41586-022-04892-x, arXiv:2207.00043 [hep-ex]
A.M. Sirunyan et al., Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton-proton collisions at $\sqrt{s} = 13$ TeV, Eur. Phys. J. C. 79 (2019) 280, doi:10.1140/epjc/s10052-019-6730-7, arXiv:1811.06562 [hep-ex]
A.M. Sirunyan et al., Search for dark matter in events with energetic, hadronically decaying top quarks and missing transverse momentum at $\sqrt{s}=13$ TeV, JHEP. 06 (2018) 027, doi:10.1007/JHEP06(2018)027, arXiv:1801.08427 [hep-ex]
V. Khachatryan et al., Measurements of $t\bar{t}$ spin correlations and top quark polarization using dilepton final states in pp collisions at $\sqrt{s} = 8$ TeV, Phys. Rev. D. 93 (2016) 052007, doi:10.1103/PhysRevD.93.052007, arXiv:1601.01107 [hep-ex]
V. Khachatryan et al., Measurements of $t \bar t$ charge asymmetry using dilepton final states in pp collisions at $\sqrt s=8$ TeV, Phys. Lett. B. 760 (2016) 365–386, doi:10.1016/j.physletb.2016.07.006, arXiv:1603.06221 [hep-ex]
S. Chatrchyan et al., Measurements of $t\bar{t}$ Spin Correlations and Top-Quark Polarization Using Dilepton Final States in $pp$ Collisions at $\sqrt{s}$ = 7 TeV, Phys. Rev. Lett. 112 (2014) 182001, doi:10.1103/PhysRevLett.112.182001, arXiv:1311.3924 [hep-ex]
S. Chatrchyan et al., Measurements of the $t\bar{t}$ Charge Asymmetry Using the Dilepton Decay Channel in pp Collisions at $\sqrt{s} =$ 7 TeV, JHEP. 04 (2014) 191, doi:10.1007/JHEP04(2014)191, arXiv:1402.3803 [hep-ex]
S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, Phys. Lett. B. 716 (2012) 30–61, doi:10.1016/j.physletb.2012.08.021, arXiv:1207.7235 [hep-ex]
A. Apresyan et al., Detector R&D needs for the next generation $e^+e^-$ collider, (2023). http://arxiv.org/abs/2306.13567, arXiv:2306.13567 [hep-ex]
M. Atif et al., Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics, (2023). http://arxiv.org/abs/2306.15869, arXiv:2306.15869 [hep-ex]
B. Bockelman et al., IRIS-HEP Strategic Plan for the Next Phase of Software Upgrades for HL-LHC Physics, (2023). http://arxiv.org/abs/2302.01317, arXiv:2302.01317 [hep-ex]
V.D. Elvira et al., The Future of High Energy Physics Software and Computing, in: Snowmass 2021, 2022, doi:10.2172/1898754, arXiv:2210.05822 [hep-ex]
G. Cerati et al., Snowmass Computational Frontier: Topical Group Report on Experimental Algorithm Parallelization, (2022). http://arxiv.org/abs/2209.07356, arXiv:2209.07356 [hep-ex]
M. Bhattacharya et al., Portability: A Necessary Approach for Future Scientific Software, in: Snowmass 2021, 2022. http://arxiv.org/abs/2203.09945, arXiv:2203.09945 [physics.comp-ph]
D. Berzano et al., HEP Software Foundation Community White Paper Working Group – Data Organization, Management and Access (DOMA), (2018). http://arxiv.org/abs/1812.00761, arXiv:1812.00761 [physics.comp-ph]
L. Bauerdick et al., HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation, (2018). http://arxiv.org/abs/1804.03983, arXiv:1804.03983 [physics.comp-ph]
J. Balcas et al., Automated Network Services for Exascale Data Movement, EPJ Web Conf. 295 (2024) 01009, doi:10.1051/epjconf/202429501009
O. Gutsche et al., The U.S. CMS HL-LHC R&D Strategic Plan, EPJ Web Conf. 295 (2024) 04050, doi:10.1051/epjconf/202429504050, arXiv:2312.00772 [hep-ex]
K.H.M. Kwok et al., Application of performance portability solutions for GPUs and many-core CPUs to track reconstruction kernels, EPJ Web Conf. 295 (2024) 11003, doi:10.1051/epjconf/202429511003, arXiv:2401.14221 [physics.acc-ph]
N. Smith et al., A Ceph S3 Object Data Store for HEP, EPJ Web Conf. 295 (2024) 01003, doi:10.1051/epjconf/202429501003, arXiv:2311.16321 [physics.data-an]
N. Smith et al., Coffea: Columnar Object Framework For Effective Analysis, EPJ Web Conf. 245 (2020) 06012, doi:10.1051/epjconf/202024506012, arXiv:2008.12712 [cs.DC]
M. Cremonesi et al., Using Big Data Technologies for HEP Analysis, EPJ Web Conf. 214 (2019) 06030, doi:10.1051/epjconf/201921406030, arXiv:1901.07143 [cs.DC]
J. Albrecht et al., A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3 (2019) 7, doi:10.1007/s41781-018-0018-8, arXiv:1712.06982 [physics.comp-ph]
O. Gutsche, Scientific Computing and Scientific Software Infrastructure - High Performance Computing (HPC), (2025), Presentation for the Chicagoland Computational Traineeship in High Energy Particle Physics, (Material)
O. Gutsche, Scientific Computing and Scientific Software Infrastructure - Particle Physics Overview, (2025), Presentation for the Chicagoland Computational Traineeship in High Energy Particle Physics, (Material)
O. Gutsche, Computing and Software Infrastructure, (2024), Presentation at the Fermilab-CERN HCP Summer School 2024, (Material)
O. Gutsche, HL-LHC Computing, (2023), Presentation at the USCMS Undergraduate Summer Internship 2023, (Material)
O. Gutsche, (2023), Parallel Session Talk at the 26th International Conference on Computing in High Energy & Nuclear Physics (CHEP2023), (Material)
O. Gutsche, Computing, (2022), Plenary talk given at the DOE/HEP Review of the Energy Frontier Laboratory Research Program, (Material available upon request)
O. Gutsche, Computing, (2022), Lecture given at 17th Hadron Collider Physics Summer School, (Material)
A. Purcell, Oliver Gutsche: Fermilab joins CERN openlab, works on data reduction project with CMS experiment, (2017), Article in CERN openlab News, (Article)
M. May, Oliver Gutsche: A Spark in the dark, (2017), Article in ASCR Discovery, (Article)
M. May, Oliver Gutsche: Open-source software for data from high-energy physics, (2017), Article in Phys.Org, (Article)
Published on: September 1, 2025