Curriculum Vitae


Link to Curriculum Vitae: (PDF)


Managerial Experience

During my career of more than 15 years in scientific computing, I have gained extensive experience managing large scientific projects like the U.S. CMS Software & Computing Operations Program, and working in organizations like the international CMS collaboration at the LHC and the Computational Science and Artificial Intelligence Directorate (CSAID) at Fermilab.

Since 2019, I have been the U.S. CMS Software and Computing Operations Program manager, providing the software and computing infrastructure that U.S. CMS researchers need to maintain physics leadership in CMS. Under my leadership, the program’s annual budget increased from $15M in 2019 to almost $20M in 2024, supporting more than 60 FTE of scientists and technical staff as well as computing hardware deployed at Fermilab and 7 U.S. university sites.

I currently hold a senior management position in Fermilab’s CSAID and have held many management positions in the CMS collaboration.

I am active in recruiting and retaining a diverse workforce by building pipelines through internship programs and by mentoring junior technical staff, junior scientists, and postdoctoral researchers.

Assignments: U.S. CMS Software and Computing Operations Program

03/2019 – present U.S. CMS Software and Computing Operations Program manager
10/2016 – 02/2019 U.S. CMS Software and Computing Operations Deputy Program manager
10/2016 – 02/2019 L2 manager for Software and Support in the U.S. CMS Software and Computing Operations Program Execution Team
03/2014 – 09/2016 L2 manager for Operations in the Software and Computing Operations Program Execution Team

Assignments: Fermi National Accelerator Laboratory - Computational Science and AI Directorate (CSAID)

09/2022 – present Head of the Computing Resource Evolution Strategy Department
09/2022 – present CMS Software & Computing Coordinator
10/2019 – 08/2022 Associate Head of the Scientific Computing Division for CMS
10/2016 – 09/2019 Deputy Head of the Scientific Services Quadrant
10/2014 – 09/2016 Assistant Head of the Scientific Computing Division for Science Operations and Workflows
10/2014 – 12/2014 Interim Department Head of the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant
10/2013 – 09/2014 Deputy Department Head of the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant
09/2012 – 09/2014 CMS Operations Group Leader in the Scientific Data Processing (SDP) Department of the Scientific Services Quadrant

Assignments: CMS Collaboration - Offline & Computing Coordination Area

09/2015 – 09/2019 CMS Offline & Computing Project Focus Area Lead for Infrastructure and Resources
10/2014 – 08/2015 Member of the CMS Offline & Computing Project Management Board
01/2012 – 09/2014 CMS Offline & Computing Project Computing Operations L2 Manager
07/2009 – 12/2011 CMS Offline & Computing Project Data Operations L2 Manager
01/2007 – 07/2009 CMS Offline & Computing Project Release Validation Manager

U.S. CMS Software & Computing Operations Program Manager

In March 2019, I was appointed the U.S. CMS Software and Computing Operations Program manager. The mission of the operations program is to provide the software and computing infrastructure that U.S. CMS researchers need to maintain physics leadership in CMS, and to provide computing resources to the Worldwide LHC Computing Grid (WLCG) as the U.S. contribution to the computing resource needs of the CMS collaboration. The operations program is funded by both the U.S. Department of Energy (DOE) and the U.S. National Science Foundation (NSF). Under my leadership, the program’s annual budget increased from $15M in 2019 to almost $20M in 2024. I lead a team of more than 60 FTE of scientists and technical staff from 20 U.S. institutes and national laboratories, providing staffing for new initiatives and succession planning through hires by university PIs, and being directly involved in new hires at Fermilab. I provide technical guidance and oversee the funding and operation of computing hardware at the U.S. CMS regional facility at Fermilab and 7 U.S. university facilities at Caltech, the University of Florida, MIT, the University of Nebraska-Lincoln, Purdue University, UC San Diego, and the University of Wisconsin-Madison. The program funds effort to administer the sites, maintain the software and computing infrastructure, and conduct strategic R&D projects. Among these projects are the CMS software framework CMSSW, the workflow software and the distributed alignment and calibration access software, as well as contributions to community software projects like ROOT and Rucio. In 2019, I started introducing aspects of formal project management into the operations program and modeled a regularly updated risk assessment approach on the operational risk registry of Fermilab. This allows me to communicate with program management and the funding agencies in a timely and accurate manner through regular reports and calls as well as biannual in-person meetings and biennial external reviews.

Computational Science and Artificial Intelligence Directorate (CSAID)

In September 2019, I was appointed Associate Head of the Scientific Computing Division of Fermilab for CMS, a role that has in the meantime transitioned into that of CMS Software & Computing Coordinator in the Computational Science and Artificial Intelligence Directorate (CSAID). In my role as U.S. CMS Software and Computing Operations Program manager, I am the general contact for all aspects of CMS software and computing in the Directorate and am involved in budgetary and personnel planning and major strategy decisions.

From October 2016 to September 2019, I served as Deputy Head of the Scientific Services Quadrant. This quadrant was the user-facing arm of the Scientific Computing Division and developed computing infrastructure software components for data and workload management for the whole scientific program of Fermilab, supporting neutrino, muon, and astroparticle experiments as well as CMS.

From September 2014 to September 2016, I served as Assistant Scientific Computing Division Head for Science Operations and Workflows in the Scientific Computing Division of Fermilab. I was responsible for the delivery of scientific computing services to all Fermilab experiments, including High Energy Physics experiments (e.g. CMS), Neutrino Physics experiments (e.g. NOvA, MINERvA), Intensity Frontier experiments (e.g. Mu2e, Muon g-2) and Astroparticle Physics experiments (e.g. DES). As a member of the senior management team, I developed strategic plans to evolve the infrastructure and operational procedures. For example, I developed a new storage strategy that simplified the operation and usage of the more than 30 PB of disk space at Fermilab dedicated to all experiments except CMS. I was also responsible for maintaining the computing strategy as part of the Laboratory Strategy Effort and reported to the laboratory directorate.

CMS Offline & Computing Coordination Area

The CMS collaboration appointed me Focus Area Lead for Services and Infrastructure in the CMS Software and Computing project from 2015 to 2019. I coordinated the efforts of the worldwide submission infrastructure, innovative new ways of using resources at commercial clouds and supercomputing centers, and the development of computing infrastructure services like data management and workflow management systems.

The CMS collaboration appointed me lead of the Data Operations Project in 2009. Drawing on my deep involvement in physics analysis and my expertise in computing, I was responsible for the timely delivery of all data and MC samples for analysis, a significant contribution to the overall success of the experiment. In 2012, CMS extended my responsibilities and appointed me to lead the entire CMS Computing Operations Project, adding the care of over 70 computing centers distributed all over the world and all central computing services of CMS. I supervised the contributions of more than 60 scientists and engineers to the Computing Operations Project worldwide. The team oversaw the readiness of all computing facilities and monitored central workflows, analysis activities, and the transfers of data and MC samples between the sites. After the Higgs discovery in 2012, the CMS collaboration awarded me the CMS Young Researcher Prize for enabling the Higgs discovery with computing. This award is given to 10 collaboration members every year.

Workplace Culture

Equity, Diversity, Inclusion and Accessibility principles are the cornerstones of building and maintaining teams of highly motivated scientists and professionals. The recruitment process starts a lot earlier than forming a candidate pool for a hire. In the early years of the LHC, I created the U.S. CMS Software & Computing Operations Program Internship Program. Junior physicists and computing engineers were invited to Fermilab to spend 1-2 years working inside the international collaboration, contributing to the day-to-day operations of the CMS software and computing infrastructure. Many continued in careers in academia and industry, and some I was able to hire at Fermilab and other institutes to continue contributing to software and computing. Through my mentorship, members of the internship program started careers at the Pittsburgh Supercomputing Center and Google or continued with graduate programs at Caltech, ETH Zurich or the University of Cambridge in the UK.

More recently, I have been working with my teams to provide projects and mentoring to students of the U.S. CMS Summer Undergraduate Research Internship Program (PURSUE) to build the scientific software and computing workforce of the future.

Leadership Experience

Using my extensive knowledge of scientific software and computing infrastructure, I contribute to the worldwide community efforts to plan the software and computing infrastructure for the future of science, with emphasis on the High Luminosity LHC (HL-LHC). The goal is to provide, through strategic leadership and vision, the means to enable groundbreaking scientific discoveries.

For the HL-LHC, I co-authored a strategic plan for the U.S. CMS Software & Computing Operations Program outlining four grand challenges that need to be solved: Modernizing Physics Software and Improving Algorithms; Building Infrastructure for Exabyte-Scale Datasets; Transforming the Scientific Data Analysis Process; and Transition from R&D to Operations (arXiv:2312.00772). To realize the strategic plan, I created a Research Initiative in the U.S. CMS Software & Computing Operations program to provide partial funding for postdoctoral researchers to investigate novel and forward-looking software and computing solutions for the four grand challenges of the HL-LHC. The strategic plan also incorporates previous contributions to a variety of community planning exercises like the Roadmap for HEP Software and Computing R&D for the 2020s. Recently, I was appointed by the CMS Collaboration Board to co-lead the Sub-Committee for Offline & Computing for HL-LHC. I asked for the creation of this sub-committee to introduce coordination and effort planning into the Offline & Computing coordination area, a novelty for the CMS collaboration, where such structures previously existed only for detector projects in the form of their institutional boards.

For Fermilab, I created the Computing Resources Evolution STrategy (CREST) process. The goal is to document a strategy for the evolution of Fermilab’s computing resources in light of current experiment needs and the future anticipated needs of DUNE and the HL-LHC, with a planning horizon of 10 years.

For the DOE Center for Computational Excellence (HEP-CCE), I co-wrote the proposal to execute four sub-projects: (1) Portable Parallelization Strategies (PPS), to develop algorithmic code once and compile it transparently for the various CPU and accelerator-based architectures; (2) Fine-Grained I/O and Storage (IOS), to optimize data structures on disk and in memory and to optimize data access on large shared storage systems at HPC facilities; (3) Event Generators (EG), to optimize HEP theory code for execution on HPC systems; and (4) Complex Workflows (CW), to orchestrate workflows whose steps need different hardware platforms. I was appointed technical lead of the PPS sub-project; the funding period of this proposal has since concluded.

I am recognized nationally and internationally through membership in various committees, through my selection to co-lead the Computational Frontier of the Snowmass 2021 process, and through my membership of the editorial board of the journal “Computing and Software for Big Science”.

High-Luminosity LHC

Starting in 2029, the HL-LHC will produce many times the amount of data of the current LHC running periods. In addition, the collisions and the corresponding simulations will be many times more complex than today. I am an integral part of the community planning process, and my input was documented, for example, in the Roadmap for HEP Software and Computing R&D for the 2020s. In addition, I was co-editor of the HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation. My expertise in the community was acknowledged in 2020 when I was asked to co-lead the Computational Frontier of the Snowmass 2021 process. Because the Snowmass process was delayed by one year, I needed to resign in 2022 due to other higher-priority commitments. I oversaw the process until a successor was appointed.

In 2023, the CMS collaboration embarked on documenting the current state of the software and computing preparation for the HL-LHC by starting the creation of a Conceptual Design Report (CDR) for Offline & Computing. The CDR is planned to be published by the end of 2024. I was asked by the collaboration to co-lead the computing model chapter, the central piece of the conceptual design, including resource needs projections and general guidance on the processes and workflows that will govern software and computing in the HL-LHC era. The capacity planning for the HL-LHC, and especially the computing model, will outline the integration of many traditional and new forms of computing infrastructure into a seamless, globally integrated system.

In February 2024, I was appointed by the CMS Collaboration Board to co-lead the Sub-Committee for Offline & Computing for HL-LHC. This sub-committee is a novelty for the CMS collaboration, as it provides a home for discussing and coordinating the effort needs of the Offline & Computing Coordination area in CMS. Detector components have a projectized structure with defined contributions from institutes all over the world, which also includes the operation and maintenance of these detectors after completion of construction. Offline & Computing does not have a similar projectized structure, except for computing hardware contributions through WLCG. The CB Sub-Committee is my approach to introducing coordination and planning of Offline & Computing effort into the CMS collaboration.

To plan the R&D needed for HL-LHC, I co-authored a strategic plan for the U.S. CMS Software & Computing Operations Program outlining four grand challenges: Modernizing Physics Software and Improving Algorithms; Building Infrastructure for Exabyte-Scale Datasets; Transforming the Scientific Data Analysis Process; and Transition from R&D to Operations. The plan is updated yearly by the management team of the operations program. I also documented the plan in the proceedings for CHEP 2023 (arXiv:2312.00772).

Under my leadership, the U.S. CMS Software & Computing Operations program created a Research Initiative to provide partial funding for postdoctoral researchers to investigate novel and forward-looking software and computing solutions for the four grand challenges of the HL-LHC. The R&D initiative has been very successful in engaging new members of the collaboration and in enlarging the solution phase space for the HL-LHC R&D challenges.

Fermilab’s Computing Resources Evolution STrategy (CREST)

In March 2023, I created Fermilab’s Computing Resources Evolution STrategy (CREST) process. The goal is to document a strategy for the evolution of Fermilab’s computing resources in light of current experiment needs and the future anticipated needs of DUNE and the HL-LHC. The process enables the staff of the Scientific Computing Systems and Services Division (SCSS) in CSAID to document a strategy for how to provide for these computing needs with a planning horizon of 10 years. It also has the goal of enabling operations experts to develop a strategy for the future that they own and execute. I am leading the creation of the first version of the plan, which will be widely discussed with the scientific community at Fermilab. The plan will be updated yearly to account for changes in the scientific program and the technical landscape.

DOE Center for Computational Excellence

In January 2020, the DOE Center for Computational Excellence (HEP-CCE) was funded for a 3-year project to enable HEP experiments like ATLAS, CMS, DUNE, LSST, DESI and others to efficiently use HPC installations in the U.S. at the leadership-class facilities at Argonne and Oak Ridge National Laboratories and at NERSC at Lawrence Berkeley National Laboratory. I co-wrote the proposal to execute four sub-projects: (1) Portable Parallelization Strategies (PPS), to develop algorithmic code once and compile it transparently for the various CPU and accelerator-based architectures; (2) Fine-Grained I/O and Storage (IOS), to optimize data structures on disk and in memory and to optimize data access on large shared storage systems at HPC facilities; (3) Event Generators (EG), to optimize HEP theory code for execution on HPC systems; and (4) Complex Workflows (CW), to orchestrate workflows whose steps need different hardware platforms. I was appointed technical lead of the PPS sub-project and am also the point of contact for the CMS experiment. With my postdoc Martin Kwok and staff members from the DOE laboratories BNL, LBNL and Fermilab, I worked on portability solutions and their application to HEP software, which enabled CMS to make an informed decision about which portability solution to use in Run 3.

National and International Recognition

I am recognized nationally and internationally for my leadership in software and computing through the following community roles, editorial board memberships, outreach activities, and mentoring opportunities:

  • In 2020, I was asked to co-chair the Computational Frontier in the Snowmass 2021 process. Because the Snowmass process was delayed by a year, I needed to resign in 2020 due to other high-priority commitments. I oversaw the process until a successor was appointed.
  • In 2018, I was asked to be Co-Editor of the American Physical Society (APS) Division of Particles and Fields (DPF) white paper as input to the European Particle Physics Strategy Update 2018 – 2020, responsible for the computing section.
  • In 2018, I was asked to organize the Computing & Machine Learning parallel session at The CPAD Instrumentation Frontier Workshop 2018 “New Technologies for Discovery IV”.
  • In 2017, I was asked to join the editorial board of the journal “Computing and Software for Big Science”, published by Springer.
  • Activities in public outreach:
    • Regular participation in Fermilab’s Ask-A-Scientist program as a lecturer, answering questions from the general public.
    • Tour guide for Fermilab’s Saturday Morning Physics program, especially of the computing facilities.
    • Regular question-and-answer sessions for high school classes visiting Fermilab.
  • Active in mentorship programs:
    • Since January 2020, I have been mentoring a postdoc in the Fermilab Neutrino Division in the context of the inter-divisional mentoring program of Fermilab.
    • Since April 2022, I have been mentoring a postdoc from UCSD in the context of the U.S. CMS collaboration mentoring program. The postdoc was appointed to a faculty position at UFL in 2023, and I continue to mentor them.

and I serve or have served on the following committees:

Technical Experience

I have deep knowledge of planning, developing, maintaining and operating distributed computing infrastructures that provide access to several hundred thousand computing cores and many hundreds of petabytes of disk space. I am versed in efficiently storing and retrieving data from permanent storage on tape. I am intimately familiar with both high-throughput computing (HTC) and high-performance computing (HPC), with scientific grid sites, academic and commercial clouds, and the largest supercomputers at High Performance Computing centers in the U.S. and across the world. This infrastructure executes scientific software consisting of millions of lines of C++ and Python code that is needed to extract physics results. I am an expert in object-oriented software development, statistical data analysis methods and Monte Carlo simulation techniques as well as various optimization and machine learning techniques.

The technical aspects of my work are closely connected to physics research, as they enable the analysis of particle physics detector data and simulations as a basis to extract physics results. My active involvement in HEP science allows me to guide the science community to benefit from the latest computing developments, bridging the worlds of science and scientific computing.

My current R&D projects aim to transform the utilization of wide-area network connections by managing data movements through dynamic SDN channels (ESnet SENSE/Rucio project), to enable CMS to utilize the latest supercomputers with emphasis on the GPU-based DOE leadership-class facilities, and to change the end-user analysis paradigm by adopting the industry-based pythonic analysis ecosystem and employing columnar analysis techniques.

Managed networks

Distributed data-intensive computing relies on very fast wide-area network connectivity to move data to where it is needed, either for processing workflows or for end-user analysis. I invest in networking R&D both by being part of the DOE ESnet requirements reviews and by contributing actively with my team to networking R&D topics. Of special interest are dynamically managed network paths. The ESnet SENSE/Rucio project works on a solution in which the data management system Rucio can dynamically create network paths between storage endpoints, and through this guarantee quality of service, improve predictability, and reduce contention of organized data flows.
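
To make the data-flow side of this concrete, the sketch below shows how an organized transfer enters Rucio as a replication rule; the SENSE integration that maps such flows onto dynamically provisioned network paths happens inside the data management and transfer infrastructure and is not shown here. This is a minimal illustration assuming a configured Rucio client environment; the scope, dataset name, storage element, and activity label are hypothetical.

```python
# Minimal sketch: an organized data flow enters Rucio as a replication rule.
# Assumes a configured Rucio client (rucio.cfg and valid credentials);
# scope, dataset name, RSE, and activity label are hypothetical.
from rucio.client import Client

client = Client()
rule_ids = client.add_replication_rule(
    dids=[{"scope": "cms", "name": "/Example/Dataset/NANOAOD"}],  # hypothetical dataset
    copies=1,
    rse_expression="T2_US_Example",  # hypothetical destination storage element
    lifetime=7 * 24 * 3600,          # keep the replica for one week
    activity="Analysis Input",       # label used for scheduling and accounting
)
print("Created rule(s):", rule_ids)
```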

Enabling HPC utilization

High Performance Computing centers in the U.S., especially the DOE leadership-class facilities at Argonne, Oak Ridge and NERSC, provide extraordinary computing capabilities and could open new avenues for scientific research. These facilities are designed for the largest computationally intensive workflows and optimize for performance per watt. Currently, many facilities use GPUs to achieve the best performance while keeping power consumption low. HEP experimental software and infrastructure therefore need to transition to being able to use GPUs, a transition as big as or even bigger than the transition from Fortran to object-oriented C++. I am heavily invested in making this transition succeed and consider it crucial for the future science harvest.

I invest both in the software and in the infrastructure to enable this transition. The HEP-CCE project I conceptualized paves the way for portability libraries that allow developers to write an algorithm once and then compile and execute it on many different GPUs and CPUs, reducing the overhead of software development. Under my guidance, the U.S. CMS Software & Computing Operations Program supports R&D on tracking software on GPUs and, through the R&D initiative, on many more algorithms and their GPU transition. Through the Fermilab HEPCloud project, I am making many different HPC centers accessible to CMS and the Fermilab community, provided the experiments have obtained allocations. Together with the right software, this should provide a seamless integration for the future.
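
The actual HEP-CCE work targets C++ portability layers (for example Kokkos, alpaka, or SYCL). Purely as a loose Python analogy of the write-once idea, and not as the tooling used in that project, the sketch below runs the same array code on a CPU backend (NumPy) or a GPU backend (CuPy), depending on what is available.

```python
# Loose analogy of "write the algorithm once, run it on CPU or GPU":
# the same array expressions execute with NumPy (CPU) or CuPy (GPU).
# This is NOT the C++ portability tooling used in HEP-CCE, only an illustration.
import numpy as np

try:
    import cupy as cp  # GPU backend, if CuPy and a CUDA-capable GPU are available
    xp = cp
except ImportError:
    xp = np            # fall back to the CPU backend

def transverse_momentum(px, py):
    """Compute pT from momentum components with whichever backend is active."""
    px = xp.asarray(px)
    py = xp.asarray(py)
    return xp.sqrt(px * px + py * py)

print(transverse_momentum([3.0, 5.0], [4.0, 12.0]))  # -> [ 5. 13.]
```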

End-User Analysis

My other research interest in computing infrastructure is the question of whether analysis in HEP can be conducted more efficiently using tools developed and used by industry. Instead of exclusively employing the ROOT toolkit, which was developed entirely by the HEP community, I am exploring industry toolkits like Apache Spark and similar technologies. As a first step, I started a research group spanning researchers from Fermilab, CERN and the universities of Princeton, Padova and Vanderbilt. The CMS Big Data Project worked very closely with industry, most recently in a project with Intel in the context of CERN openlab that concluded in January 2019. To realize this project, Fermilab joined CERN openlab, and I organized the DOE approval process with the help of the Fermilab Office of Partnership and Technology Transfer and the Fermilab Legal Office. I also managed a Laboratory Directed Research and Development (LDRD) project to develop innovative technology for Big Data delivery to array-based analysis code, the Striped Data Server for Scalable Parallel Data Analysis. The project concluded successfully in January 2019 and produced a prototype. These projects inspired many researchers and software experts and led to various follow-on projects, like the user front-end to columnar data tools COFFEA, and to supporting analysis activities funded through IRIS-HEP.
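
As a minimal illustration of the columnar approach at the heart of these projects, the sketch below applies an array-at-a-time selection with uproot and awkward instead of an explicit event loop; the file, tree, and branch names are hypothetical.

```python
# Minimal sketch of a columnar selection with the pythonic analysis ecosystem
# (uproot + awkward); file, tree, and branch names are hypothetical.
import uproot
import awkward as ak

events = uproot.open("nanoaod_example.root")["Events"]        # hypothetical input file
muons = events.arrays(["Muon_pt", "Muon_eta"], library="ak")  # jagged per-event arrays

# Array-at-a-time operations replace the per-event loop:
good = (muons["Muon_pt"] > 25) & (abs(muons["Muon_eta"]) < 2.4)  # per-muon mask
n_good = ak.sum(good, axis=1)                                     # selected muons per event
selected = muons[n_good >= 2]                                     # events with >= 2 good muons
print(len(selected), "events pass the selection")
```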

These developments are culminating in the creation of the analysis facility concept, based on the pythonic analysis ecosystem and graph-scheduling tools like Dask. The analysis facility concept is now at the heart of the U.S. CMS Software & Computing Operations Program strategy for the HL-LHC.
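
A minimal sketch of what this looks like in practice, assuming a Dask cluster is available (the file list is hypothetical): the per-file columnar work from the sketch above becomes tasks in a Dask graph that the facility's scheduler distributes over its workers.

```python
# Minimal sketch of scaling a columnar analysis with Dask, as in the
# analysis facility concept; the file list is hypothetical.
from dask.distributed import Client

def process_file(filename):
    # Placeholder for the per-file columnar selection sketched above;
    # in practice this would return a count or a partial histogram.
    return 0

client = Client()  # local test cluster; at an analysis facility this would
                   # connect to the facility's Dask scheduler instead
futures = client.map(process_file, ["file1.root", "file2.root"])  # build the task graph
results = client.gather(futures)                                  # collect results from workers
print(results)
```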

I am not alone in pursuing these technologies. Junior scientists in particular are drawn to them because they more directly teach transferable skills that can be used in industry.

Research Experience

I am a particle physicist at heart, and my original motivation in the field came from conducting leading-edge research for New Physics Beyond the Standard Model of Particle Physics as well as precision Standard Model measurements.

I have multiple years of experience in analyzing high-energy collisions at different particle colliders using a multitude of different techniques. I have published many papers in leading journals and am currently a member of the CMS collaboration, which operates one of the four detectors at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The CMS collaboration spans the whole globe and encompasses more than 3000 physicists from more than 50 countries, of which over 1000 are students. In my studies at the LHC, I have led searches for evidence of physics beyond the Standard Model using top quarks, and contributed to searches for Supersymmetry and Dark Matter. One of my most notable publications is the Observation of the Higgs Boson in 2012, where my work in scientific computing had significant impact.

Education

2001 – 2005 University of Hamburg, Doctor of Natural Sciences, Hamburg, Germany
Thesis title: Measurement of beauty quark cross sections in photoproduction with the ZEUS experiment at the electron proton collider HERA (thesis)
Advisors: Prof. Dr. Robert Klanner, Dr. Achim Geiser
1996 – 2001 University of Hamburg, Diploma in Physics, Hamburg, Germany
Thesis title: Development of the trigger algorithm for the MONOLITH experiment (thesis)
Advisors: Prof. Dr. Robert Klanner, Dr. Achim Geiser

Research Positions

09/2019 – Present Senior Scientist Fermi National Accelerator Laboratory (Fermilab)
09/2014 – 08/2019 Scientist Fermi National Accelerator Laboratory (Fermilab)
06/2009 – 08/2014 Application Physicist I Fermi National Accelerator Laboratory (Fermilab)
06/2005 – 05/2009 Research Associate Fermi National Accelerator Laboratory (Fermilab)
09/2001 – 02/2005 Doctoral Candidate Deutsches Elektronen Synchrotron (DESY)

Supervision

03/2011-02/2016 Jacob Linacre - Fermilab Postdoc in CMS
- Searches for SM and BSM physics with top quarks
- 01/2014-12/2015: CMS Physics top quark properties subgroup convener
- 03/2011-12/2013: CMS Tier-1 production and processing L3 manager
Now staff at RAL in the UK
06/2015-06/2021 Matteo Cremonesi - Fermilab Postdoc in CMS
- Dark matter searches
- 09/2015-09/2019: CMS L3 manager for processing and production
- 12/2016-01/2019: Co-lead of CERN openlab/Intel Big Data project
- 01/2019-present: Co-lead of Coffea project using python industry tools for CMS analysis
Now faculty at Carnegie Mellon University
10/2018-present Nick Smith - Fermilab Postdoc in CMS
- Higgs precision measurements in b-quark and other final states
- 03/2019-12/2019: CMS L3 manager for data management
- 01/2020-01/2022: CMS L2 manager for Computing Operations
- 09/2021-09/2023: CMS Higgs Combination Sub-Group coordinator
- 01/2022-01/2023: U.S. CMS Storage R&D project: Ceph
Now offered a staff position at Fermilab
11/2020-present Martin Kwok - Fermilab Postdoc in CMS
- HEP-CCE Portable Parallelization Strategies

CMS collaboration: 2005 – Present

I joined the CMS collaboration at the LHC in 2005 and my research focus has been the search for New Physics Beyond the Standard Model of Particle Physics as well as precision Standard Model measurements.

I was a founding member of an analysis group with members from Fermilab/UCSD/UCSB, focusing on final states with leptons. The approach proved to be successful; after early publications such as a measurement of the top quark cross section, the focus shifted to new physics and beyond-the-Standard-Model processes. We were leaders of the WW to dilepton analysis in the CMS Higgs discovery paper, and of searches for SUSY in same-sign and opposite-sign dilepton as well as single-lepton channels. The group currently continues the searches for SUSY in lepton final states as well as work on Standard Model processes.

I have been supervising several Fermilab postdoctoral researchers helping me to pursue my research interests.

  • Together with Jacob Linacre, I concentrated on exploiting the dilepton signature to search for pair production of a heavy top-like quark (t’). I continued studying the properties of top quarks, exploiting angular distributions of the dilepton final state. We were the first to use the dilepton final state to measure the top pair charge asymmetry at the LHC, to further investigate the deviations seen at the Tevatron. We published LHC Run 1 papers on top pair spin correlations and top quark polarization for the 7 TeV and 8 TeV datasets, as well as on the top pair charge asymmetry for both datasets.
  • From 2015 to 2021, I supervised Fermilab postdoc Matteo Cremonesi. He created a new dark matter analysis effort at the Fermilab LHC Physics Center (LPC), searching for dark matter particles in various channels. The first publication presented the search for dark matter in events with energetic, hadronically decaying top quarks and missing transverse momentum in the 13 TeV 2016 dataset of LHC Run 2. The second publication describes the search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in the same dataset. The group then concentrated on other mono-object channels with an expanding effort at the LPC.
  • Since 2018, I have been supervising Fermilab postdoc Nick Smith. He joined the Higgs efforts of the LPC and is contributing to the analysis of the Higgs boson decay channels into two bottom quarks and into two charm quarks. Nowadays he is concentrating on EFT-based analyses.

Physics Publications with Major Personal Contributions

A. Tumasyan et al., A portrait of the Higgs boson by the CMS experiment ten years after the discovery, Nature 607 (2022) 60–68, doi:10.1038/s41586-022-04892-x, arXiv:2207.00043 [hep-ex]

A.M. Sirunyan et al., Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton-proton collisions at $\sqrt{s}=13$ TeV, Eur. Phys. J. C. 79 (2019) 280, doi:10.1140/epjc/s10052-019-6730-7, arXiv:1811.06562 [hep-ex]

A.M. Sirunyan et al., Search for dark matter in events with energetic, hadronically decaying top quarks and missing transverse momentum at $\sqrt{s}=13$ TeV, JHEP. 06 (2018) 027, doi:10.1007/JHEP06(2018)027, arXiv:1801.08427 [hep-ex]

V. Khachatryan et al., Measurements of $t\bar{t}$ spin correlations and top quark polarization using dilepton final states in pp collisions at $\sqrt{s} =$ 8 TeV, Phys. Rev. D. 93 (2016) 052007, doi:10.1103/PhysRevD.93.052007, arXiv:1601.01107 [hep-ex]

V. Khachatryan et al., Measurements of $t \bar t$ charge asymmetry using dilepton final states in pp collisions at $\sqrt s=8$ TeV, Phys. Lett. B. 760 (2016) 365–386, doi:10.1016/j.physletb.2016.07.006, arXiv:1603.06221 [hep-ex]

S. Chatrchyan et al., Measurements of $t\bar{t}$ Spin Correlations and Top-Quark Polarization Using Dilepton Final States in $pp$ Collisions at $\sqrt{s}$ = 7 TeV, Phys. Rev. Lett. 112 (2014) 182001, doi:10.1103/PhysRevLett.112.182001, arXiv:1311.3924 [hep-ex]

S. Chatrchyan et al., Measurements of the $t\bar{t}$ charge asymmetry using the dilepton decay channel in pp collisions at $\sqrt{s} =$ 7 TeV, JHEP. 04 (2014) 191, doi:10.1007/JHEP04(2014)191, arXiv:1402.3803 [hep-ex]

S. Chatrchyan et al., Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC, Phys. Lett. B. 716 (2012) 30–61, doi:10.1016/j.physletb.2012.08.021, arXiv:1207.7235 [hep-ex]

Computing Publications with Major Personal Contributions

K.H.M. Kwok et al., Application of performance portability solutions for GPUs and many-core CPUs to track reconstruction kernels, in: 26th International Conference on Computing in High Energy & Nuclear Physics, 2024. http://arxiv.org/abs/2401.14221, arXiv:2401.14221 [physics.acc-ph]

O. Gutsche et al., The U.S. CMS HL-LHC R&D Strategic Plan, in: 26th International Conference on Computing in High Energy & Nuclear Physics, 2023. http://arxiv.org/abs/2312.00772, arXiv:2312.00772 [hep-ex]

N. Smith et al., A Ceph S3 Object Data Store for HEP, in: 26th International Conference on Computing in High Energy & Nuclear Physics, 2023. http://arxiv.org/abs/2311.16321, arXiv:2311.16321 [physics.data-an]

A. Apresyan et al., Detector R&D needs for the next generation $e^+e^-$ collider, (2023). http://arxiv.org/abs/2306.13567, arXiv:2306.13567 [hep-ex]

M. Atif et al., Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics, (2023). http://arxiv.org/abs/2306.15869, arXiv:2306.15869 [hep-ex]

B. Bockelman et al., IRIS-HEP Strategic Plan for the Next Phase of Software Upgrades for HL-LHC Physics, (2023). http://arxiv.org/abs/2302.01317, arXiv:2302.01317 [hep-ex]

V.D. Elvira et al., The Future of High Energy Physics Software and Computing, in: Snowmass 2021, 2022. http://arxiv.org/abs/2210.05822, arXiv:2210.05822 [hep-ex]

G. Cerati et al., Snowmass Computational Frontier: Topical Group Report on Experimental Algorithm Parallelization, (2022). http://arxiv.org/abs/2209.07356, arXiv:2209.07356 [hep-ex]

M. Bhattacharya et al., Portability: A Necessary Approach for Future Scientific Software, in: Snowmass 2021, 2022. http://arxiv.org/abs/2203.09945, arXiv:2203.09945 [physics.comp-ph]

D. Berzano et al., HEP Software Foundation Community White Paper Working Group – Data Organization, Management and Access (DOMA), (2018). http://arxiv.org/abs/1812.00761, arXiv:1812.00761 [physics.comp-ph]

L. Bauerdick et al., HEP Software Foundation Community White Paper Working Group - Data Analysis and Interpretation, (2018). http://arxiv.org/abs/1804.03983, arXiv:1804.03983 [physics.comp-ph]

N. Smith et al., Coffea: Columnar Object Framework For Effective Analysis, EPJ Web Conf. 245 (2020) 06012, doi:10.1051/epjconf/202024506012, arXiv:2008.12712 [cs.DC]

M. Cremonesi et al., Using Big Data Technologies for HEP Analysis, EPJ Web Conf. 214 (2019) 06030, doi:10.1051/epjconf/201921406030, arXiv:1901.07143 [cs.DC]

J. Albrecht et al., A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3 (2019) 7, doi:10.1007/s41781-018-0018-8, arXiv:1712.06982 [physics.comp-ph]

List of Presentations and Talks

O. Gutsche, HL-LHC Computing, (2023), Presentation at the USCMS Undergraduate Summer Internship 2023, (Material)

O. Gutsche, (2023), Parallel Session Talk at the 26th International Conference on Computing in High Energy & Nuclear Physics (CHEP2023), (Material)

O. Gutsche, Computing, (2022), Plenary talk given at the DOE/HEP Review of the Energy Frontier Laboratory Research Program, (Material available upon request)

O. Gutsche, Computing, (2022), Lecture given at 17th Hadron Collider Physics Summer School, (Material)

List of Articles

A. Purcell, Oliver Gutsche: Fermilab joins CERN openlab, works on data reduction project with CMS experiment, (2017), Article in CERN openlab News, (Article)

M. May, Oliver Gutsche: A Spark in the dark, (2017), Article in ASCR Discovery, (Article)

M. May, Oliver Gutsche: Open-source software for data from high-energy physics, (2017), Article in Phys.Org, (Article)


  • Full List of Physics Publications with Major Personal Contributions can be found here.
  • Full List of Computing Publications with Major Personal Contributions can be found here.
  • Full List of Publications from all Collaborations and Experiments can be found here.
  • Full List of Presentations and Talks can be found here.
  • Full List of Articles can be found here.

Published on: 26 April 2024

