Link to Resume: (PDF)
Over a career of more than 15 years in scientific computing, I have gained extensive experience managing large scientific projects such as the U.S. CMS Software & Computing Operations Program, and working in organizations such as the international CMS collaboration at the LHC and the Computational Science and Artificial Intelligence Directorate (CSAID) at Fermilab.
Since 2019, I have been the manager of the U.S. CMS Software & Computing Operations Program, which provides the software and computing infrastructure for U.S. CMS researchers to maintain physics leadership in CMS. Under my leadership, the program’s annual budget grew from 15 million U.S. dollars in 2019 to almost 20 million U.S. dollars in 2024, supporting more than 60 FTEs of scientists and technical staff as well as computing hardware deployed at Fermilab and seven U.S. university sites.
I currently hold a senior management position in Fermilab’s CSAID and have held many management positions in the CMS collaboration.
I am active in recruiting and retaining a diverse workforce by building career pipelines through internship programs and by mentoring junior technical staff, junior scientists, and postdoctoral researchers.
Drawing on my extensive knowledge of scientific software and computing infrastructure, I contribute to worldwide community efforts to plan the software and computing infrastructure for the future of science, with emphasis on the High-Luminosity LHC (HL-LHC). The goal is to enable groundbreaking scientific discoveries through strategic leadership and vision.
For the HL-LHC, I co-authored a strategic plan for the U.S. CMS Software & Computing Operations Program outlining four grand challenges that need to be solved: Modernizing Physics Software and Improving Algorithms; Building Infrastructure for Exabyte-Scale Datasets; Transforming the Scientific Data Analysis Process; and Transitioning from R&D to Operations (arXiv:2312.00772). To realize the strategic plan, I created a Research Initiative within the program that provides partial funding for postdoctoral researchers to investigate novel, forward-looking software and computing solutions for the four grand challenges. The strategic plan builds on previous contributions to a variety of community planning exercises, such as the Roadmap for HEP Software and Computing R&D for the 2020s. Recently, I was appointed the CMS Collaboration Board co-lead of the Sub-Committee for Offline & Computing for HL-LHC. I proposed the creation of this sub-committee to introduce coordination and effort planning into the Offline & Computing coordination area, a novelty for the CMS collaboration, where such structures previously existed only for detector projects in the form of their institutional boards.
For Fermilab, I created the Computing Resources Evolution STrategy (CREST) process. Its goal is to document a strategy for the evolution of Fermilab’s computing resources in light of current experiment needs and the anticipated future needs of DUNE and the HL-LHC, with a planning horizon of 10 years.
For the DOE High Energy Physics Center for Computational Excellence (HEP-CCE), I co-wrote the proposal to execute four sub-projects: (1) Portable Parallelization Strategies (PPS), to develop algorithmic code once and compile it transparently for the various CPU- and accelerator-based architectures; (2) Fine-Grained I/O and Storage (IOS), to optimize data structures on disk and in memory and to optimize data access on large shared storage systems at HPC centers; (3) Event Generators (EG), to optimize HEP theory code for execution on HPC systems; and (4) Complex Workflows (CW), to orchestrate workflows whose steps require different hardware platforms. I was appointed technical lead of the PPS sub-project; the funding period of this proposal has since concluded.
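The core idea of the PPS sub-project is to write a computational kernel once and run it on different hardware backends. As a simplified illustration of that single-source idea (my own Python sketch, not HEP-CCE code; the project itself evaluated C++ layers such as Kokkos, SYCL, and alpaka), the kernel below is written once against a generic array module and runs unchanged on NumPy (CPU) or, if available, CuPy (GPU):

    import numpy as np

    # Optional GPU backend: fall back to CPU-only if CuPy is not installed.
    backends = {"cpu": np}
    try:
        import cupy as cp
        backends["gpu"] = cp
    except ImportError:
        pass

    def transverse_momentum(xp, px, py):
        # Kernel written once against a generic array module 'xp';
        # pT = sqrt(px^2 + py^2), a common quantity in HEP reconstruction.
        return xp.sqrt(px * px + py * py)

    for name, xp in backends.items():
        px = xp.asarray([1.0, 3.0, 0.5])
        py = xp.asarray([2.0, 4.0, 0.5])
        print(name, transverse_momentum(xp, px, py))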
I am recognized nationally and internationally through membership in various committees, my selection to co-lead the Computational Frontier of the Snowmass 2021 process, and my appointment to the editorial board of the journal Computing and Software for Big Science.
I have deep knowledge of planning, developing, maintaining, and operating distributed computing infrastructures that provide access to several hundred thousand computing cores and many hundreds of petabytes of disk space. I am versed in efficiently storing and retrieving data from permanent storage on tape. I am intimately familiar with both high-throughput computing (HTC) and high-performance computing (HPC), with scientific grid sites, academic and commercial clouds, and the largest supercomputers at HPC centers in the U.S. and across the world. This infrastructure executes scientific software consisting of millions of lines of C++ and Python code that is needed to extract physics results. I am an expert in object-oriented software development, statistical data analysis methods, and Monte Carlo simulation techniques, as well as various optimization and machine learning techniques.
The technical aspects of my work are closely connected to physics research: they enable the analysis of particle physics detector data and simulations, which form the basis for extracting physics results. My active involvement in HEP science allows me to guide the science community in benefiting from the latest computing developments, bridging the worlds of science and scientific computing.
My current R&D projects aim to transform the utilization of wide-area network connections by managing data movements through dynamic software-defined networking (SDN) channels (ESnet SENSE/Rucio project), to enable CMS to utilize the latest supercomputers with emphasis on the GPU-based DOE leadership-class facilities, and to change the end-user analysis paradigm by adopting the industry-backed pythonic analysis ecosystem and employing columnar analysis techniques.
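As a minimal illustration of the columnar approach (a toy sketch with made-up data, not CMS analysis code; the coffea framework cited below builds on the same Awkward Array primitives), selections are expressed as array operations over all events at once instead of explicit event loops:

    import awkward as ak

    # Toy events with a variable number of muons each (jagged structure).
    events = ak.Array({"muon_pt": [[25.3, 11.2], [47.1], [], [9.8, 31.5, 14.0]]})

    # Per-muon selection applied to all events at once, no explicit event loop.
    good_muons = events.muon_pt[events.muon_pt > 15.0]

    # Keep only events with at least one selected muon.
    selected = events[ak.num(good_muons) >= 1]
    print(ak.num(good_muons), len(selected))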
I am a particle physicist at heart, and my original motivation in the field came from conducting leading-edge research on physics beyond the Standard Model of particle physics as well as precision Standard Model measurements.
I have many years of experience analyzing high-energy collisions at different particle colliders using a multitude of techniques. I have published many papers in leading journals and am currently a member of the CMS collaboration, which operates one of the four detectors at the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. The CMS collaboration spans the globe and encompasses more than 3000 physicists from more than 50 countries, of whom over 1000 are students. In my earlier studies at the LHC, I led searches for evidence of physics beyond the Standard Model using top quarks and contributed to searches for Supersymmetry and Dark Matter. One of my most notable publications is the Observation of the Higgs Boson in 2012, where my work in scientific computing had significant impact.
A.M. Sirunyan et al., Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton-proton collisions at $\sqrt{s} = 13\,\text{TeV}$, Eur. Phys. J. C 79 (2019) 280, doi:10.1140/epjc/s10052-019-6730-7, arXiv:1811.06562 [hep-ex]
V. Khachatryan et al., Measurements of $t\bar{t}$ charge asymmetry using dilepton final states in pp collisions at $\sqrt{s} = 8\,\text{TeV}$, Phys. Lett. B 760 (2016) 365–386, doi:10.1016/j.physletb.2016.07.006, arXiv:1603.06221 [hep-ex]
A. Apresyan et al., Detector R&D needs for the next generation $e^+e^-$ collider, (2023). http://arxiv.org/abs/2306.13567, arXiv:2306.13567 [hep-ex]
M. Atif et al., Evaluating Portable Parallelization Strategies for Heterogeneous Architectures in High Energy Physics, (2023). http://arxiv.org/abs/2306.15869, arXiv:2306.15869 [hep-ex]
B. Bockelman et al., IRIS-HEP Strategic Plan for the Next Phase of Software Upgrades for HL-LHC Physics, (2023). http://arxiv.org/abs/2302.01317, arXiv:2302.01317 [hep-ex]
V.D. Elvira et al., The Future of High Energy Physics Software and Computing, in: Snowmass 2021, 2022. http://arxiv.org/abs/2210.05822, arXiv:2210.05822 [hep-ex]
M. Bhattacharya et al., Portability: A Necessary Approach for Future Scientific Software, in: Snowmass 2021, 2022. http://arxiv.org/abs/2203.09945, arXiv:2203.09945 [physics.comp-ph]
O. Gutsche et al., The U.S. CMS HL-LHC R&D Strategic Plan, EPJ Web Conf. 295 (2024) 04050, doi:10.1051/epjconf/202429504050, arXiv:2312.00772 [hep-ex]
K.H.M. Kwok et al., Application of performance portability solutions for GPUs and many-core CPUs to track reconstruction kernels, EPJ Web Conf. 295 (2024) 11003, doi:10.1051/epjconf/202429511003, arXiv:2401.14221 [physics.acc-ph]
N. Smith et al., A Ceph S3 Object Data Store for HEP, EPJ Web Conf. 295 (2024) 01003, doi:10.1051/epjconf/202429501003, arXiv:2311.16321 [physics.data-an]
J. Balcas et al., Automated Network Services for Exascale Data Movement, EPJ Web Conf. 295 (2024) 01009, doi:10.1051/epjconf/202429501009
N. Smith et al., Coffea: Columnar Object Framework For Effective Analysis, EPJ Web Conf. 245 (2020) 06012, doi:10.1051/epjconf/202024506012, arXiv:2008.12712 [cs.DC]
J. Albrecht et al., A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3 (2019) 7, doi:10.1007/s41781-018-0018-8, arXiv:1712.06982 [physics.comp-ph]
Published on: November 1, 2024