Nano-scale transistors fill warehouse-scale supercomputers, yet their performance still constrains development of the jets that defend us, the medical therapies our lives depend upon, and the renewable energy sources that will power our generation into the next. The Computational Physics Group at Georgia Tech develops computational models and numerical methods to push these applications forward. We accompany our methods with algorithms crafted to make efficient use of the latest exascale machines and computer architectures, including AMD GPUs, Arm/RISC CPUs, and quantum computers. We develop open-source software for these methods that scales to the world’s largest supercomputers. Check out the rest of this website to learn more.
Openings? Visit this page if you’re interested in joining our group.
Bubble cavitation and droplet shedding are fundamental multiphase flow problems at the core of naval hydrodynamics, aerospace propulsion, and more. We developed a sub-grid method for simulating these phenomena. The video above shows such a scale-resolving simulation of a shock-droplet interaction, performed with MFC, our open-source, exascale-capable multiphase flow solver (via Ph.D. student Ben Wilfong).
The spectral boundary integral method enables high-fidelity prediction and analysis of blood cells transitioning to chaos in a microfluidic device. This simulation method resolves strong cell-membrane deformation with modest computational resources. We developed a stochastic model for the cell-scale flow, enabling microfluidic device design and improving treatment outcomes. The video above shows a microaneurysm (simulated by Suzan Manasreh).
11 July, 2025 GT wrote a news piece on our group’s effort and collaboration to help launch JSC JUPITER via its early access program. We helped benchmark the system in preparation for user access. JUPITER is now Europe’s fastest supercomputer, ranking fourth worldwide behind the US CORAL-II systems (El Capitan and friends).
8 July, 2025 The group receives a DOE ALCC allocation of about 500K node hours (2M GPU hours) on OLCF Frontier for the next year. This is part of a collaboration with Ryan McMullen at Sandia National Labs. The project is called Multiphase Mixing Induced by Interface Breakup.
7 July, 2025 We welcome Dan Vickers to the group! He joins us from MIT Lincoln Laboratory.
29 June, 2025 Our paper on quantum-resource frugal algorithms for solving CFD problems was published in Future Generation Computer Systems. Congrats to group undergraduates Melody Lee and Sriharsha Kocherla and Ph.D. student Jack Song, and thank you to collaborators Austin Adams and Alex Alexeev!
5 June, 2025 Spencer gives a talk on extreme-scale compressible flow simulation via information geometric regularization at the workshop Algorithms for Multiphysics Models in the Post-Moore’s Law Era.
29 May, 2025 Our group, including postdoc Tianyi Chu and Ph.D. student Ben Wilfong, collaborates with Sandia National Labs to uncover a new hydrodynamic instability that occurs when Rayleigh-Taylor and Faraday behaviors meaningfully coexist. Theory and simulations demonstrate the result; the preprint is linked above.
22 May, 2025 Spencer gave an invited conference talk at the joint meeting of the Acoustical Society of America and the International Congress on Acoustics. He spoke about the computation of high-amplitude acoustics, with applications to therapeutic ultrasound and implications for noise.
20 May, 2025 Tianyi Chu’s research on optimal experimental design for soft material characterization is now on arXiv. We show that using radial basis functions as local interpolants in an intermediate step of the Bayesian optimal experimental design (BOED) process accelerates model selection by about two orders of magnitude.
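For readers unfamiliar with the interpolation step mentioned above, the sketch below shows generic Gaussian radial-basis-function interpolation in pure Python. It is only an illustration of the RBF idea, not the paper’s BOED pipeline; the function name, kernel choice, and dense Gaussian-elimination solve are all assumptions made for this example.

```python
import math

def rbf_interpolate(xs, ys, eps, x):
    """Interpolate the 1D data (xs, ys) at point x using Gaussian RBFs.

    Illustrative sketch only: builds the collocation matrix
    A[i][j] = exp(-(eps*|x_i - x_j|)^2), solves A w = y for the
    weights w by Gaussian elimination, then evaluates the interpolant.
    """
    n = len(xs)
    phi = lambda r: math.exp(-(eps * r) ** 2)
    # Augmented collocation system [A | y].
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [ys[i]]
         for i in range(n)]
    # Forward elimination with partial pivoting.
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n + 1):
                A[i][j] -= f * A[k][j]
    # Back substitution for the RBF weights.
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (A[i][n] - sum(A[i][j] * w[j]
                              for j in range(i + 1, n))) / A[i][i]
    # Evaluate the interpolant at x.
    return sum(w[j] * phi(abs(x - xs[j])) for j in range(n))
```

Because the kernel matrix is small and local, such surrogates can replace expensive forward-model evaluations inside a design loop, which is the flavor of speedup the paper reports.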
13 May, 2025 Our work on Hadamard-tree tomography via Hadamard Random Forests (HST) appears on arXiv. We show that, assuming real-valued output states, one can reconstruct their values in time linear in qubit count, as opposed to the exponential time of full state tomography. Thanks to our collaborators Bryan Gard (GTRI) and Nico Renaud (Netherlands eScience Center).
13 May, 2025 Collaborative work using information geometric regularization and carefully tuned numerics, exceeding 100 trillion grid points in a rocket-exhaust simulation, is now on arXiv. To the authors’ knowledge, this is the largest CFD simulation conducted to date, in this case by a factor of about 10. Florian and Spencer thank collaborators at NVIDIA, AMD, ORNL, and HPE, as well as the dedicated Ph.D. students.