Texascale Days with Frontera a Win-Win for Researchers and TACC

Testing the limits on most powerful U.S. academic supercomputer

    Texascale Days at TACC are a quarterly, two-week event in which select projects are given full use of all of the compute nodes of the NSF-funded Frontera, the most powerful academic supercomputer in the U.S. One example is the stellar core convection imagery generated by the team of Paul Woodward of the University of Minnesota.

    Texascale Days at the Texas Advanced Computing Center (TACC) allow awarded scientists full access each quarter to the capabilities of the Frontera supercomputer, the most powerful academic system in the U.S., which is funded by the National Science Foundation (NSF).

    During Fall 2022, about a dozen scientists benchmarked, tested, and completed production runs with codes that required at least half of Frontera's 8,192 compute nodes. Several projects used the whole system, running on more than 450,000 processor cores. The projects ranged from materials science and deep Earth imaging to quantum circuits and stellar evolution.

    "We're accomplishing a couple of objectives with these codes for our researchers," said John Cazes, director of High Performance Computing at TACC. "We're giving them a taste of what's coming," he added, referring to the NSF-funded Leadership-Class Computing Facility (LCCF), which will be 10 times more powerful than Frontera and is slated for construction in 2024.

    Images from a simulation of core convection run during the Fall 2022 Texascale Days by Paul Woodward, University of Minnesota. The central convection zone extends out to 1,531 Mm, where the convective boundary is clearly evident in the thin slice through the center of the star shown in the figure. At left is the magnitude of the velocity component tangent to the sphere; at right, the logarithm of the concentration of the gas originally located above the convective boundary. In both images, values increase as the colors go from dark blue to aqua (cyan), white, yellow, and finally red.

    More immediately, Texascale Days are an opportunity for selected projects to reach dazzling scales on Frontera, a capability system intended for large applications requiring thousands of nodes. The quarterly event supports experiments that need more than 2,000 nodes, the typical size of a large run on Frontera. Researchers also stress-test new techniques for scaling up their codes.

    "We like to use Frontera to its full potential. And we're allowed to do this during Texascale Days," Cazes said.

    The following are some of the projects that ran during Texascale Days and what they accomplished:


    For this project, which ran on 3,510 nodes of Frontera, Paul Woodward's team at the University of Minnesota collaborated with the team of Falk Herwig at the University of Victoria to study the process of material mixing at the boundaries of convection zones in stellar interiors.

    TACC's Frontera, the fastest academic supercomputer in the U.S., is a strategic national capability computing system funded by the National Science Foundation.

    Woodward carried out a simulation of core convection in a main-sequence star of 25 solar masses. Using a 2688^3 grid, the project simulated the flow in the central 2,700 Mm (megameters) of the star's radius with the PPMstar 3D stellar hydrodynamics code developed by Woodward.

    Ingestion into convection zones of gas from the fuel-rich layers above the convective boundaries can alter the evolution of stars. For massive stars burning hydrogen in their cores, this ingestion of new fuel can extend the main-sequence phase of evolution and delay the star's transition to the red giant phase. For stars in later stages of their evolution, convective boundary mixing can lead to bursts of violent combustion of fuels brought into convection zones and transported inward to regions where conditions are much hotter and reaction rates can become greatly enhanced.

    John Cazes, Director of High Performance Computing, Texas Advanced Computing Center.

    This study of simulated stars in the light of asteroseismology led Woodward's team to investigate the role of internal gravity waves, both those associated directly with the convective boundary layer as well as those in the stable envelope, in the material mixing process.

    "It's difficult to adequately stress how important it is for our simulation to be accurate enough to correctly capture the very subtle effects of the internal gravity waves in the stellar envelope. Inadequate resolution can easily lead to significant overestimates of mixing and/or angular momentum transport by internal gravity waves," Woodward said.

    "For this reason, the tremendous computing power of Frontera enables accurate investigation of these effects, which, over time, can have very significant impacts on the structure and evolution of the star."


    Improving the resolution of seismic images is essential for understanding mantle dynamics and related surface processes, such as the origin of mantle hotspots and volcanoes, plate tectonics, earthquakes and more broadly the origin of our planet, and how it evolves over time.

    From Paul Woodward's Texascale simulations, shown is the magnitude of the velocity component tangent to the sphere in a thin slice through a PPMstar simulation of a red giant star. A zoomed-in region (bottom) showing the interaction of the convection flow with internal gravity waves (IGWs) at the convective boundary. The IGWs are excited by the convection at this boundary and propagate into the stably stratified layers below it. The material mixing caused by the IGWs is accurately measured in the simulations.

    The team of Ebru Bozdag and Ridvan Örsvuran of the Colorado School of Mines, with Armando Espindola Carmona and Daniel Peter of King Abdullah University of Science and Technology, constructed higher-resolution global mantle models based on 3D numerical seismic wave simulations and 3D data sensitive to the model parameters, the so-called Fréchet derivatives.

    To this end, they used the global spectral-element seismic wave propagation solver SPECFEM3D_GLOBE, freely available from the Computational Infrastructure for Geodynamics, for numerical forward and adjoint simulations.

    They utilized Frontera to improve upon their first global adjoint models (GLAD-M15, Bozdag et al. 2016; GLAD-M25, Lei et al. 2023).

    Generally, they employed seismic tomography, which is like taking a CT scan of the deep Earth: it uses earthquakes to construct images of geological structure at depths impossible to reach with even the best drills. The team used a technique called full-waveform inversion to simulate not only seismic travel times but also wave attenuation, or damping.
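    In spirit, full-waveform inversion fits the entire recorded wiggle rather than only an arrival time. The deliberately tiny sketch below illustrates this with a single hypothetical Gaussian pulse and one unknown wave speed; it bears no resemblance to the team's actual 3D SPECFEM3D_GLOBE workflow, and every number in it is invented.

```python
import numpy as np

# Toy 1D illustration of the idea behind full-waveform inversion: match the
# whole waveform, not just the travel time. All numbers here (pulse shape,
# distance, velocities) are invented for illustration.
t = np.linspace(0.0, 10.0, 2001)
distance = 12.0                          # source-receiver distance, arbitrary units

def seismogram(v):
    """Gaussian pulse arriving at time distance / v."""
    return np.exp(-((t - distance / v) ** 2) / 0.1)

observed = seismogram(3.0)               # "data" generated with true wave speed v = 3

def misfit(v):
    """Least-squares difference between predicted and observed waveforms."""
    return 0.5 * np.sum((seismogram(v) - observed) ** 2)

# Gradient descent with a finite-difference gradient; real FWI computes
# gradients with adjoint simulations instead of finite differences.
v, step, h = 2.7, 5e-4, 1e-4
for _ in range(300):
    grad = (misfit(v + h) - misfit(v - h)) / (2.0 * h)
    v -= step * grad
# v is driven toward the true wave speed of 3
```

    If the starting model were so far off that the predicted and observed pulses no longer overlapped, the gradient would vanish and the inversion would stall, a pitfall (cycle skipping) that makes real full-waveform inversion much harder than this toy suggests.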

    While numerical simulations can capture the full 3D complexity of wave propagation, their team also addressed the more realistic physics in inversions through appropriate model parameterizations.

    "To this end, with our current allocation on Frontera, we address azimuthal anisotropy in the upper mantle and work towards building a global anelastic mantle model based on full-waveform inversion," Bozdag said.

    "During the Texascale days, we mainly focused on constructing an anelastic global adjoint model," she added.

    "For each simulation, we had 8,000 nodes on Frontera available," Örsvuran said. "We used real data from 250 earthquakes and simultaneously simulated them. And then, we performed the inversion."

    "In order to do these studies, we simulate the physics itself. That's why we need the TACC systems. We need large computational resources —that's what Texascale Days gave us."

    Attenuation plays a key role in constraining the Earth's water content, partial melting, and temperature variations. Despite being such a crucial parameter, the complex nature of seismogram amplitudes and the trade-offs between model parameters (i.e., P and S wave speeds, attenuation, anisotropy, etc.) make its inversion challenging.

    "To understand problems such as the water content of the mantle, we need additional parameters," Bozdag said. "This is where attenuation becomes a powerful parameter for us to address that."

    The density of states (DOS) of the Si(202617)H(13836) nanocluster obtained after two Chebyshev-filtering steps overlaid with the DOS of bulk Si. For the nanocrystal, a histogram of the Kohn–Sham eigenvalues with 0.1 eV bin width is used to represent the DOS. Credit: James Chelikowsky, Oden Institute.

    "While we demonstrated anelastic global inversions with real data, to better understand the trade-off between elastic and anelastic parameters, we performed a set of 3D synthetic global inversions with realistic data coverage on the globe which we typically use in our global inversions. These tests are vital to understanding the robustness of the constructed models as they show the trade-off between anelasticity and shear wavespeeds which guides us while interpreting our inversions with real data," Bozdag said.

    The manuscript describing their strategies for global anelastic adjoint inversions is in progress and will appear in the Geophysical Journal International.


    Silicon is of interest across all length scales, especially owing to the pursuit of ever-smaller electronic devices. Some of these devices approach the nanoscale, on the order of one billionth of a meter. In this size regime, the properties of silicon can differ from those of the bulk material. Although systems of this size are physically small, they still contain large numbers of atoms.

    "Our approach enables us to study larger nanocrystals of silicon and access this size regime for the first time using quantum based methods," said James Chelikowsky of the Oden Institute for Computational Engineering and Sciences (Oden Institute), UT Austin.

    His team uses a unique real-space-based method that scales up more easily on modern computational platforms. Using this method, they have invented new algorithms designed specifically for nanoscale systems.

    "These algorithms allow us to rapidly solve complex eigenvalue problems without an emphasis on any individual energy state," Chelikowsky said. "We also avoid computational bottlenecks involved with communication between processors that plague other approaches."
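    The team's code itself is not published in this article, but the figure caption's "Chebyshev-filtering steps" refers to a general family of techniques, Chebyshev-filtered subspace iteration, that can be sketched compactly. The function names, matrix sizes, and the toy Hamiltonian below are purely illustrative, not the team's implementation.

```python
import numpy as np

def chebyshev_filter(H, X, degree, a, b):
    """Apply a degree-m Chebyshev polynomial of H to the block X, damping
    eigencomponents inside the unwanted interval [a, b] while amplifying
    those below it."""
    e = (b - a) / 2.0          # half-width of the filtered interval
    c = (b + a) / 2.0          # center of the filtered interval
    Y = (H @ X - c * X) / e    # T_1 of the shifted, scaled operator
    Y_prev = X                 # T_0
    for _ in range(2, degree + 1):
        Y_new = 2.0 * (H @ Y - c * Y) / e - Y_prev   # Chebyshev recurrence
        Y_prev, Y = Y, Y_new
    return Y

def filtered_subspace_iteration(H, n_states, degree=20, n_iter=30, seed=0):
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    X = rng.standard_normal((n, n_states))
    # Production codes estimate these spectral bounds cheaply (e.g., with a
    # few Lanczos steps); here we use exact eigenvalues for simplicity.
    evals = np.linalg.eigvalsh(H)
    a, b = evals[n_states], evals[-1]
    for _ in range(n_iter):
        X = chebyshev_filter(H, X, degree, a, b)
        X, _ = np.linalg.qr(X)          # re-orthonormalize the block
    # Rayleigh-Ritz step: diagonalize H projected onto the filtered subspace
    Hs = X.T @ H @ X
    theta, V = np.linalg.eigh(Hs)
    return theta, X @ V

# Toy "Hamiltonian": a 1D finite-difference Laplacian on a real-space grid
n = 200
H = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
theta, _ = filtered_subspace_iteration(H, n_states=4)
```

    The appeal of this approach for large systems is that it needs only matrix-vector products with H, never a full diagonalization, which maps naturally onto distributed real-space grids.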

    His team's project ran a system with more than 200,000 atoms on 8,192 nodes using methods uniquely designed to work with such large numbers of computational nodes.

    "Modern materials science is driven by a unique synergy between computation and experiment," Chelikowsky said. "As computational scientists, we're able to guide and aid experimental scientists thanks to our access to cutting-edge supercomputers."


    "Our Texascale Days runs were meant to push our flagship software EPW and the PHonon code of the Quantum ESPRESSO materials simulation suite to extreme scaling," said Feliciano Giustino of the Oden Institute.

    Giustino is the co-developer of the "Electron-phonon Wannier" (EPW) open-source F90/MPI code which calculates properties related to the electron-phonon interaction using Density-Functional Perturbation Theory and Maximally Localized Wannier Functions.

    EPW is a leading software for calculations of finite-temperature materials properties and phonon-mediated quantum processes. The code enables predictive atomic-scale calculations of functional properties such as charge carrier mobilities in semiconducting materials, the critical temperature of superconductors, phonon-mediated indirect optical absorption spectra, and angle-resolved photoelectron spectra.

    The code is developed by an international team of researchers and is led by UT Austin via its lead software developer Hyungjun Lee. Lee has performed extensive tests during previous editions of Texascale Days, and has successfully demonstrated near-unity strong scaling of the code in full system runs on Frontera.

    The Fall 2022 Texascale Days gave Giustino's team the opportunity to benchmark the low-I/O mode they recently implemented in EPW and in the PHonon code of the Quantum ESPRESSO suite. One of the main bottlenecks in large-scale calculations is file I/O. In particular, file-per-process (FPP), one of the most common parallel I/O strategies and the one employed in the EPW and PHonon codes, can overwhelm the file system by creating a huge number of files in large-scale calculations.

    To address this issue, his team added a low-I/O feature to EPW and PHonon that uses memory in place of the file system.

    Giustino's team used the Frontera supercomputer at half (224,000 cores) and full (448,000 cores) scale to carry out electron-phonon calculations with EPW for the superconductor MgB2, and lattice-dynamics calculations with PHonon for the type-II Weyl semimetal WP2.

    "We demonstrated that in both cases the speedup for the full-system run with respect to the half-system run reaches above 75 percent. We believe that with this low-I/O feature we are further pushing the limits of EPW and PHonon in massively parallel calculations," Giustino said.
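    A quick way to read numbers like these: doubling the node count ideally halves the run time, and the quoted percentage measures how close a run comes to that ideal. The run times below are invented purely to show the arithmetic; the article reports only that the full-system speedup exceeded 75 percent of the ideal factor of two.

```python
# Strong-scaling arithmetic for a half-system vs. full-system comparison.
def strong_scaling(t_half, t_full):
    speedup = t_half / t_full        # observed speedup from doubling the cores
    efficiency = speedup / 2.0       # fraction of the ideal 2x speedup
    return speedup, efficiency

# e.g., a hypothetical job taking 100 s on 224,000 cores and 60 s on 448,000:
speedup, efficiency = strong_scaling(100.0, 60.0)
# about 1.67x speedup, i.e., roughly 83 percent parallel efficiency
```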


    In this project, the team of Brian R. La Cour of the Center for Quantum Research, Applied Research Laboratories, UT Austin, used random quantum circuits to test the limits of classical digital computers in simulating quantum ones.

    Quantum circuits are the basic instructions used to program a quantum computer, which, instead of the ‘bits' of value 0 or 1 found in digital computers, uses ‘qubits' that can be 0, 1, or a state in between.

    Ebru Bozdag, Colorado School of Mines, and colleagues ran 3D global tests during Texascale Days for simultaneous inversion of attenuation and horizontally-polarized shear velocity. The models are the result of 13 L-BFGS iterations. Credit: Armando Espindola Carmona, King Abdullah University of Science and Technology.

    "We use random quantum circuits to create problems that are hard to solve with classical computers but are expected to be more easily solved by a quantum computer," La Cour said. "Although they do not solve a practical problem, they are used to demonstrate the computational advantage of quantum computers over their classical counterparts."

    Fully simulating large quantum circuits on a classical computer requires an enormous amount of memory. Previously, the largest such simulation spanned 45 quantum bits (qubits) and required 0.5 petabytes of memory.

    During this Texascale event, La Cour's team performed a full-scale run on Frontera with all 8,192 nodes to complete a 46-qubit simulation.

    "This is the largest such simulation ever performed and required a combination of one petabyte worth of real and virtual memory," La Cour added. "Running such memory intensive simulations is extremely challenging due to the complexity of inter-node communication and the difficulty of coordinating with local storage for virtual memory. Large quantum circuit simulations serve as a benchmark for assessing the accuracy and computational advantage of future quantum computers."
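    The memory figures quoted above follow directly from how full statevector simulation works: an n-qubit state is a vector of 2^n complex amplitudes, so each added qubit doubles the memory. A back-of-the-envelope check, assuming 16-byte double-precision complex amplitudes and reading "petabyte" loosely as 2^50 bytes:

```python
# Memory needed to hold a full n-qubit statevector, assuming double-precision
# complex amplitudes (16 bytes each).
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

PETABYTE = 2 ** 50

print(statevector_bytes(45) / PETABYTE)  # 0.5 -- the previous 45-qubit record
print(statevector_bytes(46) / PETABYTE)  # 1.0 -- La Cour's 46-qubit Frontera run
```

    This doubling is why each additional qubit is so costly: a 47-qubit run under the same assumptions would demand two petabytes.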


    In addition to benefitting researchers, Texascale Days give TACC an opportunity to ‘stress test' Frontera with simulations well beyond the bulk of awarded allocations, which use about 2,000 nodes.

    "We use the Texascale Days events to prepare our users for the next LCCF system and to push our resources to the limits of their capabilities," Cazes said.

    "These scaling studies during Texascale help scientists determine if their application can handle four times as many nodes and tasks as they normally use. This will help enormously as the LCCF nears the launch horizon," he added.