Researchers Create Two IT Solutions Amid Global Supply Shortage
Article courtesy of National Renewable Energy Laboratory (NREL) of the U.S. Department of Energy (DOE). By Brooke Van Zandt
Imagine spending $6.5 million in 30 seconds. For automakers, that was the price of announcing their latest electric vehicles to more than 100 million viewers during the 2022 Super Bowl.
Celebrity cameos and special effects can spark interest, but they can’t overcome the barriers that prevent people from buying electric vehicles or adopting other clean energy technologies.
A 2020 survey by Consumer Reports cited the cost of new electric cars and limited access to charging stations as the biggest obstacles for the public.
Creating clean energy options that are affordable and accessible to everyone is possible at the National Renewable Energy Laboratory (NREL), where researchers rely on high-performance computing to turn data into models and simulations of their breakthrough discoveries.
NREL has its foot on the (electric) pedal, working to clean up our entire energy economy at unprecedented speed and scale. No delay — not even a pandemic-induced global supply shortage — can throw the lab off course, with its computational science experts performing dazzling feats of creative problem solving.
A new challenge
From creating more efficient and cleaner transportation to developing better buildings, networks, and solar, hydro, geothermal, and wind power generation and storage, the U.S. Department of Energy (DOE) relies on NREL to address a wide range of energy challenges. In fact, of the 17 national labs, NREL is the only lab solely dedicated to energy efficiency and renewable energy research for the DOE.
Each of these energy challenges requires the powerful computing capabilities of NREL’s supercomputers – like Eagle and the highly anticipated Kestrel — to help researchers quickly identify ideas and accelerate solutions.
Approximately 85% of NREL’s high-performance computing (HPC) time is spent on DOE projects. But in the final months of 2020, DOE’s Vehicle Technologies Office (VTO) asked NREL to plan for a projected doubling of its computing resource requirements by 2022.
A quick fix
NREL’s advanced computing and computational science experts were tasked with a daunting challenge: design a world-class HPC resource nearly half the size of Eagle that could be operational within a year. That is an aggressive schedule even in a normal year, made harder still by the global semiconductor chip shortage and the supply chain delays caused by COVID-19.
Nevertheless, the resulting machine — appropriately named Swift — was completed and became operational in NREL’s Energy Systems Integration Facility (ESIF) last summer. Although Swift physically occupies only one row of servers in the ESIF, it contains 2 petabytes of storage and more than 28,000 compute cores (for running many processes concurrently) across 440 nodes. For context, Facebook relies on 1.5 petabytes to store its users’ 10 billion photos.
In anticipation of future demands, NREL researchers designed Swift with flexibility in mind. That’s why they chose Spack, the packaging software from the DOE’s Office of Science Exascale Computing Project, to serve as the software environment for Swift.
“Spack is an international project focused on delivering easily deployable software in complex, high-performance computing environments,” said NREL computing scientist Jon Rood, who outlined why Spack makes long-term strategic sense. “Spack’s popularity continues to grow as it evolves to serve system administrators, scientific software developers, and supercomputer end users to provide them with a cohesive platform in which productivity is paramount.”
Rood added, “Additional benefits of Spack are its ability to connect to the Extreme-Scale Scientific Software Stack ecosystem, also known as E4S, where researchers can benefit from pre-built software applications and containers that provide some of the most popular scientific software — with no waiting between downloading applications and using them to get results.”
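To make the idea concrete: Spack lets administrators describe an entire software environment declaratively, in a single file that can be rebuilt anywhere. Below is a minimal, hypothetical `spack.yaml` sketch — the compiler version and package choices are illustrative, not NREL’s actual Swift stack.

```yaml
# Hypothetical Spack environment file (spack.yaml).
# Package and compiler versions are illustrative only.
spack:
  specs:
    - openmpi@4.1.1 %gcc@9.4.0   # MPI library built with a pinned GCC
    - hdf5+mpi ^openmpi@4.1.1    # parallel HDF5 linked against that same MPI
    - py-numpy                   # Python scientific stack
  concretizer:
    unify: true                  # resolve all specs into one consistent tree
  view: true                     # expose installed packages under a single prefix
```

Running `spack install` in a directory containing such a file builds the whole environment, and `spack env activate .` puts the resulting tools on a user’s path — which is what makes Spack-managed systems reproducible across machines.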
Placing Swift in NREL’s ESIF reflects a strategy that combines NREL’s advanced IT operations and computational science expertise. NREL’s world-class design and delivery of IT solutions enables rapid data movement and savings from shared support infrastructure. In 2022 and beyond, the combination of Eagle (or Kestrel) and Swift will provide strong support for the VTO portfolio. Additionally, future releases of the software environment, continually optimized by the User and Application Engagement team, will allow for performance tuning, more flexibility, and alignment with NREL’s HPC resource reporting.
Living on the edge
Remember that remaining 15% of NREL’s HPC capacity? It is dedicated to laboratory-directed research and development and to NREL’s technology partnership portfolio, which target NREL’s vision: a clean energy future for the world. If 15% of the computing power doesn’t seem like enough to support such a bold vision, it isn’t. That is why, while NREL researchers designed and delivered the Swift solution for DOE, they simultaneously did the same for other NRELians with Vermillion.
Vermillion is the first phase of a flexible, on-premises cloud resource suited to large NREL projects such as artificial intelligence (AI) training. This on-premises cloud computing — or edge computing — happens close to the original data source, instead of reaching data held in one of a dozen cloud data centers around the world. The latency, or delay, of accessing cloud-based information is exactly what motivates edge computing: autonomous vehicles, for example, rely on split-second access to data to keep passengers safe. Other AI-based energy solutions, such as smart grids and buildings, also benefit from edge computing.
NREL is a living laboratory — we simulate and test our proposed solutions to see how they can work in our complex and interconnected world. With Vermillion, NREL can now experiment with HPC, commercial cloud computing and edge computing to envision more clean energy technology scenarios. Vermillion is designed to be accessible and flexible to meet the current and future needs of researchers.
The system software is built on powerful open-source standards using Linux, OpenStack, and Kubernetes infrastructure, known in the technical world as LOKI. This software stack aggregates virtual resources for dynamic clustering, providing greater flexibility to adapt to demanding NREL workflows. And it leverages Slurm scheduling to quickly assign and run compute jobs and maximize job throughput.
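For readers curious how Slurm scheduling works in practice, a researcher typically describes a job in a short batch script and hands it to the scheduler. The sketch below is a minimal, hypothetical example — the partition name, resource counts, and program name are assumptions for illustration, not NREL’s actual configuration.

```shell
#!/bin/bash
# Minimal Slurm batch script (sketch). Partition, resources, and the
# program being run are hypothetical, not NREL's actual setup.
#SBATCH --job-name=ai-training      # label shown in the job queue
#SBATCH --nodes=2                   # number of compute nodes requested
#SBATCH --ntasks-per-node=8         # parallel tasks launched per node
#SBATCH --time=04:00:00             # wall-clock limit (HH:MM:SS)
#SBATCH --partition=gpu             # hypothetical partition for AI workloads

srun ./train_model                  # Slurm launches the tasks across the nodes
```

Submitted with `sbatch`, the job waits in the queue until the requested nodes are free, then runs — which is how a scheduler keeps a shared cluster busy and maximizes job throughput across many users.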
In true NREL style, Vermillion’s name is inspired by the natural world and alludes to growing possibilities. Named after a tributary of the Green River, Vermillion is the first computing resource exclusively dedicated to NREL, and it will power a rushing flow of research. Already, Vermillion is positioned to evolve and keep pace with the leading edge of NREL workloads and the computing industry. Just as more tributaries amplify a river’s strength, NREL researchers eagerly anticipate the amplified effects of whatever joins Vermillion next.
A global crisis has brought the supply chain to its knees, and these impacts continue to reverberate across all sectors. But nothing could slow NREL researchers on the path to a clean energy future.
If your organization has IT needs that NREL could support, contact Aaron Anderson or Jennifer Suderland for more information. The Vermillion cluster is designed to be expanded over time.
Do you appreciate CleanTechnica’s originality? Consider becoming a CleanTechnica Member, Supporter, Technician, or Ambassador — or a patron on Patreon.