Kevin Sanders

September 13, 2023

How can research universities overcome computing challenges?

Keeping up with the technological demands of data-intensive research is a core challenge facing many universities today. Creating data center infrastructure capable of sustaining big data workloads is essential to continuing the research mission of universities.

The essence of a research institution’s mission is to push the boundaries of knowledge in every area possible. These institutions are a critical engine of the world’s economy. So, it’s no surprise that academic research reflects the explosive demand for computing – particularly high-performance computing (HPC) – in both scale and advanced architectures.
As work in traditional areas of scientific and mathematical computing has grown, so have collateral areas like big data, healthcare and life sciences, weather modeling, and image and graphics computing, along with network and communications demands.
Colleges and universities, particularly those with heavy research components in their missions, have struggled to meet that demand. The problem stems from how these assets have historically been developed and deployed, the accelerating pace of change in computing architectures, and their use across a growing landscape of research challenges that often outpaces capability.
For decades, research computing has largely developed through:
  • Grant-funded facilities customized to a specific research pursuit, with differing scales and architectures
  • Departmental facilities dedicated to their mission and research type
  • Large-scale shared facilities with mixed architectures
  • HPC resources, which can take any of the above forms
These resources operate largely independently of one another and are mostly compartmentalized. And this model has always worked well – until it didn’t.
The computing landscape
The unfortunate result, in many cases but not all, is a maze of official and “ghost” compute resources, each optimized for its own narrow mission but capable of much broader impact if pooled. The supporting data center facilities often have mechanical and electrical environments, and operational management practices, that are problematic and pose significant risks to data and security. Some common observations across the landscape are:
Legacy data centers
The collection of legacy data center facilities is no longer a cost- and energy-efficient solution in an environment with deep financial challenges funding new compute platforms. Most legacy data centers cannot support the power and cooling density required by advanced computing architectures for new HPC platforms. This creates a 2X problem: limited dollars must fund both hardware and facilities.
Cloud computing
Cloud computing has proved useful for administrative computing and has provided some assistance with HPC, but it has had less impact on large-scale research and multi-use shared environments. Over the medium to long term it can prove costly, as the sketch after these observations illustrates, and provisioning and managing resources can be challenging.
Colocation facilities
The use of colocation facilities has been highly successful where implemented, but adoption is hampered by culture, tradition, and perceived requirements for physical proximity.
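To make the cloud cost concern concrete, here is a minimal sketch comparing cumulative cloud spend against building and operating equivalent capacity. All figures are hypothetical assumptions chosen for illustration, not benchmarks; actual crossover points depend heavily on workload, utilization, and pricing.

```python
# Minimal sketch with hypothetical numbers: cumulative cost of renting
# equivalent cloud HPC capacity vs. building and operating it on-premises
# or in colocation. All figures are illustrative assumptions.

CLOUD_ANNUAL = 1.8e6   # assumed yearly cloud spend for equivalent capacity
BUILD_CAPEX = 4.0e6    # assumed one-time hardware + facility cost
OWNED_ANNUAL = 0.6e6   # assumed yearly power, cooling, staff, maintenance

def cumulative_cost(years: int) -> tuple[float, float]:
    """Return (cloud, owned) cumulative spend after `years` years."""
    return CLOUD_ANNUAL * years, BUILD_CAPEX + OWNED_ANNUAL * years

for year in range(1, 9):
    cloud, owned = cumulative_cost(year)
    note = "  <- cloud now costs more" if cloud >= owned else ""
    print(f"year {year}: cloud ${cloud / 1e6:.1f}M vs owned ${owned / 1e6:.1f}M{note}")
```

Under these assumed numbers, cumulative cloud spend overtakes the owned facility in year four. The point is not the specific year but the pattern: steady, heavily utilized research workloads tend to favor owned or colocated capacity over time.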
Change can be difficult to plan, and even more difficult to implement. Sometimes a neutral change agent is needed to help navigate the technological, financial, and political obstacles and fuel large-scale innovative projects.
Do your homework
It’s not a revelation that successful, large-scale rebuilding of research infrastructure is the result of well-thought-out, consensus-built planning. These projects develop a data center strategy that supports compute environments serving both HPC and general-purpose needs, with phased approaches matched to the financial resources available over time. By aggregating resources, the possibilities are significantly greater than under the piecemeal funding that has come to define these environments.
A more sustainable solution
This type of planning also contributes to other goals. Significant space can be repatriated to the teaching mission as data center space is consolidated. Underlying this is also the opportunity to reduce an institution's overall carbon footprint by retiring power-inefficient facilities in favor of ones with better utilization and, often, a higher concentration of renewable energy sources. Equally important is the potential for reducing data security risk and increasing operational excellence.
To do this well, technical, design, and financial skills must be combined to create several solution scenarios for consideration. These scenarios need to reflect academic community-driven forecasts of computing resource requirements and architectures on a comprehensive timeline. Phased implementation allows for better budget control, complete with cost-of-ownership models, and risk planning is necessary for proposals that command high-level capital commitments. The sketch below shows one way phasing keeps capital outlay aligned with forecast demand.
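As a rough illustration of phased implementation, the following sketch staggers data center build-out against a demand forecast so that capital is committed only as the forecast requires. The load figures, phase size, and costs are hypothetical assumptions for illustration, not a Ramboll model.

```python
# Minimal sketch, with hypothetical figures, of phasing data center
# capacity against a forecast of research computing demand so capital
# outlay is staggered rather than committed up front.

demand_forecast_kw = [400, 550, 750, 1000, 1300]  # assumed IT load per year
PHASE_SIZE_KW = 500                               # assumed build increment
PHASE_CAPEX = 3.0e6                               # assumed cost per increment

built_kw, total_capex = 0, 0.0
for year, demand in enumerate(demand_forecast_kw, start=1):
    # Build a new phase only when the forecast outgrows current capacity.
    while built_kw < demand:
        built_kw += PHASE_SIZE_KW
        total_capex += PHASE_CAPEX
        print(f"year {year}: add {PHASE_SIZE_KW} kW phase "
              f"(capacity {built_kw} kW, cumulative capex ${total_capex / 1e6:.0f}M)")
```

Under these assumptions, the second and third phases are deferred until the forecast calls for them, which is exactly the budget control the planning process aims to deliver.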
Assembling such a team is not easy, and embedding the organizational and governance skills needed for success is a challenge. But when done correctly, the program’s outcomes have a significant impact, often as rewarding as the research they eventually fuel.
How Ramboll can help
We have a long history of advising public and private educational institutions on the development and implementation of their research data center roadmaps. With a unique, multidisciplinary team combining best-in-class forensic engineering and design capabilities with deep information technology, network, and financial analysis skills, we understand the critical nature of this complex environment.

Want to know more?

  • Kevin Sanders

    Managing Principal, Data Center Strategy

    +1 5088019669
