3 hot use cases for supercomputing

Businesses are being transformed by the application of massive amounts of computer processing. And we can expect to see much more of this as supercomputing continues to spread.

IT organizations are being asked to manage a breadth of digital applications that was unimaginable in a world primarily focused on analog interactions. Many of those applications also require computing at a scale that was equally unimaginable in the analog world. Starkly put, every IT organization needs to be prepared for scale, and that requires developing skills in supercomputing.

For example, it wasn't so long ago that a significant part of any new business initiative would be preparing call center staff for customer inquiries. Today, however, many customers prefer—even insist on—digital interactions, viewing interacting with a human as a burden.

This example illustrates how our society and economy are moving from analog to digital processes. It's sometimes difficult to comprehend how quickly this transformation is occurring, but it is happening all around us in real time.

IDC projects that the market for high-performance computing will grow an average of 5.9 percent a year through 2020. Clearly, the digital enterprise consumes massive quantities of computing to create new applications that form the foundation of tomorrow's business practices.

Here are three industries where companies are applying supercomputing today.

1. Supercomputing applications for agriculture

Genetic improvement of crops has a very long history. For centuries, farmers have identified individual wheat plants with desirable properties and sown more of those seeds on their land. Over the past century, this approach has produced better wheat varieties, greater yields, and less pesticide use.

There are limits to this approach, though. It essentially relies on waiting for nature to create a desirable variety. The rise of genetic sequencing points to a way to improve and accelerate the process. With the addition of potential gene transfer across varieties or species, plants can be improved beyond the limits of traditional techniques.

The approach requires sequencing plants and analyzing their genomes to map differences in specific genes to desirable traits. This analysis must be performed across many individual plant genomes to isolate the sequences in a gene that are responsible for those traits. In turn, this requires a great deal of computing power to sort through massive data sets.
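
To make the computational pattern concrete, here is a minimal sketch of a variant-to-trait association scan in Python. It is purely illustrative: the plant counts, variant counts, and statistics are toy stand-ins, invented here, for the far larger genome-wide analyses described above.

```python
# Minimal sketch of a variant-to-trait association scan. The data is simulated;
# a real analysis would load thousands of sequenced plant genomes and apply far
# more rigorous statistics at much larger scale.
import numpy as np

rng = np.random.default_rng(0)

n_plants, n_variants = 500, 20_000                            # toy sizes
genotypes = rng.integers(0, 3, size=(n_plants, n_variants))   # 0/1/2 copies of each variant
trait = rng.normal(size=n_plants)                             # e.g., measured yield per plant

# Make a handful of variants genuinely influence the trait so the scan can find them.
causal = rng.choice(n_variants, size=5, replace=False)
trait = trait + genotypes[:, causal].sum(axis=1) * 0.5

# Score every variant by its correlation with the trait (a crude stand-in for a
# genome-wide association test). This is the compute-hungry, embarrassingly
# parallel step that supercomputing accelerates.
g = (genotypes - genotypes.mean(axis=0)) / (genotypes.std(axis=0) + 1e-9)
t = (trait - trait.mean()) / trait.std()
scores = np.abs(g.T @ t) / n_plants

top = np.argsort(scores)[-5:]
print("variants most associated with the trait:", sorted(top))
print("true causal variants:                   ", sorted(causal))
```

Repeating this kind of scan across millions of variants, thousands of genomes, and additional layers of data is what pushes the work onto supercomputers.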

Extending the analysis to incorporate epigenetics (the study of gene expression controlled by environmental factors, which can lead to different outcomes based on what a plant is exposed to) means the size of data sets explodes, further increasing the amount of computing power required.

Historically, the high cost of computing has meant that most genetic analysis resources have been directed toward high-value efforts like drug discovery for human diseases.

Fortunately, the plummeting cost of supercomputing allows genetic analysis to be applied to many more use cases, and plant genetic analysis is now affordable. The results are already showing up on grocery store shelves: fruits that stay ripe longer and taste better, for example. The importance of food crops cannot be overstated: ensuring every person has access to a reliable food supply is one of the most important factors in human development.

We are still early in applying supercomputing-based genetic analysis to plants, but we can expect to see much more effort in this direction over the next decade, with benefits and business opportunities proliferating. 


2. Supercomputing applied to car manufacturing

Safety features such as seatbelts and airbags have contributed greatly to automobile safety over the years, and they have helped to produce a dramatic drop in traffic deaths per million miles driven.

Vehicle structure is another critical aspect of automobile safety. In the early days of the automobile industry, car designers tried to protect occupants by building rigid auto bodies. However, that meant all of the force of an accident was transferred to the occupants. In 1937, a Mercedes engineer developed the concept of a crumple zone, a section of a car designed to deform in a collision and thereby reduce the force transferred to its occupants. Crumple zones are now part of every car manufactured, and they have improved dramatically over time, as crash-test footage comparing a 1959 and a 2009 Chevrolet shows.

Traditionally, engineers designed crumple zones via manual calculations using known properties of the relevant parts of the car, such as the strength of steel body panels or the tensile strength of struts.

Assessing how the overall structure crumpled, and thus protected occupants, relied on physical testing, meaning crashing real cars to see how they fared. This is an expensive process, and the cost has restricted the opportunities to evaluate many potentially useful crumple designs.

Fortunately, the engineering assessment and evaluation can now be largely done by computer analysis. This offers the opportunity to test many variations of a crumple design by, say, substituting a lighter, stronger metal in place of a less expensive but heavier one.

This simulation ability is particularly important today, when car manufacturers must optimize for several different, conflicting requirements: safety, which calls for strength and mass; mileage, which is improved by lower auto weight; and wind resistance, which is helped or hindered by body panel shape. An improvement for one requirement must be evaluated in light of how it affects the other requirements—a classic engineering problem.
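
To illustrate why evaluating a design in software is so much cheaper than crashing cars, here is a deliberately crude sketch: the vehicle is reduced to a single mass and the crumple zone to a spring, and the code sweeps body mass and crumple stiffness to compare peak occupant deceleration against vehicle weight. Every number is invented, and real crash engineering relies on finite-element simulation running on thousands of cores; this only shows the shape of such a trade-off study.

```python
# Toy lumped-parameter "crumple zone" model: the car is a single mass and the
# crumple zone is a spring that compresses on impact. Sweeping stiffness and
# mass exposes the safety/weight trade-off in seconds, whereas each physical
# crash test destroys a real car. All values are illustrative.

def peak_deceleration(mass_kg, stiffness_n_per_m, v0_mps=15.6, dt=1e-5):
    """Integrate the impact; return peak deceleration (in g) and maximum crush (m)."""
    x, v = 0.0, v0_mps
    peak_a, max_crush = 0.0, 0.0
    while v > 0:                                  # until forward motion stops (max compression)
        a = -stiffness_n_per_m * x / mass_kg      # spring pushes back as it crushes
        v += a * dt
        x += v * dt
        peak_a = max(peak_a, abs(a))
        max_crush = max(max_crush, x)
    return peak_a / 9.81, max_crush

for mass in (1200, 1500):                         # lighter vs. heavier body
    for k in (2.0e5, 5.0e5, 1.0e6):               # softer vs. stiffer crumple zone
        g_peak, crush = peak_deceleration(mass, k)
        print(f"mass={mass} kg  k={k:.1e} N/m  peak={g_peak:5.1f} g  crush={crush:.2f} m")
```

Each run takes milliseconds; a production-grade finite-element crash simulation of a full vehicle is what drives the demand for supercomputing.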

The ability to quickly modify a car's design to evaluate these trade-offs, all without requiring expensive physical production, means time and cost savings for the manufacturer. The benefits of simulation can be seen in the 2016 Honda Civic, which is significantly safer than previous models due to the extensive crash simulation Honda performed.

3. Supercomputing applications for public transport

Recently, I had the opportunity to attend a conference session about maintenance scheduling in Hong Kong's subway system. In the past, Hong Kong Mass Transit Railway relied on experienced humans to select and assign maintenance tasks. While that is not a bad way to make sure important tasks get done, MTR found a better way: using machine learning to optimize its maintenance schedule. Supercomputing is employed to handle the huge amount of computing required in the process.
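
As a rough illustration of what automated scheduling involves (this is not MTR's actual system, whose details are proprietary), here is a toy sketch: each task carries a priority score, which in a system like MTR's would come from engineering rules and machine learning, and tasks are packed greedily into the limited overnight track-access window at each depot. All task data below is invented.

```python
# Toy overnight maintenance scheduler: rank tasks by priority and pack the most
# urgent ones into the limited engineering-hours window at each depot.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    depot: str
    duration_min: int
    priority: float   # higher = more urgent; assumed to come from an ML model

tasks = [
    Task("replace rail clip set",  "Kowloon Bay", 90, 0.92),
    Task("inspect escalator E3",   "Kowloon Bay", 45, 0.40),
    Task("grind rail section 12",  "Tsuen Wan",  120, 0.85),
    Task("test signal relay S7",   "Tsuen Wan",   30, 0.77),
    Task("clean drainage culvert", "Kowloon Bay", 60, 0.55),
]

WINDOW_MIN = 150   # overnight track-access minutes available per depot

def schedule(tasks, window_min):
    """Greedy packing: take the most urgent tasks that still fit the window."""
    plan, remaining = {}, {}
    for task in sorted(tasks, key=lambda t: t.priority, reverse=True):
        left = remaining.setdefault(task.depot, window_min)
        if task.duration_min <= left:
            plan.setdefault(task.depot, []).append(task.name)
            remaining[task.depot] = left - task.duration_min
    return plan

for depot, names in schedule(tasks, WINDOW_MIN).items():
    print(depot, "->", names)
```

The real problem spans thousands of tasks, crews, parts, and safety constraints across an entire network, which is where the heavy computing comes in.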

Applying machine learning to maintenance scheduling has paid off for MTR. Not only does it save two days a week of expensive labor in performing scheduling, but it also gets 30 minutes of extra maintenance done each day with smarter task assignment. Given how central public transportation is to Hong Kong residents, this is a significant improvement.

Published accounts of the system mention that workers found the maintenance schedule eerie because no explanation was offered for why particular tasks were assigned. In the presentation I attended, the speaker noted that maintenance employees were uneasy with the whole system because it is a black box: no one could understand exactly why certain decisions were made.

Notwithstanding these reservations, MTR stuck with the system. Within a short period, employees accepted the scheduling arrangement.

Now, you might say, "That's great, but I don't run a mass transit system, so I don't need to think about this." That would be unfortunate, because nearly every business has an ongoing process that relies on human judgment for optimization and could be improved by applying supercomputing. Google, for example, applied these techniques to its data center operations and improved energy use by 15 percent, a phenomenal outcome. So don't let the specifics of these examples blind you to supercomputing's potential to make your business better. Instead, seek out opportunities within your company to apply supercomputing to increase the efficiency of fundamental business processes.

Supercomputing business opportunities: Lessons for leaders

  • Supercomputing powers advanced analytics, enabling more experimentation and improved outcomes for businesses.
  • Its use is exploding as companies identify new opportunities that can be explored using massive computing power.
  • Human judgment can be augmented—or even replaced—by large-scale number crunching.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.