Should you move workloads to the mainframe or to the cloud? Or both?

Article | October 6, 2023


By: Frank DeMarco

As popular as the cloud has become in recent years, its benefits have been slow to materialize for some organizations. The cloud isn’t going anywhere (far from it), but today the mainframe is still the core platform for 67% of Fortune 100 companies, including 45 of the top 50 banks, 4 of the top 5 airlines, and 7 of the top 10 retailers.1 Mainframes are still considered central to business strategy, according to 90% of business leaders surveyed in Kyndryl’s 2023 State of Mainframe Modernization report.

The mainframe belongs at the heart of a well-integrated hybrid cloud strategy. The question isn’t whether moving workloads to the mainframe or the cloud is right for you, but which workloads are right for which platforms.

Putting the right workload on the right platform calls for reviewing your individual workloads and determining the optimal solution for each in terms of scale, security, and cost-effectiveness. Not everything belongs on the cloud, and not everything belongs on a mainframe. Typically, a hybrid solution is needed, and when building your own hybrid cloud strategy, there are four critical factors to take into consideration: security, availability, performance, and innovation.
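
To make that review concrete, here is a minimal, purely illustrative sketch in Python. The factor scales, weights, and thresholds below are assumptions chosen for the example, not a Kyndryl framework; a real assessment would weigh far more criteria.

```python
# Illustrative only: a toy rubric for comparing workloads against the four
# factors named above. The scales and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    security: int      # 1 (low sensitivity) to 5 (highly regulated data)
    availability: int  # 1 (tolerates downtime) to 5 (must be always-on)
    performance: int   # 1 (light, bursty) to 5 (massive, constant throughput)
    innovation: int    # 1 (stable, unchanging) to 5 (rapid feature churn)

def suggest_platform(w: Workload) -> str:
    """Rough heuristic: steady, sensitive, high-volume work leans mainframe;
    lighter, fast-changing work leans cloud; most workloads land in between."""
    mainframe_pull = w.security + w.availability + w.performance  # range 3..15
    if mainframe_pull >= 12:
        return "mainframe"
    if mainframe_pull <= 7 and w.innovation >= 3:
        return "cloud"
    return "hybrid (integrate mainframe and cloud)"

for w in [
    Workload("office administration", security=2, availability=2, performance=1, innovation=3),
    Workload("core banking transactions", security=5, availability=5, performance=5, innovation=2),
    Workload("customer-facing mobile app", security=3, availability=4, performance=3, innovation=5),
]:
    print(f"{w.name}: {suggest_platform(w)}")
```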

Security-critical workloads should modernize on the mainframe

The convenience of the cloud can’t be denied: outsourcing your data storage, cataloguing, and maintenance to a third party can be a huge benefit. However, granting a third party that level of access also brings increased vulnerabilities. Nearly 7 in 10 organizations (69%) say data breaches and exposures have happened due to inconsistent security practices across their multicloud environment.2 That’s a real concern, especially when it comes to sensitive data.

Some workloads, like project management data or office administration files, rarely handle sensitive data, so the convenience of the cloud outweighs the mildly increased security risk. But for others, like financial trading or bank transfers, confidential information is fundamental. Fines for security breaches can be steep, with financial institutions paying some of the largest fines ever levied due to security breaches in the cloud. Of course, the cloud can still be made secure with security policies like zero trust, but it’s important to consider security when selecting the right platform for your workloads.

With the right security controls and strategies in place, the mainframe is eminently securable. Controls and strategies implemented on the mainframe—like multifactor authentication, data encryption, penetration testing, auditing, transaction monitoring, and constant sweeps for new vulnerabilities—are the key reasons why so many major companies, particularly in banking and industries that deal with personal information, primarily rely on the mainframe to run their workloads.

The mainframe also enables total vertical control over your data, from user to server, eliminating a cloud provider as a threat vector (not to mention an expenditure). For workloads where security is vital, it can be better to modernize by moving workloads to the mainframe instead of taking them off the platform.


Always-on workloads should integrate the cloud with the mainframe

Service outages happen to even the most prepared platforms. Some noncritical workloads can be down for an hour without affecting the overall business, but others can be disastrous for a company if they’re down for even a minute. While outages are declining overall, the cost per outage is going up: 60% of outages in 2022 racked up over $100,000 in losses, up from 39% of outages in 2019.3

Consider the airline industry. If a key system is down for just a few minutes, customers may book with a different carrier; in the worst case, an entire airline could be grounded. That’s exactly what happened when a massive winter storm in December 2022 canceled thousands of flights in the United States across multiple carriers. No company was hit harder than Southwest Airlines, which had to cancel more than 60% of its flights over two days.4 The process of matching crewmembers to available aircraft “could not be handled by our technology,” according to Southwest COO Andrew Watterson, forcing Southwest to initiate a systemwide reset and operate on a reduced schedule through the end of the year.5

Both the cloud and the mainframe, when used correctly, could have helped Southwest match available crewmembers and aircraft when the system failed. The outage demonstrates the importance of planning and designing for availability and scalability. Diversifying across multiple cloud instances and vendors can mitigate the risk, but integrating with the mainframe often provides that extra reliability and scalability with less complexity and cost. The mainframe delivers reliability at its core, through hardware redundancies and strategies like a Parallel Sysplex. Each platform can protect against service outages, but for large, business-critical applications, the mainframe often does so far more simply and cheaply.

Workloads that scale should modernize on the mainframe

Keeping up with massive, second-to-second demand is a make-or-break need for some businesses. Relevant workloads like transaction processing have to keep pace no matter how intense demand gets, and both the cloud and the mainframe can handle workloads with a high number of transactions per second. However, while the cloud needs thousands of instances and multiple data centers to maintain reliable performance at scale, one or two mainframes can handle it easily.

The capacity to handle billions of transactions a day is why 45 of the top 50 banks operate on the mainframe.1 IBM’s z16 can process 300 billion inference operations per day, or about 3,472,222 per second;6 when workloads scale to global reach, capacity is a crucial factor in choosing a platform. If a workload requires constant, unbroken service to handle a massive number of transactions per second, moving workloads to the mainframe could be a better long-term investment than spreading that load across thousands of cloud instances. The redundancies needed to maintain service at scale on the cloud are simply too expensive when compared with the simplicity of a few mainframes.
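
That per-second figure is simply the daily number divided by the seconds in a day; here is a quick back-of-the-envelope check (only the 300-billion-per-day figure comes from the article):

```python
# Back-of-the-envelope check of the per-second rate quoted above.
inferences_per_day = 300_000_000_000
seconds_per_day = 24 * 60 * 60          # 86,400 seconds in a day
per_second = inferences_per_day / seconds_per_day
print(f"{per_second:,.0f} inference operations per second")  # ~3,472,222
```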


The mainframe has been modernizing and innovating for nearly 60 years

IBM's very first mainframe, the System/360, was released in 1964.7 Its most recent, the z16, arrived in 2022.8 That's nearly six decades of sustained progress, and some of the best minds in the industry are continuing to improve on what's already been built. There's a deep pool of talent and expertise ready to be tapped, and the mainframe will continue to be the platform of choice for mission-critical workloads for decades to come.

Even after all these years, there are still some workloads that simply can’t be handled anywhere else as effectively as they can on the mainframe. Take fraud detection in banking.

Milliseconds are critical in stopping fraud, so placing the detector as close to the potential fraud as possible, scoring in real time on the same mainframe where the transaction is running, is significantly more secure than sampling some transactions offline after they have completed. Today’s mainframes can reduce response time to the point where fraud is detected before the transaction even completes.8 Even when reduced to milliseconds, the latency of performing fraud detection offline, whether in the cloud or on separate distributed systems, usually means only a representative sample of transactions gets checked. The mainframe, by contrast, can check every single transaction for fraud as it happens.8 It can catch fraud faster and at far greater scale than a cloud or distributed setup, and depending on your workload, that capability can be crucial. For these workloads, moving to the mainframe and modernizing there is best for both performance and cost-effectiveness.
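
The distinction between inline and offline detection is easier to see in a short sketch. The Python below is illustrative only: score_transaction() is a stand-in for whatever fraud model you run, and the 2% sampling rate is an assumption, not a figure from the article.

```python
# Illustrative contrast between inline scoring and offline sampled scoring.
# score_transaction() and the sample_rate are assumptions for the example.
import random

def score_transaction(txn: dict) -> float:
    """Placeholder fraud score in [0, 1]; a real trained model would go here."""
    return min(1.0, txn["amount"] / 10_000)

# Inline (mainframe-style): every transaction is scored before it commits.
def process_inline(txn: dict) -> str:
    if score_transaction(txn) > 0.9:
        return "declined"          # fraud blocked before the money moves
    return "committed"

# Offline (sampled): the transaction always commits; a later batch job scores
# only a fraction of transactions, so most fraud is found after the fact.
def process_with_offline_sampling(txn: dict, sample_rate: float = 0.02) -> str:
    status = "committed"
    if random.random() < sample_rate and score_transaction(txn) > 0.9:
        status = "flagged for review (after completion)"
    return status

txn = {"id": 1, "amount": 9_500}
print("inline:", process_inline(txn))
print("offline:", process_with_offline_sampling(txn))
```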

The cloud will continue to revolutionize how we work for decades to come, but the choice between it and the mainframe is a false one. A hybrid approach, one that asks which workloads are right for which platforms, is a far better way to decide whether to work on the mainframe, off the mainframe, or integrate the cloud with the mainframe.

In the end, there’s no one right answer beyond the one that is the most cost-effective and workload-appropriate for you. As you decide what that answer is for your business, be sure to ask the right questions and put your workloads on the right platform.

Frank DeMarco is the Vice President of Core Enterprise and zCloud Global Delivery at Kyndryl.