Intel x86: No cloud for you

While the nearly 40-year-old x86 architecture is king of the public cloud today, it is likely to become a legacy system in future application deployments.
Written by Jason Perlow, Senior Contributing Writer

While there are many milestones in the history of computing, few are as important as the introduction of the Intel 8086 microprocessor, a 16-bit CPU that laid the foundation for the client-server, multi-tier systems architecture that dominates just about every IT environment today, nearly 40 years after it was brought to market in 1978.

Other than the IBM mainframe, which began with the System/360 in 1964 and whose code can still run on modern IBM z systems, no computing systems architecture has managed to survive in widespread, continuous use for that long.

In industry terms, where major paradigms tend to shift every five or 10 years, the x86 is a true dinosaur.

Although the Intel x86 architecture was first released with the introduction of the 8086 microprocessor in 1978, it wasn't until 1981 that a derivative with an 8-bit external bus, the 8088, was chosen for the original IBM PC.

As PC generations evolved, the architecture gave rise to the 286, the 386 (the first 32-bit version), the 486, the Pentium, the Pentium Pro, the Pentium II, the Pentium III, the Pentium 4, and so on.

Depending on whom you ask, we are now in roughly the seventh generation of the Intel x86 architecture.

A lot of things have transpired in the IT industry since 1981. Workgroup LANs, PC servers, enterprise servers, multi-tier applications, wide area networks, widespread internet use, virtualization, and now the cloud, just to name a few.

Never mind the explosion in mobile devices, and I'm also ignoring the Open Source revolution, as well as a host of other changes in software development methodology and multiple generations of programming languages.

In the cloud, x86 is still king. The reason for this is that workloads running in public cloud today are predominantly Infrastructure as a Service (IaaS) based.

For organizations migrating on-premises workloads to the cloud, or running hybrid workloads, this makes the most sense, because it doesn't require a complete redesign or re-architecture of the applications.

In IaaS, you are virtualizing entire OS images on the cloud fabric just as you would in your datacenter, and the virtual networking and storage can be set up to mimic your existing on-premises environment.

Certain changes may need to be made to incorporate automation for optimizing consumption, as well as security and identity management, but overall, the workloads running in the cloud work the same in IaaS as they do in your datacenter. That's the beauty of IaaS.

The problem is that IaaS is an extremely inefficient way to run cloud workloads. It's very resource-intensive, which translates into high billable costs for the end-customer unless the aforementioned consumption optimization takes place. And not every app can be optimized that way.

Public clouds bill for every virtual machine that is turned on and running, regardless of CPU utilization. And while "basic" storage is extremely cheap and billed per gigabyte, high-performance "premium" storage, such as what is offered in Azure, is billed per provisioned SSD, whether you use all of it or not.

Platform as a Service (PaaS) workloads, however, are much less expensive, because the cloud provider is giving you access at the application programming interface (API) level, not at the VM level. You are billed on how many API calls or transactions you consume, so it is much more utility-based.

But PaaS does require a complete rethinking of how applications are written; it is a "born in the cloud" approach rather than a like-for-like migration. So, right now, most of what runs as PaaS consists of turn-key applications and services written by the cloud vendor or the SaaS provider, not applications originally written in enterprise environments.

They are net-new types of things.
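
To make the billing difference concrete, here's a back-of-the-envelope sketch in Python. Every rate in it is hypothetical, invented purely for illustration: the point is the shape of the two models, with IaaS charging for provisioned capacity (VM hours, premium disks) and PaaS charging for transactions actually consumed.

```python
# Hypothetical comparison of IaaS vs. PaaS billing models.
# All rates are invented for illustration -- real cloud pricing
# varies by provider, region, and instance size.

HOURS_PER_MONTH = 730

def iaas_monthly_cost(vm_hourly_rate, premium_disks, disk_monthly_rate):
    """IaaS: you pay for every hour the VM is running, plus every
    provisioned premium disk -- whether or not it is fully used."""
    return vm_hourly_rate * HOURS_PER_MONTH + premium_disks * disk_monthly_rate

def paas_monthly_cost(api_calls, rate_per_million):
    """PaaS: utility-style billing on transactions actually consumed."""
    return (api_calls / 1_000_000) * rate_per_million

# A mid-size VM left on all month vs. a modest transaction volume.
print(iaas_monthly_cost(vm_hourly_rate=0.20, premium_disks=2, disk_monthly_rate=70.0))
# -> 286.0  (146 in VM hours, plus 140 for two provisioned disks)
print(paas_monthly_cost(api_calls=10_000_000, rate_per_million=3.50))
# -> 35.0   -- an idle-heavy workload pays for far more than it uses on IaaS
```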

I think it is fair to say that as long as IaaS exists, the x86 platform will exist. If the workloads running on x86 systems stay relatively unchanged in enterprise environments, x86 is likely to be around for a long time to come.

Containerization

However, changes are afoot. One of the more talked-about technologies is containerization, a transitional technology between full virtualization/IaaS and PaaS. Containers are, in effect, the building blocks for creating PaaS apps.

Containers are much more efficient than VMs because they allow much more granular consumption of cloud/server resources. Containers permit many application instances to run within a single OS kernel instance while using cloned instances of shared resources and security constructs -- along with separate storage accounts -- to create cloud multi-tenancy that would otherwise require VMs to perform the isolation.

The other huge benefit of containers is that containerized apps are "packaged" up like zip files or JAR files. They contain everything they need to run; as long as the host operating system supports containerization, these apps will run without modification and with minimal additional configuration. Additionally, since all the containers on a host share a single operating system instance, patch management and other configuration-related issues are simplified.
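
As a minimal illustration of that packaging model, here is a sketch using the Docker SDK for Python ("pip install docker"). It assumes a running Docker daemon and a Dockerfile in the current directory; the image name and port are hypothetical.

```python
# Build and run a container with the Docker SDK for Python.
import docker

client = docker.from_env()

# Build a self-contained image: the app plus everything it needs to run,
# analogous to packaging a zip or JAR file.
image, _build_logs = client.images.build(path=".", tag="myapp:1.0")

# Any host with a compatible OS and a container runtime can now run it,
# with no further modification to the app itself.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    ports={"8080/tcp": 8080},  # map the app's port to the host
)
print(container.short_id, container.status)
```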

The only dependency between a container and its host is that the container is tied to the type of operating system it was built for, although even that, potentially, is not an obstacle.

More on this in a bit.

Containers can run virtualized within VMs, or "on the metal." Public clouds such as Google Cloud, Microsoft Azure, and Amazon Web Services currently run them within VMs, using their respective container services and orchestration engines.

From a billing and consumption perspective, as well as from a provisioning standpoint, containers running on the metal may eventually be preferable to VMs, because you don't need to run a full-blown OS image to provision a container. The fabric (infrastructure) is run by the cloud provider all the way up to the OS level. You only touch the apps.

And if it's PaaS or true SaaS, you aren't even touching/maintaining the apps as the end-user; you simply buy a subscription to use those things.

Unlike today's public cloud IaaS implementations, in which you are responsible for maintaining the OS as well as the networks and storage, what you will end up managing as a SaaS provider or an enterprise in a public cloud is the actual code in packaged apps, which will run on container instances that can be scaled up and down as needed by rules set in the orchestration engines.

Your infrastructure becomes a true appliance.
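
Here is what one of those orchestration scaling rules can look like in practice: a minimal sketch using the official Kubernetes Python client ("pip install kubernetes"). The deployment name "myapp" and the thresholds are invented for illustration, and a configured kubectl context is assumed.

```python
# Define an autoscaling rule with the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubectl context

# Scale the hypothetical "myapp" deployment between 2 and 10 container
# instances, adding or removing replicas to hold average CPU near 60%.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="myapp-autoscaler"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="myapp"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=60,
    ),
)
client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```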

Linux and Unix have been container-optimized for a long time, and they can run a "just enough OS," or JeOS, to support containerized apps using technologies such as Docker. Windows Server 2016 provides deployment configurations for Nano Server VMs, as well as Hyper-V and Windows Containers, which let you pare the OS down to headless, lightweight JeOS instances as well.

x86 irrelevance?

What does this mean to x86's relevance in the cloud? Everything.

Many applications running in the enterprise are written to legacy code that may have architectural dependencies on the x86 instruction set. Modern applications? Not so much.

Instead, they are written in high-level, abstract programming languages using APIs that don't reference the systems architecture directly. Examples include .NET, Java, and many scripting languages like Ruby, Python, and, of course, JavaScript.
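
A trivial Python snippet makes the point: nothing in high-level code references the CPU, and the same script runs unmodified on x86-64 or ARM. The only way to tell the architectures apart is to explicitly ask the runtime.

```python
# High-level code is architecture-agnostic; the runtime hides the CPU.
import platform

print(platform.machine())             # e.g. "x86_64" on Intel, "aarch64" on ARM
print(sum(n * n for n in range(10)))  # identical result on either: 285
```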

If the applications are architecturally agnostic, why run them on x86 at all? I think we now need to ask: Is x86 still a valid systems architecture in a hyperscale, containerized world that is increasingly cloud-based? Why continue to drag all that legacy compatibility cruft with us into the next computing paradigm?

Nine years ago, when I started writing for ZDNet, I asked that very same question.

At the time, I thought it was a good idea to think about developing a new systems architecture for the PC as well as the datacenter.

Intel, in partnership with HP, did develop an alternative systems architecture for enterprise computing, in the form of the IA-64, the Itanium.

While the IA-64 was a high-performance CPU architecture, its timing was off -- the world was still running x86-dependent code on the Win32 platform, and the Linux port and toolchain for IA-64 lacked many features and were not developed closely in tandem with the Open Source community.

Instead, the Open Source community rallied around x86-64, which was introduced with the AMD Opteron processor. Eventually, Intel released its own 64-bit x86, with a nearly identical instruction set, and interest in IA-64 waned.

It didn't help that the IA-64 chips were very expensive compared to their Xeon and Opteron counterparts and weren't optimized for most regular workloads. IA-64 survives today as a niche processor for high-end UNIX computing using HP Superdome servers.

There is, however, an alternative systems architecture that is well-optimized for low-cost cloud computing -- ARM Cortex. And it has many licensees, including Samsung, TSMC, Apple, Qualcomm, Huawei, and, yes, Microsoft and Intel.

Linux has been running on ARM for a long time. If you use an Android phone, it most likely has an ARM processor and runs on the Linux kernel. Windows 8.1 and the NT kernel were ported to ARM for the ill-fated original Surface and Surface 2 tablets, as well as for Windows Mobile/Windows Phone, and Microsoft has recently announced that Windows Server will also be ported to the ARM architecture on Qualcomm's SoCs for use in its Azure public cloud.

When you compare the price/performance ratio and total cost of ownership of even the fastest ARM SoCs with those of the cheapest Intel Core i3 or Atom chips, there's no contest, let alone if you compare them with server-class Intel or AMD chips.

Although Intel PC and server chips have become much more energy-efficient in recent years, the ARM chipsets needed to build a comparably performing server blow the Intel chips away in terms of component cost for the performance delivered.

Potentially, you can put thousands of datacenter-optimized ARM CPUs into a cloud fabric for a mere fraction of the cost of comparable low-power Intel chips, particularly if you are a cloud vendor like Microsoft, Google, or Amazon buying in hyperscale quantities.

Increased density and lower datacenter costs mean that compute costs can be lowered drastically, especially if all the end-customer needs to manage is their application container code and not the host OS.
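
A rough sketch of that density math, with every number invented purely for illustration (real component costs and container densities vary widely):

```python
# Hypothetical density arithmetic -- all figures invented for illustration.

def cost_per_container_hour(chip_cost, containers_per_server, amortize_hours):
    """Amortized silicon cost per container instance, per hour."""
    return chip_cost / (containers_per_server * amortize_hours)

AMORTIZE_HOURS = 3 * 8760  # write the hardware off over three years

# If an ARM SoC costs a fraction of a comparable low-power x86 part
# while hosting a similar number of containers per server...
x86 = cost_per_container_hour(chip_cost=400.0, containers_per_server=50,
                              amortize_hours=AMORTIZE_HOURS)
arm = cost_per_container_hour(chip_cost=100.0, containers_per_server=50,
                              amortize_hours=AMORTIZE_HOURS)
print(f"{x86 / arm:.0f}x cheaper per container-hour")  # -> 4x
```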

Today, the cloud is IaaS and x86. Tomorrow, it is likely to be containers, PaaS and ARM. When a container server instance on ARM becomes four times or eight times cheaper than a comparable x86 VM, the paradigm will change, and the x86 will be relegated to legacy and niche workloads, much like the IBM System z is now.
