Today’s Cloud: Flexible and Hyperscalable

Hyperscale data centers, along with colocation facilities and cloud technology engineered for high uptime, are what set the top public cloud platforms apart. Complex organizations that traditionally maintained private data centers are carrying out their transition to the Software-Defined Data Center (SDDC) with VMware, OpenStack, and other solutions. Hyperscale data centers define the future of the cloud, and with so much of enterprise IT moving there, profit margins across the IT industry are improving as well. Gartner estimates that public cloud services will be a $206 billion USD market in 2019, with the majority of that spend going to Microsoft Azure, AWS, Google, IBM, and Alibaba.

Public cloud hosting companies operate at a scale estimated at anywhere from 15% to 40% of total internet traffic. Companies with huge data center requirements, such as Netflix, Goldman Sachs, and GoDaddy, share servers with millions of other customers in a massively distributed multi-tenant environment on AWS. The problem, however, is that this leads many business owners to assume that if the security of public cloud hosts is sufficient for the largest banks, financial institutions, and Wall Street trading firms, then the same products must be right for a small business or startup, without asking whether those products actually fit their needs.

This article will discuss how SMEs need to contextualize their own business priorities in a hyperscale cloud environment and avoid adopting products that are oversized for their requirements.

Hyperscale: The Magnitude of Public Cloud Data Centers

The seminal piece of research on public cloud architecture was published by Google researchers in 2013. Although presented to the public as groundbreaking, the report actually documented the engineering Google had implemented in its internal data centers over the previous decade. Similarly, when Google open-sourced the Kubernetes project in 2014, it was releasing software descended from systems the company had already used in production to launch billions of containers every week.

These examples illustrate the vast gap between the largest public cloud IT companies and the rest of the IT world in data center innovation, the ability to hire and retain top research talent, and the financial capacity to support hyperscale operations. Amazon hired Werner Vogels in 2004 (naming him CTO the following year), launched AWS in 2006, and five months later introduced the world’s first elastic compute platform with EC2. By some estimates, AWS data centers now serve around 40% of total internet traffic worldwide, supporting workloads ranging from high-frequency trading on Wall Street to the Pentagon.
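
As a concrete sketch of that elasticity, the snippet below uses the boto3 Python SDK to launch and later terminate a single EC2 instance on demand; the region, AMI ID, and instance type are placeholder assumptions for illustration, not recommendations.

```python
# Minimal sketch of EC2's on-demand elasticity using the boto3 SDK.
# The region, AMI ID, and instance type are placeholder assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single instance when demand rises...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...and release it when demand falls, paying only for the time used.
ec2.terminate_instances(InstanceIds=[instance_id])
```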

Google’s shifts in data center operations, from Borg to the TPU and from “Mobile First!” to “AI First!”, had already been fully implemented across all aspects of company operations before being offered to customers as Platform-as-a-Service products built around TensorFlow. Facebook, eBay, AWS, Google, and the other IT majors are now competing to define the standards by which hyperscale data centers will be regulated as an industry. There are an estimated 430 hyperscale data centers worldwide, of which around 40% are located in the USA.

According to Scott Fulton of ZDNet:

‘Hyperscale is automation applied to an industry that was supposed to be about automation to begin with. It is about organizations that happen to be large, seizing the day and taking control of all aspects of their production. But it is also about the dissemination of hyperscale practices throughout all data center buildings — not just the eBays and Amazons of the world, but the smaller players, the little guys, the folks down the street. You know to whom I’m referring: pharmaceutical companies, financial services companies, and telecommunications providers.

One vendor in the data center equipment space recently called hyperscale “too big for most minds to envision.”’

The full article is worth reading: IT administrators have to understand their own business requirements in the context of hyperscale data center architecture when provisioning cloud resources.

In the next section, we will discuss how to contextualize multi-cloud and hybrid cloud requirements, which differ for enterprises and SMEs.

SDDC Flexibility: Multi-Cloud & Hybrid Cloud Orchestration

At the most fundamental level, web servers are the bricks of a hyperscale data center, bought and sold as commodity hardware by the public cloud IT companies at the lowest wholesale rates. Hyperscale data center operators not only secure lower hardware prices at volume than other corporations or consumers, they also engineer their own networking equipment, processor chips, and code. At a secondary level, the public cloud majors build ecosystems around the platforms they introduce: third-party development companies program applications for them, and consultancies provide integration and migration services for businesses.

Amazon, Google, Microsoft, and IBM each capture a share of the total IT spend of the Fortune 500 and other complex organizations worldwide. These businesses, along with startups, government, and education, all need to implement unique solutions on public cloud architecture, whether replacing traditional private data center facilities or supporting internal software services in operations. Most companies also use public cloud hosting to run web and mobile applications at scale, and many corporations now operate thousands of brands and domains for their products.

SDDC technology was pioneered by VMware and other public cloud IT companies as a way for the largest enterprises and complex organizations to orchestrate hardware resources through virtualization. The stack above that layer is then unique to every business, depending on the application code each business server or web server will support. This has driven VLAN and SD-WAN innovation in the corporate space, building on developments in cloud hosting for SaaS applications. It is important to recognize the difference between business server and web server requirements for SDDC orchestration.
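
To make “orchestrating hardware resources through virtualization” concrete, here is a minimal sketch using the vendor-agnostic openstacksdk Python library to provision a virtual server; the cloud profile, image, flavor, and network names are assumptions for illustration.

```python
# Minimal sketch of SDDC-style provisioning with the vendor-agnostic
# openstacksdk library: hardware is consumed as virtual resources via an API.
# The cloud profile, image, flavor, and network names are assumptions.
import openstack

# Credentials come from a clouds.yaml profile named "my-cloud" (hypothetical).
conn = openstack.connect(cloud="my-cloud")

image = conn.compute.find_image("ubuntu-22.04")      # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")        # hypothetical flavor name
network = conn.network.find_network("private-net")   # hypothetical network name

server = conn.compute.create_server(
    name="web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Provisioned {server.name} ({server.status})")
```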

Public Cloud: SDDC and Virtualization Solutions that Scale

Rapid, on-demand scalability (both up and down) has been a major driver of cloud adoption. Hyperscale data centers let enterprise corporations take advantage of the best features of the cloud through this flexibility, which is implemented in practice through SDDC technology such as VMware, Microsoft Azure, and OpenStack. VMware, owned by Dell, specializes in multi-cloud and hybrid cloud solutions spanning all of the public cloud hardware providers, while OpenStack is the vendor-agnostic open source alternative. Even Microsoft now supports Kubernetes container orchestration on its public cloud service plans as well as in its Windows products.
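
As a sketch of that up-and-down flexibility, the snippet below uses the official Kubernetes Python client to resize a Deployment; the deployment name, namespace, and replica counts are placeholders, and it assumes a working kubeconfig is already in place.

```python
# Minimal sketch of on-demand scaling with the official Kubernetes
# Python client; deployment name, namespace, and counts are placeholders.
from kubernetes import client, config

config.load_kube_config()  # authenticate using the local kubeconfig
apps = client.AppsV1Api()

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Patch a Deployment's replica count up or down."""
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

scale_deployment("web", "default", 10)  # scale out for peak traffic
scale_deployment("web", "default", 2)   # scale back in when demand drops
```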

SDDC solutions address both business server and web server requirements on hyperscale architecture, allowing the largest organizations in the world to operate on public cloud infrastructure. SMEs have different needs than the industry majors: they must find ways to build future-proof operations in public cloud data centers with comparable complexity and scalability at more affordable cost. Business owners should look for ways to integrate into public cloud ecosystems and use the PaaS products provided by the IT majors to boost their own productivity and profitability according to the unique aspects of their business plan or project requirements.