Virtualization Economics: Balancing Efficiency and Risk

Virtualization significantly changes the economics of constructing and operating IT environments

The fact that virtualization can have a positive impact on total cost of ownership is not new to those familiar with designing and managing IT environments. Many of the contributors to these cost savings, such as the reduction in physical hardware and the corresponding savings in power and cooling, are key drivers for virtualization initiatives and feature prominently in many ROI models. But these factors are just the beginning when it comes to the savings that can be realized with virtualization - consolidating onto highly efficient assets provides the "first wave" of savings, but being extremely clever about how those assets are used is where the next wave can be found.

Making Clever Use of Assets
To illustrate this, it is useful to look to other industries that make clever use of assets. In the early days of shipping and logistics, the time to deliver goods was measured in weeks as ships plied the world's oceans to enable global trade. With the advent of the airplane it became possible to move goods much more quickly, reducing the end-to-end time to deliver some items (such as airmail) to days. But the use of these revolutionary assets alone was not enough to enable overnight delivery - sheer speed would not help if there was no airplane going in the right direction at the right time. To achieve the modern notion of overnight delivery it was necessary to introduce planning models that make extremely clever use of these assets, allowing their speed and agility to be leveraged for the benefit of end customers. The very same is true of the data center, where virtualization must be combined with the right planning models in order to reach the highest levels of efficiency.

Although there is a direct analogy between this example and the data center, there is another element that must be considered in data center management: risk. To illustrate how risk factors into the equation, there is another analogy that provides useful insight into the dynamics of virtualization, and it too involves airplanes. When commercial airlines strive to make the best use of their assets they invariably employ a technique referred to as "overbooking," where they essentially sell more tickets than there are seats on a plane. They do this because they know that a certain percentage of passengers will typically not show up, and it is in their interest to make efficient use of their aircraft by filling all the seats. But by doing so they are assuming a certain amount of risk; if more passengers show up than expected, the airline ends up angering customers and footing the bill for hotel rooms. In other words, overbooking is a way of balancing the efficiency of full aircraft against the risk of customer dissatisfaction and financial penalties, and it clearly illustrates the inverse relationship that can exist between the two.

Workload Placements and Allocations
Turning our attention back to the data center, these two analogies provide some very useful insights into the economics of virtualization. First, by making clever use of virtualized environments, a second wave of efficiency can be achieved beyond the initial consolidation benefits. Second, to reach this optimal level of efficiency it is necessary to understand the tradeoff between efficiency and risk and to start asking some tough questions, such as how much operational risk is acceptable to achieve a certain gain in efficiency within a specific business service (see Figure 1).

In virtual environments the main mechanism for achieving these goals is the proper management of placements and allocations. Placements define where virtual workloads run at a given point in time (i.e., which VMs are on which physical host), and allocations define what share of the physical resources each workload is entitled to (e.g., CPU limits, memory reservations). These two capabilities are at the core of capacity management in virtual environments and enable an accurate alignment of supply and demand that is simply not possible in physical environments.
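To make this concrete, here is a minimal sketch of how placements and allocations might be modeled. The class and field names are illustrative assumptions, not drawn from any particular virtualization platform's API:

```python
from dataclasses import dataclass, field

@dataclass
class Allocation:
    """Share of physical resources a workload is entitled to."""
    cpu_limit_mhz: int        # cap on CPU consumption (a limit, not a guarantee)
    mem_reservation_mb: int   # memory guaranteed to the VM

@dataclass
class Host:
    name: str
    cpu_capacity_mhz: int
    mem_capacity_mb: int
    vms: dict = field(default_factory=dict)  # placement: vm_name -> Allocation

def place(host: Host, vm_name: str, alloc: Allocation) -> bool:
    """Place a VM on a host if its reservation fits in remaining physical memory.

    Only reservations are checked at placement time; limits merely cap
    consumption once the VM is running.
    """
    reserved = sum(a.mem_reservation_mb for a in host.vms.values())
    if reserved + alloc.mem_reservation_mb > host.mem_capacity_mb:
        return False
    host.vms[vm_name] = alloc
    return True
```

The placement of the whole environment is then simply the set of vms mappings across all hosts, and capacity management becomes the problem of choosing those mappings, and the allocation values behind them, well.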

This leads to one of the greatest challenges currently facing capacity management in the data center. Because the ability to continuously align supply and demand is new, capacity management solutions rooted in the physical world are ill-equipped to deal with it. And because proper capacity management is now critical to achieving high efficiency and realizing the corresponding financial savings, a great deal of potential savings is lost as virtual environments are left to run in sub-optimal states. Applying too much supply to a given demand (overprovisioning) is extremely common and wastes valuable resources. Less common but perhaps more dangerous - applying too little supply to a given demand (underprovisioning) runs the risk of performance degradation and even failure, potentially incurring financial penalties (see Figure 2). Without the proper techniques to strike the right balance, the true benefits of virtualization often go unrealized.
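This imbalance is easy to express in code. The sketch below classifies a VM by comparing its allocated capacity to its observed peak demand; the 20% headroom and 1.5x waste threshold are invented for illustration, and real policies would vary by workload criticality:

```python
def provisioning_state(allocated: float, peak_demand: float,
                       headroom: float = 0.20) -> str:
    """Classify a VM's supply/demand balance (illustrative thresholds only)."""
    target = peak_demand * (1 + headroom)  # demand plus a safety buffer
    if allocated > target * 1.5:
        return "overprovisioned"   # wasting valuable resources
    if allocated < peak_demand:
        return "underprovisioned"  # risk of degradation or failure
    return "balanced"
```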

Properly Aligning Supply and Demand
There are several factors that govern where workloads should be placed and how they will interact with each other. By properly considering all of these factors, it is possible to actively manage virtual environments in a way that is both safe and highly efficient. It should be noted that the terms "safe" and "highly efficient" are inherently subjective, and part of the challenge in aligning supply and demand is determining what constitutes "safe" for a given application or business service, which in turn dictates what can be considered "efficient" in that particular case.

The following areas hold the key to unlocking efficiency and managing risk in virtual environments:

What Constraints Affect Workload Placement?
Under ideal circumstances, virtual environments allow workloads to be placed on any physical host. Unfortunately, data centers and production environments rarely provide ideal circumstances, and there are typically a series of constraints that limit the free movement of workloads. These restrict the available placements and, in turn, the efficiency that can be achieved. The constraints generally fall into three categories: Technical Constraints, Business Constraints and Resource Constraints.

Technical Constraints typically arise from compatibility requirements between technologies, connectivity between systems, and other technical requirements or limitations of the particular IT environment and the virtualization technology deployed. For example, VLAN or storage connectivity may dictate which VMs can go on which servers, thus constraining the solution. Business (or Non-Technical) Constraints often stem from organizational, regulatory, security, process and even political requirements. Some of these are non-negotiable, such as limitations on the mobility of customer data, while others can be analyzed to determine whether greater efficiency can be achieved by challenging certain business assumptions and lifting certain constraints. Finally, Resource Constraints arise from the fact that physical hosts can only perform a finite amount of work, and placing more workloads on a system than it can handle is generally not advised. This, however, is an oversimplification, as different workloads have different performance and availability requirements, providing a number of potential opportunities to drive higher efficiency.
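A placement engine can treat these categories as successive filters over the candidate hosts. The following sketch is purely illustrative - the dictionary keys (vlans, region, free_cpu_mhz and so on) are hypothetical, not taken from any specific platform:

```python
def eligible_hosts(vm: dict, hosts: list) -> list:
    """Return hosts that satisfy all three constraint categories for a VM."""
    out = []
    for h in hosts:
        # Technical constraint: the VM's VLAN must be available on the host
        if vm["vlan"] not in h["vlans"]:
            continue
        # Business constraint: e.g., customer data restricted to a region
        if vm.get("data_region") and h["region"] != vm["data_region"]:
            continue
        # Resource constraint: the host must have headroom for this workload
        if h["free_cpu_mhz"] < vm["cpu_mhz"] or h["free_mem_mb"] < vm["mem_mb"]:
            continue
        out.append(h)
    return out
```

In practice the resource check is the subtle one, since (as discussed below) "headroom" depends on whether peak or sustained demand is used as the sizing basis.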

What Are the Application Performance Requirements?
The level of performance required by an application or business service is also a critical factor when striving for high efficiency, as it tends to limit the density of workloads. Certain workloads follow predictable, repeatable patterns and can be combined at a higher density than unpredictable, more "random" workloads. Also, many workloads exhibit peak activity levels that differ from their "sustained" activity, creating a situation where capacity may go unused during periods of low activity. How these situations are dealt with when virtualizing systems largely depends on how well the applications in question need to perform.

For applications that don't need to perform well, such as low-priority batch jobs and many dev/test applications, sustained levels of utilization can often be used as the determining factor when "stacking" workloads. The resulting virtual environments will be sized to handle the majority of the activity occurring on the VMs, but will likely become saturated when peak activity levels occur. This is not necessarily a problem - it simply means that the applications will perform very poorly at certain times of day, which may be a worthwhile tradeoff given the efficiency that higher VM densities create.
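A back-of-the-envelope comparison shows how much density the sizing policy controls. The numbers below are invented for illustration:

```python
def max_density(host_capacity_mhz: float, per_vm_demand_mhz: float) -> int:
    """How many identical VMs fit on a host under a given sizing basis."""
    return int(host_capacity_mhz // per_vm_demand_mhz)

# A hypothetical VM that averages 400 MHz but peaks at 1,200 MHz:
print(max_density(24000, 400))    # sized on sustained demand: 60 VMs, saturates at peaks
print(max_density(24000, 1200))   # sized on peak demand: 20 VMs, peak-safe
```

Sizing on sustained demand triples the density in this example, at the cost of poor performance whenever many VMs peak together.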

For applications where performance is a concern, using peak utilization levels (or a weighted scorecard of peak and sustained activity) is advisable, as it provides some assurance that applications will get the resources they need when they need them. This approach, however, raises a new question: how much assurance is necessary for a given environment? Just as it is wasteful to run applications at 99.999% availability if they don't need it, it is also wasteful to design virtual environments to unnecessarily low "risk tolerances," as this lowers VM densities. Finding the balance between efficiency and risk in this case requires analysis of Contention Probability - the probability of two or more workloads contending for resources at the busiest times of the operational cycle. Determining what level of contention risk is appropriate for a given environment involves making some difficult decisions, but it can have a tremendous impact on efficiency (see Figure 3). For example, rather than being completely risk averse, in some environments it is possible to double virtualization ratios by simply accepting a 1% risk tolerance. In other words, being overly safe has a significant price tag that may not be justifiable for a given application or business service.
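One way to estimate contention probability is by simulation. The Monte Carlo sketch below assumes each VM peaks independently with some probability during the busiest interval - a deliberate simplification, since real workloads are correlated, and all parameters are invented for illustration (this is not a description of any vendor's actual analytics):

```python
import random

def contention_probability(n_vms: int, p_peak: float, base_mhz: float,
                           peak_mhz: float, capacity_mhz: float,
                           trials: int = 100_000) -> float:
    """Estimate P(total demand exceeds host capacity) during the busy interval.

    Each VM is assumed to peak independently with probability p_peak;
    all numbers are hypothetical.
    """
    over = 0
    for _ in range(trials):
        total = sum(peak_mhz if random.random() < p_peak else base_mhz
                    for _ in range(n_vms))
        if total > capacity_mhz:
            over += 1
    return over / trials

# 30 VMs averaging 400 MHz (peaking at 1,200 MHz) on a 16,000 MHz host:
# exceeding capacity requires six or more simultaneous peaks, which is rare
print(contention_probability(30, p_peak=0.05, base_mhz=400,
                             peak_mhz=1200, capacity_mhz=16000))  # ~0.003
```

Here the host is sized well below the sum of all peaks, yet the estimated contention risk is only about 0.3% - the kind of quantified tradeoff that makes a deliberate 1% risk tolerance defensible.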

Conclusion
Virtualization significantly changes the economics of constructing and operating IT environments. But the gains in efficiency that lead to the greatest cost savings don't come easily. To realize these gains it is necessary to take a careful look at risk, and specifically at how much operational risk, if any, can be tolerated in the name of efficiency. By treating this as an input to the planning process, it is possible to safely walk the line between overprovisioning and underprovisioning, striking a balance between efficiency and risk that unlocks the next wave of efficiency in IT environments.

More Stories By Andrew Hillier

Andrew Hillier is CTO and co-founder of CiRBA, Inc., a data center intelligence analytics software provider that determines optimal workload placements and resource allocations required to safely maximize the efficiency of Cloud, virtual and physical infrastructure. Reach Andrew at [email protected]
