Brian Allen
Manager, IT and Infrastructure
Conclusive Analytics

For many IT organizations, server virtualization has become a real game-changer. No longer are projects delayed by hardware sizing, load guesstimates, or physical hardware procurement. Today, IT staff can spin up tens or even hundreds of virtual servers in a matter of minutes, so the availability of hardware no longer stands in the way of a project kickoff. Instead of procuring new servers for every new project, virtualization allows IT administrators to stay ahead of the game. Server availability can now be stated in terms of available resource capacity; that is, at any given time, the company has X CPUs, X gigabytes of RAM, and X terabytes of storage available for new server allocations. When the pool of available resources gets low, administrators either clean up what’s there (shrinking the current footprint) or add more resources to the pool (by purchasing more RAM, disks, or physical hosts). This “scale-out” architecture, compared with traditional “scaling up,” makes upgrades extremely easy. No longer must administrators schedule downtime to replace a big production server with an even bigger one. Instead, more physical hosts are simply added to the existing resource pool.
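
To make that capacity-pool view concrete, here is a minimal sketch of the bookkeeping it implies: summing up unallocated CPU, RAM, and storage across hosts and flagging when the pool runs low. The host names, figures, and low-water thresholds are hypothetical, for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpus: int            # physical cores on the host
    ram_gb: int
    storage_tb: float
    cpus_used: int = 0   # resources already allocated to VMs
    ram_gb_used: int = 0
    storage_tb_used: float = 0.0

def pool_available(hosts):
    """Sum unallocated CPU, RAM, and storage across all hosts in the pool."""
    cpus = sum(h.cpus - h.cpus_used for h in hosts)
    ram = sum(h.ram_gb - h.ram_gb_used for h in hosts)
    disk = sum(h.storage_tb - h.storage_tb_used for h in hosts)
    return cpus, ram, disk

# Hypothetical pool; the figures are illustrative, not actual hosts.
pool = [
    Host("hv-01", cpus=32, ram_gb=512, storage_tb=20,
         cpus_used=24, ram_gb_used=400, storage_tb_used=14),
    Host("hv-02", cpus=32, ram_gb=512, storage_tb=20,
         cpus_used=20, ram_gb_used=350, storage_tb_used=12),
]

cpus, ram, disk = pool_available(pool)
print(f"Available for new servers: {cpus} CPUs, {ram} GB RAM, {disk} TB storage")

# When the pool gets low, either reclaim resources or scale out.
if cpus < 8 or ram < 64:
    print("Pool is low: shrink the current footprint or add another host.")
```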

Another important difference in the virtual world is the ability to easily scale down a server’s resources once the actual production load has been determined. Historically, before making a large hardware purchase, IT staff essentially had to guesstimate what the anticipated load for a given server would be two, three, or five years down the road. With so many uncertain variables, administrators tended to over-provision, since no one wanted to guess low and face performance issues later. What we often found, though, was a lot of high-capacity production servers sitting idle the majority of the time. With virtualization, however, we can easily correct those over-estimates once we get a better idea of what a typical production load looks like. A server that was originally configured with 64 GB of RAM, 8 CPUs, and 2 TB of disk space can easily be scaled down to 16 GB of RAM, 4 CPUs, and 1 TB of disk space. The resources that previously sat idle most of the time can then be applied to a new or different virtual server.
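
A short sketch of the right-sizing arithmetic from that example: scaling a VM from 64 GB / 8 CPUs / 2 TB down to 16 GB / 4 CPUs / 1 TB and returning the difference to the pool. On an actual Hyper-V host the resize itself would be done through the management tools; this only shows the bookkeeping.

```python
# Original, over-provisioned spec vs. the right-sized spec from the example.
original = {"cpus": 8, "ram_gb": 64, "storage_tb": 2.0}
rightsized = {"cpus": 4, "ram_gb": 16, "storage_tb": 1.0}

# Resources reclaimed by the scale-down, now free for other virtual servers.
reclaimed = {k: original[k] - rightsized[k] for k in original}
print(f"Returned to pool: {reclaimed['cpus']} CPUs, "
      f"{reclaimed['ram_gb']} GB RAM, {reclaimed['storage_tb']} TB storage")
# -> Returned to pool: 4 CPUs, 48 GB RAM, 1.0 TB storage
```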

Since 2012, Conclusive Analytics has transformed its datacenter from approximately 50% virtual to its current state, well over 95% virtualized. Resource utilization on those physical hosts typically remains in the 60-80% range; that is, 60-80% of available resources are in use at any given time. The remaining 20-40% serves as a buffer for unexpected peaks and provides headroom for upcoming projects. In all, Conclusive hosts nearly 200 servers, running both Windows and Linux operating systems, spread across 10 physical hosts, each running Microsoft Hyper-V as its virtualization platform. Those 10 hosts take up less than half of one standard 48U cabinet at our hosted datacenter. Virtualization effectively allows us to run nearly 200 high-performance servers in about as much physical space as a medium-sized gym locker.
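
That 60-80% target is easy to check programmatically. Below is a minimal sketch, with hypothetical per-host utilization figures, that flags hosts drifting outside the band (and therefore eating into the 20-40% buffer):

```python
TARGET_LOW, TARGET_HIGH = 0.60, 0.80  # keep a 20-40% buffer free

# Hypothetical utilization readings (fraction of host resources in use).
utilization = {"hv-01": 0.72, "hv-02": 0.85, "hv-03": 0.55}

for host, used in utilization.items():
    buffer = 1.0 - used
    if used > TARGET_HIGH:
        print(f"{host}: {used:.0%} used, only {buffer:.0%} buffer left; "
              "rebalance VMs or add capacity")
    elif used < TARGET_LOW:
        print(f"{host}: {used:.0%} used; room to consolidate more VMs here")
    else:
        print(f"{host}: {used:.0%} used; within the 60-80% target")
```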

At Conclusive Analytics, we pride ourselves on being experts in Managed Analytics as a Service. We derive insights that intelligently power our customers’ sales activities and marketing programs, enabling them to grow revenue and increase profitability. Our experts integrate data and deploy the latest in decision science, predictive analytics, and visualization techniques, enabling better, faster decisions that achieve improved, sustainable business results.