The growing use of virtualisation has delivered real benefits to many organisations.
Not only have the average utilisation rates of servers and storage improved, but the use of applications and other software packaged ready for installation – commonly known as virtual images or virtual machines – means that systems can be implemented or recovered far faster than before.
However, this can be a two-edged sword.
The good side of being able to implement a runtime application rapidly is seen in hosted systems, cloud computing and private datacentres; the bad side is seen most in development and test departments, and is now spreading into the runtime environment.
The problem is that virtual machines (VMs) are just too easy to use. In the past, if you wanted to install a copy of an application, the first thing to do was order a server. Then wait for it to arrive. Then get it up and running and install all the patches to the operating system that the supplier had neglected to put in place. Then install all the supporting software required – app server, database, whatever – followed finally by the software you actually wanted to run. Long-winded? Yes – and often enough to put a developer off, so they would simply re-use a single server time and time again, cleaning it down after each test and rebuilding from a golden back-up image before testing the next iteration of their software. Maybe a couple of hours each time to get back to a "clean" position.
Today it is possible to grab some spare resource from a virtualised hardware base, spin up a VM and then install your software. This takes just a few minutes, and as the resource pool can be large, it is easy for a developer to "forget" that they have a live VM running and simply start another one. IT departments then face growing problems with VM sprawl – test groups expanding the VM pool and users self-servicing systems that they may only use a couple of times.
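To illustrate just how little friction there now is, here is a minimal sketch using the libvirt Python bindings on a KVM/QEMU host; the image name is purely hypothetical. Starting yet another copy of a pre-built image is a couple of calls – nothing in the process asks who owns the VM, why it exists or when it should be retired:

```python
import libvirt

# Connect to the local hypervisor (qemu:///system is the usual URI for a local KVM/QEMU host)
conn = libvirt.open("qemu:///system")

# "dev-test-image-42" is a purely hypothetical, pre-defined VM
dom = conn.lookupByName("dev-test-image-42")

# One call boots it; from this point on it consumes CPU, memory, storage and
# potentially software licences until someone remembers to shut it down
dom.create()

conn.close()
```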
The move towards a development/operations (DevOps) model for organising IT, where the development and test employees can push new images directly into the runtime, will make it much harder for IT administrators to keep track of all VMs.
Effective management of software licences and VMs
The result is that not only are resources locked down by VMs that are doing nothing useful, but there may also be licences tied up in those idle VMs. For many organisations this may not appear to be an issue – until someone from the Federation Against Software Theft (FAST) walks through the door asking to carry out a licence inspection.
Managing licences is something that many organisations still do not do. Suppliers such as Flexera offer full-service licence management, which can not only track licence usage, but also manage licences against suppliers' agreements and, in most cases, against their tiering systems, ensuring that an organisation gets the best value from its licences. Others, such as Centrix Software, can track licences and advise on how they are being used so that an organisation can decide how to allocate them more effectively, although Centrix's focus is really on virtual desktop systems. However, what a buyer should really be looking for is a system that not only manages licences, but also manages the lifecycle of the VM itself. Features to look out for include:
- Building – the capability to create the VM from the component parts on the fly, using the right components for the right VM every time.
- Provisioning – the capability to take the VM and make it live out on the target platform.
- Patching and updating – the capability to ensure that all components and VMs are at the right level of patch for the job – not necessarily that everything is at the latest patch level, but that the build engine can gain access to components that meet the needs of the final provisioned system. For example, there may be a dependency on a certain piece of software to run on an operating system that is not patched to the latest level – the chosen system must have the granularity to be able to ensure that such rules are followed.
- VM monitoring – ensuring that VMs are running correctly and are "healthy". Also, tracking usage and flagging VMs that appear to be unused but live, and are therefore consuming resources and licences that could be used elsewhere (a minimal sketch of this kind of check follows this list).
- Resource management – the capability to provision VMs with the right amount of resources at the right time, from thin-provisioning storage and under-committing central processing unit (CPU) and network resources, through to managing peaks and troughs of demand in a flexible and elastic manner.
- VM management – full reporting on VM usage to both technical and line-of-business users, along with rules-based lifecycle management of VMs in development, test and runtime environments, as well as a full inventory of VMs and their contents.
- VM portability – the capability for VMs to be moved from development to test machines and then to runtime systems in a seamless and fully audited manner. Also, the capability for runtime VMs to be moved from one platform to another, particularly where an organisation is looking to use hybrid cloud environments and may need to move a VM from an on-premise platform to a co-location datacentre or into the public cloud.
- Auditability – every action on a VM, and how it is used, needs to be logged so that a full audit trail is maintained. With increasing activity in governance, risk and compliance (GRC), the need to prove exactly what was used when dealing with any outside party – or even for a particular transaction – is not going away; audit capabilities should therefore be high on the list of requirements for any VM management system.
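To make the monitoring and audit points above more concrete, the following is a minimal sketch – assuming a single libvirt-managed host and the libvirt Python bindings, with all names illustrative – that lists every defined VM and flags those that are powered on. A real lifecycle management tool would correlate this kind of data against ownership records and a licence inventory:

```python
import libvirt

conn = libvirt.open("qemu:///system")

# Walk every defined domain, running or not, and report its state.
# Live VMs that nobody can account for are candidates for licence and resource recovery.
for dom in conn.listAllDomains(0):
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    running = dom.isActive() == 1
    print(f"{dom.name():<30} running={running} vCPUs={vcpus} "
          f"memory={mem_kib // 1024} MiB cpu_time={cpu_time_ns / 1e9:.0f}s")

conn.close()
```

Comparing cumulative CPU time against how long a VM has been defined is one simple heuristic for spotting "forgotten" machines; the same inventory, written to a log on every change, also forms the basis of the audit trail described above.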
Optimising the virtual environment
Most of the incumbent systems management companies – IBM with Tivoli, CA, BMC – are moving in this direction in one way or another. However, others are doing more. Dell has been building on its Kace acquisition and, now that it has acquired Quest Software, expect to see a rapid move towards a more full-service physical/virtual systems management toolset.
Another company to watch is Serena Software. Under the umbrella of "orchestrated IT", Serena is taking its existing application lifecycle management (ALM) approach and expanding it to offer an organisation a choice: running separate, but closely managed, development, test and runtime teams, or moving towards a more seamless DevOps approach where the various VMs are all fully managed according to a corporately and technically defined set of rules.
Outside of its Tivoli systems management capability, IBM also has its PureSystems and zEnterprise groups, with a unified resource manager that can ensure a workload is placed on the best available resources – whether that is Windows, Linux or even a mainframe platform in the case of zEnterprise, and whether an Intel or Power chip is the best place for that workload to run. This still needs the basic capabilities of Tivoli for other aspects of building and managing VMs, but it gives good pointers to the probable future of a fully managed virtual environment.
Virtualisation is a definite step forward in making the best use of available hardware resources.
However, organisations and technical teams have to understand that it is no silver bullet on its own. In fact, uncontrolled use of virtualisation can lead to bigger problems as VM sprawl takes hold, at both the resource and the corporate responsibility levels. It is incumbent on those responsible for the IT function to ensure that the right systems are in place, so that VMs can be managed at the right level of granularity across their full lifecycle, with licence recovery and full audit capabilities in place.
Clive Longbottom is an analyst at Quocirca