A new age has dawned in the enterprise telephony marketplace. Over the past 15 years, customers have pressured telephony manufacturers to allow PBX software to run on standards-based x86 servers, Sun platforms, and even standard Ethernet switches. The requests have come mostly from overworked network administrators trying to ease their maintenance burden while standardizing the equipment they have to support. Many manufacturers listened, and hardware choices have grown well beyond the PBX appliance of yesteryear. Based upon my reading of the tea leaves, it is clear that the next frontier for telephony applications is Virtualization.
The impact the data world has had on the telephony world over the past 15 years is unmistakable to everyone involved in making a telephone call today. Yesterday, our telephone service was delivered via Plain Old Telephone Service lines, with a choice of Loop Start or Ground Start. T1s were often used for dedicated long distance and inbound 800 calls. PRIs were something new that could be used for dial tone and, in more advanced applications, for dial-up access to the Internet. What the telephony marketplace might have called an advancement back in the day was a built-in CSU instead of an external unit, hardly a noteworthy accomplishment. The difference today is striking: nearly all carriers deliver their dial tone via the SIP protocol carried on a standard TCP/IP network, and whether they convert it back to Loop Start, T1, or PRI on premises, or leave it as SIP, is up to the customer.
According to Gartner, 60% of all server workloads will be virtualized by 2013. Virtualization is gaining market acceptance year after year for very good reasons. At first glance, companies virtualize servers for two reasons. The first is to save money by using excess CPU capacity and memory. A cursory glance at any of your existing servers will show that it is typically using only about 10% of its processing power. Many may ask why we don't simply load multiple programs onto the server the way we do on our laptops. The reason is quite simple: reliability. Ask any IT Manager who has spent a year or more supporting a server environment and they will tell you, without hesitation, that server applications are most stable when each is the exclusive program running on a single machine. Stability equals more uptime, and more uptime equals satisfied customers for the IT Manager.

The second part of the cost-saving equation is the "green side." Virtualization allows multiple server applications to exist on one physical server, yet each application knows nothing of the other virtual machines because each lives in its own virtual environment. A measure of how independent each server application is from the others running on that same machine is that each needs its own copy of an operating system, be that Windows NT, 2000, 2003, 2008, or Linux; if you are running 15 different VMs, you will need 15 operating system licenses loaded onto that one physical server. By adding VMs to a server, we make more efficient use of the available hardware. That efficiency translates directly into less energy used to run each application, which in turn reduces the heat generated and the server-room air conditioning required. While these reasons are good, once a company virtualizes, many other benefits are realized.
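The consolidation math above can be sketched with a quick back-of-the-envelope calculation. The figures below (10% average utilization, a 75% consolidation target, 400 W per server) are illustrative assumptions for the sake of the example, not measurements:

```python
import math

# Back-of-the-envelope server consolidation estimate.
# All figures are illustrative assumptions, not measurements.
PHYSICAL_SERVERS = 15      # standalone servers today, one app each
AVG_UTILIZATION = 0.10     # ~10% CPU use per standalone server
TARGET_UTILIZATION = 0.75  # leave headroom on the consolidated hosts
WATTS_PER_SERVER = 400     # assumed draw of a mid-range server

# Total CPU demand, expressed in "fully busy server" units.
total_demand = PHYSICAL_SERVERS * AVG_UTILIZATION  # 1.5 servers' worth

# Physical hosts needed once the VMs share hardware.
hosts_needed = math.ceil(total_demand / TARGET_UTILIZATION)

watts_before = PHYSICAL_SERVERS * WATTS_PER_SERVER
watts_after = hosts_needed * WATTS_PER_SERVER

print(f"Hosts after consolidation: {hosts_needed}")        # 2
print(f"Power draw: {watts_before} W -> {watts_after} W")  # 6000 W -> 800 W
```

Note that consolidation shrinks the hardware and power footprint, not the licensing footprint: under these assumptions you still carry 15 operating system licenses, one per VM.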
Virtualization is the basic building block of Cloud Computing. In a poll released last month by CDW LLC, 28% of all companies based in the United States are using some sort of Cloud Computing. Companies are adopting a Cloud Computing model because it is a more efficient use of capital. How so? I have walked through a few local Cloud Computing facilities, and they resemble, in a good way, bomb shelters. For example, power is fed to the facility in the normal manner with one major exception: the utility feed merely keeps the battery plant topped off. Huge online battery backup units feed the server room its power, and they often have enough capacity to run the entire facility for 12 and even up to 24 hours, enough to handle all but the longest blackout. Another huge benefit of online battery backup units is that they feed the equipment very clean power; the frequency and voltage are far more stable than power from the grid, and a benefit of that clean power is prolonged server life. In addition to the battery reserve, these facilities have enough on-site generation to hold out as long as their fuel supply lasts; some rely on diesel generators and others on natural gas. Redundant fiber optic loops enter the facility at different points and from multiple suppliers, so should a contractor inadvertently cut a fiber optic line down the street, your applications continue to run almost uninterrupted on the backup loop. All this reliability can be had for remarkably little: when I checked Amazon's pricing, I found that cloud computing can be had at no cost for your first year. Talk about efficient use of capital.
A big question many people may have: why should I put all my eggs in the one basket called a virtual server? Virtualization has solved the reliability problem too. It allows you to run your server environment simultaneously in the Cloud and on site, on site plus a second site, or in the Cloud plus a second Cloud instance, whatever your choice. Should the primary fail, the backup virtual machine automatically takes over all the computing tasks; the IT Manager's customers never even know a failure occurred. Should the on-site server need maintenance, the IT Manager has software tools to move applications to another server in a matter of minutes, with no interruption, no downtime, and no late-night scheduling to handle the repair. It is easy for an organization to see that a virtual computing environment is going to be more reliable, less expensive, and easier to maintain than the server farm many IT Managers are using today. How does this affect telephony?
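The automatic-takeover idea behind that reliability can be sketched in a few lines. This is a minimal illustration of heartbeat-based failover only; real products (VMware HA, Hyper-V replication, and the like) use their own, far more robust mechanisms, and the names and timeout below are assumptions made up for the example:

```python
import time

# Minimal sketch of heartbeat-based failover between two VM hosts.
# Illustrative only; not any vendor's actual implementation.
HEARTBEAT_TIMEOUT = 3.0  # seconds of silence before the backup takes over

class Host:
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.last_heartbeat = time.monotonic()

    def beat(self):
        """Record a heartbeat from this host."""
        self.last_heartbeat = time.monotonic()

def active_host(primary, backup, now=None):
    """Return whichever host should be serving users right now."""
    now = time.monotonic() if now is None else now
    if primary.alive and now - primary.last_heartbeat < HEARTBEAT_TIMEOUT:
        return primary
    return backup  # primary silent too long: backup takes over

primary = Host("onsite")
backup = Host("cloud")

# Normal operation: the primary is healthy and serving.
assert active_host(primary, backup) is primary

# The primary crashes; heartbeats stop and the backup takes over.
primary.alive = False
print(active_host(primary, backup).name)  # the backup is now active
```

The users of the failed host simply see their requests answered by the surviving copy, which is the "customers never even know" experience described above.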
The telephony application is late to the Virtualization dance not because of our stodgy nature, but because of technological issues. I recall that my first VoIP deployment involved a point-to-point T1 running between two Cisco 1600 routers. We quickly found that when the data traffic got heavy, the voice quality suffered. Fortunately for us, Cisco had released the 1720 router with a new feature called Quality of Service, and a simple upgrade to the network quickly eliminated the issue: the 1720 allowed us to prioritize voice traffic over data traffic, ensuring quality-sounding voice. I bring this up because digitized voice is unlike digitized data. Four hundred milliseconds of latency across a network is hardly noticed by the user of a data application, but put that same latency into a voice conversation and it becomes unintelligible. Virtualizing real-time telephony applications introduces the same kind of issue, poor voice quality, but now it arises at the CPU level. The IT Manager can allocate more resources to the voice application, but just as the T1 got overwhelmed, so too will the CPU. Only by giving an application like voice direct access to the CPU hardware has this issue recently been resolved; today, virtualized telephony systems can reach thousands of ports, where without direct access only a few ports were possible in the past. From where I sit, it is obvious that the telephony marketplace will continue to follow the lead of our data cousins and adopt the virtual machine. Citrix, Microsoft, and VMware all play a role in this market. As our customers continue to demand ever-increasing levels of reliability and functionality at reduced cost, it is easy to see how important Virtualization is going to become in the telephony marketplace.
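The latency sensitivity of voice can be made concrete with a simple delay budget. The roughly 150 ms one-way ceiling reflects the widely cited ITU-T G.114 guidance; the individual component figures below are illustrative assumptions, and the point is that virtualization scheduling delay eats into the same budget the network already consumes:

```python
# Rough one-way delay budget for a VoIP call.
# The 150 ms ceiling follows common ITU-T G.114 guidance;
# component figures are illustrative assumptions, not measurements.
ONE_WAY_BUDGET_MS = 150

delays_ms = {
    "codec + packetization": 30,      # e.g. 20 ms voice frames plus lookahead
    "network transit": 40,            # WAN propagation and queuing
    "jitter buffer": 40,              # receiver-side smoothing
    "virtualization/scheduling": 10,  # CPU contention shows up here first
}

total = sum(delays_ms.values())
verdict = "within" if total <= ONE_WAY_BUDGET_MS else "over"
print(f"Total one-way delay: {total} ms ({verdict} budget)")
```

Double the network or scheduling figures, as a congested T1 or an oversubscribed CPU would, and the call blows through the budget, which is exactly why voice needs priority treatment that bulk data does not.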
Craig Hodges, 586-330-9252