We’ve talked several times on this blog about how virtualization is a big part of our network transformation. We’re moving network functions previously handled in specialized, custom-built hardware into software running on commodity servers. That shift lets us build and upgrade network capabilities far faster. But it also creates some new speed bumps.
So we’re smoothing those bumps out with a technology called “containers.”
As a quick reminder, network virtualization is a concept you’re probably familiar with, even if you don’t realize it. Have you replaced your portable CD player with a streaming music app on your phone? That’s virtualization. Hardware into software. It’s faster, more efficient, and upgradeable at the speed of the Internet. We’re doing the same thing in the network. We’re replacing physical routers, switches, and other single-purpose gear with apps running on multipurpose servers.
Historically, when you wanted to run multiple virtual network functions on a single server, each function needed its own “virtual machine,” or VM. The server ran its own operating system. Each VM ran its own operating system on top of that. And in between sat yet another software layer, called a hypervisor.
Those layers of software burn up a lot of processing horsepower and reduce the benefits of virtualization.
Enter containers.
A container is a dedicated, isolated software compartment for a virtualized network function. The best part is that the container runs directly on the server’s own operating system, sharing its kernel. No hypervisor. No virtual machines. It’s much more efficient. But on its own, it’s not enough.
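To make that concrete, here’s a minimal sketch of spinning up a container programmatically. It assumes Docker and its Python SDK (docker-py); Docker is one popular container engine among several, and we use it here purely for illustration. The point is the speed and the missing layers, not the specific tool.

```python
import time

import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()  # connect to the local container engine

start = time.time()
# Launch a tiny container. It shares the host's kernel, so there is
# no guest operating system to boot and no hypervisor in the path.
container = client.containers.run("alpine", "sleep 60", detach=True)
print(f"Container {container.short_id} up in {time.time() - start:.2f} seconds")

# Tear it down just as quickly.
container.stop()
container.remove()
```

Because the container shares the host’s kernel, it typically starts in a fraction of a second, versus the minutes a full VM can take to boot.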
Being able to spin up a new container in seconds doesn’t do much good if it takes a human operator minutes or hours to push the button. And it’s hard to predict when traffic will surge. We need that container to be able to activate additional resources quickly without waiting for human input. So we’re using an architecture called micro-services to expand the capabilities of containers.
Micro-services let the virtual network functions pull the resources they need from across multiple containers, when and where they need them. Pre-set policies allow those virtualized functions to activate additional containers in the cloud in just seconds to respond to point-in-time demand.
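To give a flavor of what such a pre-set policy could look like, here’s a hedged Python sketch of a scaling loop. The 75 percent threshold, the current_load() placeholder, and the use of Docker’s Python SDK are all illustrative assumptions, not a description of our production system.

```python
import random
import time

import docker  # Docker SDK for Python, used here purely for illustration

client = docker.from_env()
MAX_LOAD = 0.75  # pre-set policy: scale out above 75 percent load


def current_load() -> float:
    """Placeholder metric. A real deployment would read traffic or CPU
    figures from a monitoring system instead of a random number."""
    return random.random()


while True:
    if current_load() > MAX_LOAD:
        # Activate another container in seconds, with no human in the loop.
        worker = client.containers.run("alpine", "sleep 300", detach=True)
        print(f"Demand spike: started extra worker {worker.short_id}")
    time.sleep(5)  # re-evaluate the policy every few seconds
```

In practice, an orchestration platform would enforce policies like this across a whole fleet of servers, but the core idea is the same: the policy, not a person, pushes the button.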
This technology is very new. But we’re already running live tests internally. We’ll be bringing this capability into our network soon.
We also want the container format to be an open standard, so you can write a virtualized network function once and run it anywhere.
AT&T recently joined two groups spearheaded by the Linux Foundation: the Open Container Initiative and the Cloud Native Computing Foundation. Our goal is to help create an open industry standard for container and micro-services technology.
We have an ambitious goal of virtualizing 75 percent of our network by 2020. Containers and micro-services are key to reaching that goal.