Every minute, we collectively create millions of videos, pictures, GIFs and emails. In fact, more than 90 percent of all the data in the world was created in just the last two years.

Think of it: in the short time it takes you to read this blog post, the world will create terabytes of data.

All of this data has to live somewhere. Personally, you might use an online cloud storage service for a small fee, if not for free. But for large companies like AT&T that store hundreds of petabytes of data, storage can be expensive.

Costs for data storage systems range from $5,000 to $15,000 per terabyte. (For context, there are one million megabytes in a terabyte and one billion megabytes in a petabyte.)
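To put those figures in perspective, here is a quick back-of-the-envelope calculation; the per-terabyte prices are the range quoted above, and the 100-petabyte volume is purely an illustrative assumption:

```python
# Back-of-the-envelope storage cost estimate.
# The $5,000-$15,000 per-terabyte range comes from the figures above;
# the 100 PB example volume is purely illustrative.

COST_PER_TB_LOW = 5_000    # dollars per terabyte
COST_PER_TB_HIGH = 15_000  # dollars per terabyte

volume_pb = 100                # hypothetical data volume in petabytes
volume_tb = volume_pb * 1_000  # 1 petabyte = 1,000 terabytes

low = volume_tb * COST_PER_TB_LOW
high = volume_tb * COST_PER_TB_HIGH

print(f"{volume_pb} PB at ${COST_PER_TB_LOW:,}/TB:  ${low:,}")
print(f"{volume_pb} PB at ${COST_PER_TB_HIGH:,}/TB: ${high:,}")
# -> roughly $500 million to $1.5 billion in raw storage cost,
#    before the projected doubling of storage needs.
```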

And IT managers expect storage needs to more than double over the next two years.

Businesses also have very specific needs for how they access their data. They need reliable and ready access. They also need to know that – regardless of where it’s stored – data is secure and won’t be lost in the event of an emergency. This is critical to the bottom line. And it’s even more important as businesses become increasingly data-driven.

So the question is: How can we keep storage costs in line, even as data volumes continue to grow?

AT&T Labs researchers are using some of the same software-defined networking principles that are part of our network transformation. We’re separating software from hardware. And we’re building open platforms with intelligence and control in the software.

All of this allows us to roll out new services and features more quickly, and now we’re applying the same approach to storage.

We created a proof-of-concept technology called Software-Defined Storage (SDS). It adds a software layer on top of commercial disks to address enterprise cloud storage needs while remaining cost-effective.

When we set out to create SDS, we wanted to solve two problems:

  • Can we get the reliability, availability, and redundancy we need to meet service level agreements while still doing it cost-effectively?
  • Can we change storage requirements in real time while ensuring every parameter operates as efficiently as possible?

The result: a fully automated way to build custom storage plans in minutes. Normally, it takes weeks to find the equipment and engineers needed to deploy customized storage services.

Using a web interface, users can “tweak the dials” on the parameters that matter most to them, like cost, reliability or performance. SDS’s visualization tool lets them see the impact as they adjust individual requirements.
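As a rough sketch of how that kind of dial-turning could work behind the scenes (the scheme names, parameter names, and numbers below are illustrative assumptions, not the actual SDS interface):

```python
# A minimal sketch of the "tweak the dials" idea. The scheme names, parameter
# names, and numbers below are illustrative assumptions, not the actual SDS API.

from dataclasses import dataclass

@dataclass
class Scheme:
    name: str
    overhead: float          # raw bytes stored per byte of user data
    failures_tolerated: int  # simultaneous disk losses survivable without data loss
    repair_reads: int        # blocks that must be read to rebuild one lost block

CANDIDATES = [
    Scheme("triple replication", overhead=3.0, failures_tolerated=2, repair_reads=1),
    Scheme("erasure code 6+3",   overhead=1.5, failures_tolerated=3, repair_reads=6),
    Scheme("erasure code 10+4",  overhead=1.4, failures_tolerated=4, repair_reads=10),
]

def pick_scheme(min_failures: int, max_repair_reads: int):
    """Return the lowest-overhead scheme that satisfies both dials, or None."""
    feasible = [s for s in CANDIDATES
                if s.failures_tolerated >= min_failures
                and s.repair_reads <= max_repair_reads]
    return min(feasible, key=lambda s: s.overhead) if feasible else None

# Turning the reliability and performance dials changes the recommendation,
# and (through the overhead factor) the cost of the plan:
for dials in [(2, 1), (2, 6), (3, 10), (4, 10)]:
    s = pick_scheme(*dials)
    print(dials, "->", f"{s.name} ({s.overhead}x raw storage)" if s else "no feasible scheme")
```

A real planner would weigh many more parameters, such as latency or tenant isolation, but the shape of the problem is the same: each dial narrows the feasible configurations, and the visualization shows what the remaining ones cost.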

With SDS you can customize a multitenant cloud and configure storage applications at lower costs.

SDS stores data backups more efficiently, too. Rather than pure triple redundancy, we use erasure coding. It’s a popular technique, and we used it to build an algorithm that reduces raw storage needs while protecting data integrity even better than triple redundancy does.
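For readers unfamiliar with erasure coding, here is a toy illustration of the idea using a single XOR parity block; production systems typically use stronger codes (such as Reed-Solomon) with multiple parity blocks, which is how the protection can exceed what triple redundancy provides:

```python
# Toy erasure-coding example: two data blocks plus one XOR parity block.
# This is the simplest possible code (tolerates one lost block at 1.5x overhead);
# real systems use stronger k+m codes with several parity blocks.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# Split an object into two equal data blocks and compute one parity block.
d1 = b"hello wo"
d2 = b"rld!!!!!"
parity = xor_bytes(d1, d2)

# Raw storage: 3 blocks for 2 blocks of data = 1.5x overhead,
# versus 3.0x for keeping three full copies of the object.

# If either data block is lost, it can be rebuilt from the survivor and the parity:
assert xor_bytes(d2, parity) == d1
assert xor_bytes(d1, parity) == d2
print("recovered object:", (xor_bytes(d2, parity) + d2).decode())
```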

Today we’re beta testing the SDS system internally. Our goal is to prove its effectiveness at scale and harden the technology, including erasure coding, for use across our data centers.

Chris Rice, Senior Vice President – AT&T Labs, Domain 2.0 Architecture and Design