
Like “API”, Is “Storage Tier” Redefining Itself?


There is an interesting pattern in high-tech that isn’t much mentioned but happens pretty regularly – a good idea gets adapted and moved to new uses, raised a bit in the stack or revised to keep up with the times. The quintessential example of this phenomenon is the progression from “subroutines” to “libraries” to “frameworks” to “APIs” to “Web Services”. The progression is logical and useful, but the assembler and C programmers who first stuffed things into reusable subroutines could not have foreseen the entire spectrum of what their “useful” idea would become over time. I had the luck of developing in all of those stages. I wrote assembly routines right before they were no longer necessary for everyday development, and wrote web services/SOA routines for the first couple of years they were around.


YES INDEED, THIS IS A STORAGE BLOG

I think we see the same thing happening in storage and don’t even realize it yet, which is kind of cool, because we all get a ring-side seat if we know where to look.

When the concept of tiering first came around – I am not certain if it was first introduced with HSM or ILM, someone can weigh in on that distinction in the comments – it was aimed at the difference in performance between disks and arrays of disks. The point was that your more expensive disk wasn’t necessary for everyday tasks. And it was a valid point. Tiering has become a part of most large organizations’ storage management plans, just because it makes sense.

But the one truth about technology over the last twenty or thirty years is that it absolutely does not stand still. The moment you think it has plateaued, something new comes along from left field and changes the landscape. Storage has seen no shortage of this process, with the disks in use when tiering was introduced being replaced by SAS and SATA, then eventually SATA II. The interesting thing about these changes is that the reliability and access-speed differences have shrunk, as a percentage, since the days of SCSI vs. ATA. The disks just keep getting more reliable and faster, and with RAID everywhere, you get increased reliability through data redundancy. The amount of reliability you gain depends upon the level of RAID you choose, but that’s relatively common knowledge at this point, so we won’t get too deep into it here.
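For anyone who does want the quick back-of-the-envelope version, here is an illustrative sketch of that RAID trade-off in Python – the failure-tolerance and capacity figures are the standard textbook values, not a model of any particular array:

```python
# Back-of-the-envelope view of the RAID trade-off: more redundancy generally
# means more drive failures survived, but less usable capacity. These are the
# standard textbook figures, not vendor-specific numbers.

RAID_LEVELS = {
    # level: (drive failures survived, usable capacity for n drives of equal size)
    "RAID 0":  (0, lambda n: n),        # striping only, no redundancy
    "RAID 1":  (1, lambda n: n / 2),    # mirroring (assumes mirrored pairs)
    "RAID 5":  (1, lambda n: n - 1),    # single parity
    "RAID 6":  (2, lambda n: n - 2),    # dual parity
    "RAID 10": (1, lambda n: n / 2),    # mirrored stripes (survives at least one)
}

def usable_tb(level: str, drives: int, drive_tb: float) -> float:
    """Usable capacity in TB for a RAID level, drive count, and drive size."""
    _, capacity_fn = RAID_LEVELS[level]
    return capacity_fn(drives) * drive_tb

for level, (failures, _) in RAID_LEVELS.items():
    print(f"{level}: survives {failures} failure(s), "
          f"{usable_tb(level, drives=8, drive_tb=2.0):.1f} TB usable from 8 x 2 TB")
```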

Image Courtesy of www.clickonF5.org


BRING ON THE CHANGE!

And then the first bombshell hit: SSD. The performance difference of SSD versus hard disk is astounding and very real. It’s not so close a call that you could just choose to implement the slower technology (as is true among hard disks); if you need the performance level of SSD for a given application, there are very few options but to bite the bullet and buy SSD. But it’s fast. It’s very fast. And prices are coming down.

Now the second bombshell hits: Cloud Storage. It’s immense. It’s very immense. And with a Cloud Storage Gateway, it looks like all your other storage – or at least all your other NAS storage. Companies like Cirtas and Nasuni are making cloud usable with local caches and interfaces to cloud providers. Some early reports, like this one from Storage Switzerland, claim that they make access “as fast as local storage”, but I’ll wager that’s untrue, simply because the cache IS local storage; everything else has to go out over your WAN link. By definition that means the aggregate is slower than local disk access unless every file operation is a cache hit, which mathematically is highly improbable. But even so, if these gateways speed up cloud storage access and make it enterprise friendly, you now have a huge – potentially unlimited – place to store your stuff. And if my guess is right (it is a guess; I have not tested this at all, and I don’t know of any ongoing testing), our WOM product should make these things perform like LAN storage, thanks to the combination of TCP optimizations, compression, and in-flight de-duplication reducing the burden on the WAN.
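Here is a quick back-of-the-envelope sketch of that cache-hit argument, with made-up latency numbers – they are assumptions for illustration, not measurements of any gateway product:

```python
def effective_latency_ms(hit_ratio: float,
                         cache_latency_ms: float = 5.0,
                         wan_latency_ms: float = 80.0) -> float:
    """Average access latency through a cloud storage gateway.

    hit_ratio        -- fraction of operations served from the local cache
    cache_latency_ms -- local (cache/disk) access time; illustrative assumption
    wan_latency_ms   -- round trip to the cloud provider; illustrative assumption
    """
    return hit_ratio * cache_latency_ms + (1.0 - hit_ratio) * wan_latency_ms

# Anything short of a 100% hit rate pulls the average above local-disk speed:
for hit in (1.0, 0.95, 0.80, 0.50):
    print(f"hit ratio {hit:.0%}: ~{effective_latency_ms(hit):.1f} ms average access")
```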


AND THAT’S WHERE IT GETS INTERESTING

So your hard disks are now so close in performance and reliability – particularly after taking RAID into account – that the importance of the old definitions is blurred. You can run tier one on SATA II disks. No problem; lots of small and medium-sized orgs DO have exactly that arrangement.

But that implies that what used to be “tier one” and “tier two” have largely merged, the line between them blurring – just in time for two highly differentiated technologies, SSD and cloud, to come along. I have a vision of the future where high-performance, high-volume sites use SSD for more and more of tier one, RAIDed SAS and/or SATA drives for tier two, and cloud storage for backup/replication/tier three. Then tiers have meaning again – tier one is screaming fast, tier two is the old standby, combining fast and reliable, and tier three is cloud storage (be it public or private; others can argue that piece out)…

And that has implications for both budgeting and architecture. SSD is more expensive. Depending upon your provider and usage patterns, cloud is less expensive (than disk, not tape). That implies a shift of dollars from the low end to the high end of your spending patterns. Perhaps, if you have savvy contract negotiators, it means actual savings overall on storage expenses, but more likely you’re just smoothing the spending out by paying monthly for cloud services instead of hitting the occasional “Oh no, we have to buy a new array” moment.


A BRIGHT, TIERFUL FUTURE

But tiering is a lot more attractive if you have three genuinely distinct tiers that serve specific purposes. Many organizations will start with tape as the final destination for backup purposes, but I don’t believe they’ll stay there. Backing up to disk has a long history at this point, and if those backups go to disk that you can conceivably keep for as long as you’re willing to pay for it, I suspect archival will become the primary focus of tape going forward. I don’t predict that tape will die; it is just too convenient and too intertwined to walk away from. And it makes sense for archival purposes – “we have to keep this for seven billion years because of government regulation, but we don’t need it” is a valid use for storage that you don’t pay for by the month and that stays stable over longer periods of time.

Of course I think you should throw an ARX in front of all of this storage to handle the tiering for you, but there are other options out there; something will have to make the determination, so find what works best for you.
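That determination is essentially a placement policy. Purely as an illustration – and not how ARX or any other product actually implements it – here is a toy sketch that keys the decision off last-access age, with hypothetical tier names, thresholds, and share path:

```python
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional

# Toy placement rule keyed on last-access age; thresholds and tier names are
# arbitrary examples, not anyone's recommended policy.
TIER_RULES = [
    (timedelta(days=7),  "tier1-ssd"),    # hot data: screaming fast
    (timedelta(days=90), "tier2-disk"),   # warm data: the old standby
]
COLD_TIER = "tier3-cloud"                 # everything else heads for the cloud

def pick_tier(path: Path, now: Optional[datetime] = None) -> str:
    """Choose a tier for a file based on how long since it was last accessed."""
    now = now or datetime.now()
    age = now - datetime.fromtimestamp(path.stat().st_atime)
    for threshold, tier in TIER_RULES:
        if age <= threshold:
            return tier
    return COLD_TIER

# Example: classify everything under a (hypothetical) share.
if __name__ == "__main__":
    for f in Path("/mnt/share").rglob("*"):
        if f.is_file():
            print(f, "->", pick_tier(f))
```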

Not so long ago, I would have claimed that most organizations didn’t need SSD, and only heavily stressed databases would actually see the benefit. These days I’m more sanguine about the prospects. As prices drop, ever more uses for SSDs are apparent. As of this writing they’re running $2 - $2.50 per gig, a lot more than SATA or even SAS, but most companies don’t need nearly as much tier one storage as they do tier two.
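To put rough numbers on that last point, the sketch below uses the SSD ballpark above plus assumed (not quoted) per-gig figures for the other tiers, normalized so they can be added together, just to show that a modest tier one keeps the SSD premium from dominating the total bill.

```python
# Illustrative $/GB figures: the SSD number is the midpoint of the $2 - $2.50
# range above; the disk and cloud numbers are assumptions for the sake of the
# arithmetic, not quotes from any vendor.
PRICE_PER_GB = {
    "tier1-ssd":   2.25,   # SSD, from the ballpark above
    "tier2-disk":  0.30,   # assumed SAS/SATA array cost
    "tier3-cloud": 0.15,   # assumed cloud cost, normalized per GB for comparison
}

def storage_spend(gb_by_tier: dict) -> float:
    """Total spend, in dollars, for a given capacity mix across tiers."""
    return sum(PRICE_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())

# A small tier one next to much larger tier two/three keeps the SSD premium modest:
mix = {"tier1-ssd": 2_000, "tier2-disk": 50_000, "tier3-cloud": 200_000}
print(f"${storage_spend(mix):,.0f} for {sum(mix.values()):,} GB")
```

Run it with your own capacities and contract prices; the shape of the answer, not the exact dollars, is the point.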


WATCH FOR IT

That’s the way I see it falling out, too – prices on SSD will continue to drop toward SAS/SATA levels, and you’ll want to back up tier one a lot more – which you should anyway – while cloud storage started out pretty inexpensive and will likely continue to get cheaper while it all gets sorted out.

And like the “subroutine”, traditional hard disks standing alone will only be found in the data center for small or very special-purpose uses. Like the subroutine, they will give way to more specialized collections of storage on one end and “inlined” SSDs on the other.

Until the Next Big Thing comes along anyway.

Image compliments of steampunkworkshop.com. Go ahead, click on the image – it’s a steam USB charger, the next big thing…



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.