The Problem with SDS under Cinder – Part 2

Ok… so there have been all sorts of comments and follow-up blogs from my initial posting, the most recent of which came from my good friend Kenneth Hui: The Problem SDS Under OpenStack Cinder Solves.  I’m likely to say some “unpopular” things in this post and I really hope that Ken and the folks at EMC don’t take it the wrong way.  They’re a great group, I really enjoy debating with them, and even more, I enjoy talking about OpenStack and the common interests that we have.  I also really value Ken’s viewpoint and friendship; he’s a great guy and I have the utmost respect for him both personally and technically.

Ken makes some pretty good points (as does the rest of the EMC team).  Here’s the problem though: I don’t see VIPR (or any of the sudden influx of these storage abstraction solutions calling themselves SDS) really doing anything that unique or different.  Ken makes a great point about true SDS being a separation of control and data planes and, most importantly, about that functionality placing requirements on the storage platform itself.  I couldn’t agree more, and I don’t see how VIPR is offering me anything different here from what we’re already doing in Cinder; more importantly, I don’t see how it could.

Another issue I’ve been having is the mixed messages from EMC on what VIPR is and what it supports.  In Atlanta I was told I was incorrect and that VIPR was strictly for consolidation of EMC products, but then I see things like this: Which Storage Platforms are supported by VIPR SRM.  Many of those devices already have drivers in Cinder and get the same abstraction there that VIPR would provide.  Are you saying that VIPR has some magic that none of us know about that actually changes what can be done with the device?  Or that EMC and the VIPR team have discovered some hidden programmatic interface to NetApp, IBM, HDS, and HP devices that the engineers who are full-time contributors to Cinder simply don’t know about?  I’m failing to see what the value-add is here for OpenStack (or anybody, for that matter).  What is VIPR actually providing that Cinder doesn’t or can’t in terms of these block devices?

Ken also mentions things in his post like “exposing unique features,” but I don’t understand how that would be done any differently with VIPR than it is in Cinder today.  My point here is that you’re using the same API, so how is it different?  It seems to me you’d still use the same mechanisms we use in Cinder today: volume types and capability-based scheduling.
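
To make that concrete, here’s a rough sketch of how that mechanism works in Cinder today: a backend driver reports its capabilities through get_volume_stats(), and an operator maps a backend-specific feature to a volume type via extra specs. The driver and capability names below (ExampleDriver, example_dedup_support) are hypothetical, purely for illustration:

```python
# Illustrative sketch only -- ExampleDriver and example_dedup_support are
# made-up names, not any shipping driver or capability.
from cinder.volume import driver


class ExampleDriver(driver.VolumeDriver):
    """Hypothetical backend driver that advertises a 'unique feature'."""

    def get_volume_stats(self, refresh=False):
        # The scheduler's capabilities filter compares these reported
        # capabilities against the extra specs on a volume type.
        return {
            'volume_backend_name': 'example_backend',
            'vendor_name': 'Example',
            'driver_version': '1.0',
            'storage_protocol': 'iSCSI',
            'total_capacity_gb': 1024,
            'free_capacity_gb': 512,
            'example_dedup_support': True,  # the backend-specific feature
        }

# An operator would then expose that feature through a volume type, e.g.:
#   cinder type-create dedup
#   cinder type-key dedup set capabilities:example_dedup_support='<is> True'
```

The point being: whether the thing reporting that capability is a native driver or an abstraction layer sitting behind the same driver interface, the API and the scheduling mechanism it plugs into don’t change.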

Finally, one of the most common arguments I get from EMC on this topic is “Neutron does it this way.”  Well… comparing block storage and networking isn’t very fair in my opinion.  Networking is vastly different and, frankly, more complex.  That being said, Neutron has some challenges as a result of this model (in my opinion, and in others’ as well).  I’m not criticizing Neutron or the great folks that are working on it in any way at all, but I will say that it’s probably not a good argument to use here (at least not with me).

So, where are we?  With all of my ranting and gibberish that frankly probably nobody really cares about anyway, I’ve come pretty close to just accepting the fact that the VIPR driver is likely going to be included in Cinder for the Juno release.  I’m struggling with this because on the tail of it comes everybody else’s duplicate abstraction (Atlantis, ProphetStor, and a list of others that may not be “public” yet).  I’m not sure how to handle this; I still would prefer that they aren’t shipped in OpenStack but are offered as options for folks to use with OpenStack as outside products if they so desire.  Alternatively, I don’t think they should be called drivers; I think they should probably be plugins that are designed higher up in the architecture (sitting just below the API), so if you don’t want to use Cinder for anything other than an API, that’s your choice; go for it.

At any rate, the only good thing is that they say imitation is the most sincere form of flattery; if that’s the case, all of the Cinder team (and the original nova-volume team) should be pretty flattered, because it seems that there are a vast number of vendors out there imitating Cinder and trying to productize it.

7 thoughts on “The Problem with SDS under Cinder – Part 2”

  1. John,

    Great points in response to my post. I think we need to have further discussion on the capabilities of SDS solutions like ViPR, particularly with regard to how it differs from a typical storage virtualization solution. Chad’s post, which breaks out the data services from the control plane services of ViPR, will be helpful here.

    I hope though that we can have some agreement that an SDS plugin to Cinder, in and of itself, does not necessarily preclude a vendor from being a strong participant in the project. I also hope that we can agree we should treat all vendors fairly and require the same commitment from all to the community, obviously in relation to the available resources and capabilities of each vendor.

    Ken

    • Hey Ken,
      Yes, I think there’s some common ground here and I’m certainly open to hearing more. Also, on your point about fair treatment of vendors: I want to make sure everybody knows I have ABSOLUTELY NO intention of making things more difficult for one vendor versus another. I’ve worked really hard while being involved in OpenStack to remain vendor-neutral, and I hope that I’ve done a good job of that; if I haven’t, somebody needs to point out where I’ve messed up and help me “fix” it. I most certainly agree that there should be (and I believe there is) a standard set of requirements for all vendors; I think some of this concern might be around things like 3rd-party CI. That’s a new requirement for all Cinder back-ends, and what gets tricky for things like VIPR is that there can be quite a few of these. Do we require CI on “ALL” supported devices, just a subset, just one, etc.? I don’t have an answer here; some folks on the Cinder team say “all,” some say “a subset of supported.” I don’t know the answer, but I don’t think it’s realistic to say you run CI on every single device that VIPR supports.

  2. Pingback: The Problem SDS Under OpenStack Cinder Solves | Cloud Architect Musings

  3. John,

    As an 18-year storage veteran who has worked in the trenches, in the channel, and with the manufacturer, I agree with your assessment of the situation and approach with the Cinder project. I believe that the project has an obligation to the community to ensure that vendors who benefit from the work of others (including other manufacturers’ contributions, in many cases) contribute to the community in a substantive way.

    I tend to agree with your preference that drivers, or as you more aptly titled them ‘plug-ins’, should be offered and supported by the manufacturer rather than packaged with the core distribution, unless the inclusion provides benefit beyond the single manufacturer’s ecosystem, or the code base is provided as an open source addition to the project.

    I also feel, as you do, that EMC VIPR is performing many of the same functions as Cinder, and that the functionality, when layered under Cinder, is somewhat duplicative. On the other hand, I fully understand that there may be instances where EMC customers wish to utilize VIPR to benefit other areas within their datacenter, as Kenneth mentioned in his earlier posting. Hitachi provides similar functionality with their VSP and now G1000 enterprise arrays, abstracting the control and data planes and increasing horizontal scalability and mobility within the storage environment. While I think that maximum benefit is provided when a single control plane is adopted across the enterprise, I think customers will initially take a silo approach to deployments, and products that provide a degree of control plane abstraction apart from OpenStack may prove beneficial.

    Steve Jobs made a comment at the Worldwide Developers Conference back in 1997: “You’ve got to start with the customer experience and work back toward the technology – not the other way around.” I believe that customers are looking at the OpenStack project as a way to provide a unified control plane within the datacenter that improves efficiency and time to delivery for IT services, while at the same time enabling a broad degree of choice from a hardware perspective. When choice is exercised, it should have minimal or no impact on delivery workflows.

    This flexibility that customers are ultimately looking for at times runs contrary to manufacturer interests. This presents an interesting situation in which the open source community has the “wolf by the tail,” so to speak: it can neither afford to hang on nor turn loose. On one hand, vendor involvement raises awareness of the open community’s efforts, lends support and credibility to the approach, and in many cases brings direct code contributions to the core efforts and more. On the other hand, we often see many of the specific interests of the vendor woven directly into their involvement, which may not necessarily be beneficial to the community or in line with the spirit of a given project. The OpenStack board, and PTLs such as yourself, have been tasked with a delicate balancing act weighing the interests of the two, hopefully providing a path for vendor inclusion that is fair and balanced so that the customer, community, and contributor ultimately win.

    Keep up the good work – there are plenty of people listening who care, and who are most appreciative of your efforts.

    My .02,

    Chris Williams
    Director of Technology
    Alliance Technology Group LLC

  4. Pingback: Technology Short Take #43 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, networking, cloud, servers, & Macs

  5. Pingback: Vendor hijack – contributions from vendors to open source projects | Die wunderbare Welt von Isotopp
