I’m moving to github.io and Jekyll

I’ve had j-griffith.github.io for a while and played with Jekyll a bit.  I decided I wasn’t doing myself any favors having things posted in multiple places, so it was time to consolidate.

I’m a big fan of github, I use it daily… so I thought I’d give it a shot as a hosting site for my blog.  So far it’s been fun to figure out Jekyll and play around with it.  I’ve also tried Hugo, but didn’t have as much luck with that.

I’m not sure that Jekyll is better/worse than WordPress in any way (WordPress is certainly super easy).  I figure I’ll try it out for a while and see how it goes.  I’d love to hear feedback from others that have switched, decided not to, or even better those that switched and switched back to WordPress.

Anyway… I’ll try it out for a bit, and in the meantime, check me out over on my github page!

OpenStack Cinder’s reference implementation

As many are fully aware, over the years I’ve had pretty strong opinions about good and bad things in the OpenStack Cinder project.  Let’s be clear: most of the “bad” is more a criticism of myself and how things evolved than of the project or the people contributing to it.

Lately there’s been a resurgence of the “LVM as a reference implementation” discussion due to a comment I made on twitter about one-off vendor features being in the Cinder API.  This created quite the buzz, ranging from “LVM sucks” to “criticism doesn’t help the project”.  All of that aside, I wanted to point a few things out for people to think about on this topic.

First, keep in mind the purpose of a “reference” implementation.  The WHOLE point of a reference implementation is to provide a reference for people to use in development.  LVM has been the reference because it most closely fits the model that other backend devices use.  In particular, iSCSI targets attached to LVM volumes provide, in my opinion, a really flexible and dynamic transport/connect mechanism in a cloud environment.  Some argue that things like NFS are better, and maybe in some cases that’s true, but the original design and architecture in OpenStack was to provide Block Storage, so in that context iSCSI was a pretty cheap and flexible way to go about it.  I’d also point out that iSCSI and networking in general have come a VERY long way in the last 5+ years!  This isn’t your old iSCSI where people were trying to attach storage over a 1G WAN and shockingly had “issues”.

So, what’s wrong with LVM?  Well, if you ask me, the only thing wrong with it is that the OpenStack community has very few people that are really interested in paying any attention to it.  There was a comment last week to the effect that “using LVM as a reference implementation makes me sad”; my response was “why?”  There seems to be a misconception that there are things LVM can’t do that other OSS products can (in a Cinder context), but I’ve yet to figure out what those things are.  If you look at the code on Github and compare the features in LVM against the features in other OSS drivers, you won’t find “missing” API calls.  Of course, have fun trying to figure out the structure with the whole feature-class thing going on in the base driver to link things together and figure out what’s actually real and what’s not (I’ll save that whole topic for another day… or just keep it between me and the Cinder community).

The point is, this statement that “LVM is bad because it can’t do things” is just plain false.  There’s no Cinder API feature that it’s not capable of; the only exception is some of the custom one-off API calls that were added specifically for one or two proprietary vendors, which nobody was creative enough, or willing to spend the time and effort, to implement in the LVM driver.  We used to have a *rule* that if you wanted to add a new feature or API call to Cinder, you had to FIRST implement it in the LVM driver.  Over time however Cinder grew, and folks decided this wasn’t such a big deal and there should be exceptions.  So some vendors were allowed to add their own custom things.  The result: API calls in Cinder that may or may not work, and nobody EVER spending the time to go back and implement them in the Reference/LVM driver.  What’s also very telling here for me is that the features I was complaining about are NOT available in the other Open Source drivers either, Ceph included.  I’d also continue to assert, as I have, that the ONLY thing lacking in the LVM driver is interest from developers, plus the perpetual off-handed comments by people who don’t actually know anything about how LVM works or the things that can be done with it.  Rather than say “LVM sucks”, maybe you should educate yourself on LVM a bit more and write some cinder/volume/drivers/lvm.py code.  I’ll admit, it requires a bit more effort than some other backends, and you certainly have to be creative… but that’s the fun part.

That brings me to my next point.  There have been a few snarky comments about Ceph as the reference implementation.  Even better, there have been some suggestions that “Vendors are thwarting this”; I find this almost as hilarious as I find it sad.  I’ve actually proposed the topic to a number of folks over the years, including at the Summit in Paris on a public panel with Sage Weil (creator and overall Ceph GURU).  The overwhelming response was “probably not a good idea”.  The reason most folks had that opinion was that Ceph is VERY different from any other storage backend.  In terms of a reference, it doesn’t necessarily help other technologies (iSCSI being predominant).  Now, let me also say it again publicly and very clearly: I think Ceph is AWESOME, as do a lot of people deploying OpenStack.  It is not, however, the ONLY option.  If we wanted to deliver a product instead of a framework then yes, that might be what we’d want to consider.  The fact is though, we (OpenStack) are pretty explicit that we are NOT delivering a product, but we ARE building a framework.  I’d also point out that there are a lot of use cases out there, and there is no one product/backend that meets all of them.  There are very few large deployments these days that only have a single backend for Cinder.  Yes, I know there are exceptions here, but I’m talking about the majority.

So where are we now?  Well, given where we are, maybe we should revisit this.  Here’s why: it turns out that early on a number of vendors didn’t follow the reference implementation anyway.  They either didn’t understand how it worked, didn’t care or didn’t like it.  So a number of the methods in the internal API do “different” things (or nothing) depending on the driver.  The problem with this is that a new developer then comes along and wants to implement a driver.  They happen to notice Vendor-X’s driver does things a certain way that seems to align with their product, so they use it as a template.  Now we have another driver that’s diverged.  Then the next developer comes along… he/she ends up taking some pieces from one driver, combined with some pieces from another driver that somebody in IRC worked on and pointed to in order to answer some questions.  What you end up with is inconsistency across all the drivers and a very tangled mess.  Remember what I said about criticism earlier?  Well, this is something that I criticize myself for.  As the person that started Cinder and the PTL for the first few years, I should have caught these disparities in reviews and made sure they were fixed up.  I also should’ve made the reference cleaner and clearer so that there wouldn’t be questions about it.  And I should’ve continued my objections via -2’s for adding API methods that were NOT in the reference implementation.  Lessons learned, and that’s where the comments I’ve made on twitter over the past week came from.

I’m all for revisiting the topic of Cinder’s reference implementation, BUT do me a favor; if you’re going to participate, please do some homework.  Make sure you at least have a general idea of the capabilities, the use cases and the problems we’re trying to solve.  Also, ask the question: are we building a reference, or a default?  Because they’re not necessarily the same thing.


OpenStack Cinder – Volume Attach flow

Intro

Something that comes up fairly often in IRC is “how does attach work?”  From a Cinder perspective the code path is fairly simple, but it seems to throw people for a loop.  So I figured, why not take a look at the reference implementation and walk through the steps of a volume attach from the Cinder side.

Our Reference

The Cinder project includes a reference driver, so we’ll use that to walk through the code.  The reference driver is built in Cinder using a combination of LVM and iSCSI targets (tgtadm or LIO most commonly).  As with everything in OpenStack you have choices; we’re just going to focus on the default options here: thick-provisioned LVM and TgtAdm for the iSCSI component.  We’re also using the default libvirt/KVM config on the Nova side.

A few high level details

It’s important to understand that most of the work with respect to attaching a volume is done on the Nova side.  Cinder is mostly just responsible for providing a volume’s information to Nova so that it can make an iSCSI attach on the Compute Node.
Communication between Nova and Cinder happens via the cinderclient, the same cinderclient a command-line user accesses; however, Nova uses a special policy that allows it to access some details about a volume that regular users can’t, as well as a few calls you might not have seen before.
So what we’re going to do is look at an OpenStack deployment that has a volume ready to go (available) and an Instance that’s up and ready.  We’ll focus on the calls from Nova to Cinder and Cinder’s responses.  In a follow-up post we’ll dig into what’s happening on the Nova side.

Process flow

As I mentioned, things on the Cinder side are rather simple.  The attach process is just three calls to Cinder:
  1. reserve_volume
  2. initialize_connection
  3. attach
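
Strung together, the sequence from the caller’s (Nova’s) side looks roughly like the sketch below, using python-cinderclient.  This is not Nova’s actual code: the credentials, IDs and connector values are just placeholders, and all error handling is omitted.

from cinderclient import client

AUTH_URL = 'http://keystone.example.com:5000/v2.0'      # placeholder endpoint
VOLUME_ID = '8b1ec3fe-8c57-45ca-a1cf-a481bfc8fce2'      # an 'available' volume
INSTANCE_UUID = '9afb79e1-f07a-4c98-9d6c-9ce66dcbf904'  # the target instance

cinder = client.Client('1', 'admin', 'secrete', 'demo', AUTH_URL)
volume = cinder.volumes.get(VOLUME_ID)

# 1. reserve_volume: only succeeds if the volume is 'available', and flips
#    its status to 'attaching' so nothing else can grab it
cinder.volumes.reserve(volume)

# 2. initialize_connection: pass in a connector describing the compute
#    host's initiator, get back the target/CHAP info needed for the attach
connector = {'initiator': 'iqn.1993-08.org.debian:01:2227dab76162',
             'ip': '11.0.0.10',
             'host': 'fluffy-compute'}
connection_info = cinder.volumes.initialize_connection(volume, connector)

# 3. attach: once the iSCSI attach has actually been made on the compute
#    node, tell Cinder so it can mark the volume 'in-use' and record where
#    it's attached
cinder.volumes.attach(volume, INSTANCE_UUID, '/dev/vdb')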

reserve_volume(self, context, volume)

context: security/policy info for the request
volume: reference object of the volume being requested for reserve
Probably the simplest call into Cinder.  This method simply checks that the specified volume is in an “available” state and can be attached.  Any other state results in an Error response notifying Nova that the volume is NOT available.  The only valid state for this call to succeed is “available”.
If the volume is in fact available, we immediately issue an update to the Cinder database and mark the status of the volume as “attaching”, thereby reserving the volume so that it won’t be used by another API call anywhere else.
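
In code terms the idea is roughly this (a simplified sketch of the logic described above, not a copy of the actual Cinder code):

def reserve_volume(self, context, volume):
    # The only valid starting state is 'available'
    if volume['status'] != 'available':
        msg = "Volume status must be available, but is: %s" % volume['status']
        raise exception.InvalidVolume(reason=msg)
    # Immediately mark it 'attaching' so no other request can use it
    self.update(context, volume, {'status': 'attaching'})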

initialize_connection(self, context, volume, connector)

context: security/policy info for the request
volume: reference object of the volume being attached
connector: information about the initiator if needed (i.e. targets that use access groups, etc.)

This is the only one of the three calls that has any significant work to do, and the only one that has any real interaction with the storage backend or driver.  This method is responsible for building and returning all of the info needed by Nova to actually attach the specified volume.  It returns vital information to the caller (Nova), including things like CHAP credentials, IQN and LUN information.  An example response is shown here:

{'driver_volume_type': 'iscsi',
 'data': {'auth_password': 'YZ2Hceyh7VySh5HY',
          'target_discovered': False,
          'encrypted': False,
          'qos_specs': None,
          'target_iqn': 'iqn.2010-10.org.openstack:volume-8b1ec3fe-8c57-45ca-a1cf-a481bfc8fce2',
          'target_portal': '11.0.0.8:3260',
          'volume_id': '8b1ec3fe-8c57-45ca-a1cf-a481bfc8fce2',
          'target_lun': 1,
          'access_mode': 'rw',
          'auth_username': 'nE9PY8juynmmZ95F7Xb7',
          'auth_method': 'CHAP'}}
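
For reference, the connector that Nova passes in is itself just a dict describing the initiator on the compute host.  For the iSCSI/LVM reference it looks something like this (the values here are made up, and the exact keys vary a bit by transport and release):

connector = {'initiator': 'iqn.1993-08.org.debian:01:2227dab76162',
             'ip': '11.0.0.10',
             'host': 'fluffy-compute',
             'multipath': False}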

In the process of building this data structure, the Cinder manager makes a number of direct calls to the driver.  The manager itself has a single initialize_connection call of its own, but it ties together a number of driver calls from within that method.

        driver.validate_connector
            Simply verifies that the initiator data is included in the passed-in
            connector (there are some drivers that utilize pieces of this connector
            data, but in the case of the reference it just verifies it’s there).

        driver.create_export
            This builds the target-specific, persistent data associated with a volume.
            The method is responsible for building an actual iSCSI target and
            providing the “location” and “auth” information that will be used to
            form the response data in the parent request.  We call this info the
            model_update, and it’s used to update vital target information
            associated with the volume in the Cinder database.

        driver.initialize_connection
            Now that we’ve actually built a target and persisted the important
            bits of information associated with it, we’re ready to actually assign
            the target to a volume and form the info we need to pass back out to
            our caller.  This is where we finally put everything together and form
            the example data structure response shown earlier.

            This method is a bit deceptive: it does a whole lot of formatting of
            the data we put together in the create_export call, but it doesn’t
            really offer any new info.  It’s completely dependent on the
            information that was gathered in create_export and put into the
            database.  At this point, all we’re doing is taking the various
            entries from the database and putting them together into the desired
            format/structure.

            The key method call for updating and obtaining all of this info was
            create_export.  This formatted data is then passed back up to the API
            and returned as the response back out to Nova.


At this point Nova can use the returned info to actually make the iSCSI attach on the compute node and then pass the volume into the requested Instance.  If there are no errors, the volume is now actually attached to the Instance as a /dev/vdX device and ready for use.  Remember, however, there’s still one Cinder call left in our list: attach.

attach(self, context, volume, instance_uuid, host_name, mount_point, mode)

context: security/policy info for the request
volume: reference object of the volume being attached
instance_uuid: UUID of the Nova instance we’ve attached to
host_name: N/A for the reference driver
mount_point: device mount point on the instance (/dev/vdb)
mode: The attach mode of the Volume (rw, ro etc)
This is another method that falls into a category I call “update methods”.  Its purpose is to notify Cinder to update the status of the volume to “in-use” (attached) and to populate the database with the provided information about where it’s attached.


This also provides a mechanism to send notifications and updates back to the driver.
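
In other words, the heart of it is a database update; something like the following simplified sketch (not the actual manager code, and the helper and field names here are just illustrative):

def attach_volume(self, context, volume_id, instance_uuid, host_name,
                  mountpoint, mode):
    # Mark the volume 'in-use' and record the attachment details
    # (instance, host and mount point) in the Cinder database
    self.db.volume_attached(context, volume_id, instance_uuid,
                            host_name, mountpoint)
    # ...plus notifications back out, and a hook into the driver so it can
    # react to the attach if it needs to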

OpenStack Live Migration With Cinder Backed Instances

Over the last several weeks I keep getting questions about Live Migration: does it work, how do you configure it, what are the requirements to use it, and so on.  To be honest, up until recently I hadn’t actually dug into Live Migration, so I tried it out (and failed), then started asking around and resorting to Google searches.  The good thing is, there’s TONS of information out there; the problem is, there’s TONS of information out there.  The bad thing is the info isn’t all that consistent.  Like many things, there’s always more than one way to achieve an end result, and Live Migration is really no different.  So I thought I’d try and write up what worked for me and was, in my opinion, the simplest method to use.

First off, for those that aren’t familiar; Live Migration of OpenStack Instances is the process of migrating an OpenStack Instance from one Compute Node to another with little or no downtime.  The docs page has a good description here: http://docs.openstack.org/admin-guide-cloud/content/section_configuring-compute-migrations.html

One of the big misconceptions about Live Migration is that if your Instances aren’t stored on a Shared FS or a shared backend device like Ceph, you can’t do Live Migration.  That’s not true at all; in fact, notice the doc has an entry specifically for Volume-Backed Instances.  So let’s take a look at how to create a Volume-Backed Instance using Juno and migrate it from one Compute Host to another.

First, my config: I’ve set up a multi-node deployment using devstack stable/juno.  I went ahead and configured Cinder to run multiple backends, in my case SolidFire and LVM, because I wanted to test both and make sure everything worked properly.  For info on Cinder multi-backend check out the wiki here: https://wiki.openstack.org/wiki/Cinder-multi-backend

I chose to specify ssh as my connection method for live-migration.  There are of course other ways to do this by configuring libvirt and qemu, which you can check out here: https://libvirt.org/uri.html, but rather than do any of that I just wanted to put a simple entry in my nova.conf file and use ssh.  Here’s the libvirt section of my nova.conf:


[libvirt]
inject_partition = -2
use_usb_tablet = False
cpu_mode = none
virt_type = kvm
live_migration_uri = qemu+ssh://root@%s/system

Notice I’m using root here in my connection.  I’ve used “regular” accounts with sudo permissions before to do remote connections to libvirt, but there’s something about the process from Nova that makes the key verification fail for any user other than ‘root’.  If anybody knows how to fix this up, let me know; it’d be great to learn what I’m missing here.  For those that are interested in more detail, my nova.conf other than the entry here is just the defaults from devstack.  Here’s a link to my nova.conf on the controller/compute: /etc/nova/nova.conf.  I’ve also added a link to my devstack local.conf; in this setup I actually started with a SolidFire deployment and manually updated cinder.conf to do multi-backend with LVM and added the default volume-type: devstack local.conf

So since we’re using ssh and root, you’ve probably guessed we’re going to need to set up ssh keys between our compute nodes.  I went ahead and set up root keys on each of the compute nodes, and installed each machine’s key on all of the nodes with a simple ssh-copy-id.  You may want to get a bit more sophisticated: distribute a key and modify /root/.ssh/config to include entries for each of the compute nodes.  Something like this should do the trick:


Host fluffy-compute
HostName fluffy-compute
User root
IdentityFile ~/.ssh/live-migrationkey.pem

Verify you can ssh as root to and from each of the Compute nodes, restart the Nova Compute services, and you *should* be ready to go.  Notice I said *should*; what I found out was that things didn’t work.  I’d get an error message stating that I couldn’t do live migration because I wasn’t using shared storage… huh?  What’s up with that?

So it turns out there was a bug introduced during the Juno development cycle.  You can check out the bug on LaunchPad here: https://bugs.launchpad.net/nova/+bug/1392773

Fortunately there’s a proposal up for master already that can be pulled down (and hopefully will merge soon), but I was running Juno, so I put together a quick backport to try out.  Note this patch isn’t quite complete; it needs some method signature updates for the other hypervisors in OpenStack and of course unit tests.  Anyway, you can check out what I started via this gist: https://gist.github.com/j-griffith/5e9be4e6548a03697dc5

I went ahead and applied the Juno patch from the gist above, restarted all of my Nova services and tried again.  Here’s how things went:

First, grab the image-id you want to use and create a Cinder Volume with it…


ubuntu@fluffy-master:~$ nova image-list
+--------------------------------------+---------------------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------------------+--------+--------+
| 40520350-25c7-4786-9586-5430d52d7663 | RedHat 6.5 Server | ACTIVE | |
| 8a69ae88-a729-4d68-a7e6-25048b1cf885 | RedHat 7.0 Server | ACTIVE | |
| afaa2feb-3a58-4e72-bebd-38f66bd0b611 | Ubuntu 14.04 Server | ACTIVE | |
| 28b77f27-a922-4627-8b1d-f0df627be907 | cirros-0.3.2-x86_64-uec | ACTIVE | |
| 63c173cd-3f5a-49ff-89ab-8c90a1701808 | cirros-0.3.2-x86_64-uec-kernel | ACTIVE | |
| e3aa2ab0-383c-4f75-bdc0-dbee86645ad5 | cirros-0.3.2-x86_64-uec-ramdisk | ACTIVE | |
+--------------------------------------+---------------------------------+--------+--------+

ubuntu@fluffy-master:~$ cinder create --image-id afaa2feb-3a58-4e72-bebd-38f66bd0b611 --display-name trusty-volume 10
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2014-12-08T17:34:50.000000 |
| description | None |
| encrypted | False |
| id | 6419112b-bf40-4994-809f-12dce25f5f1f |
| metadata | {} |
| name | trusty-volume |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 56dc791650074cc9a2858bd5053a7116 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 10 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| user_id | 3dcc1572d7594a4eb6b043b819e415de |
| volume_type | solidfire |
+---------------------------------------+--------------------------------------+

ubuntu@fluffy-master:~$ cinder list
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+
| 6419112b-bf40-4994-809f-12dce25f5f1f | available | trusty-volume | 10 | solidfire | true | |
+--------------------------------------+-----------+---------------+------+-------------+----------+-------------+

ubuntu@fluffy-master:~$ nova boot --flavor 2 --block-device-mapping vda=6419112b-bf40-4994-809f-12dce25f5f1f --key-name fluffycloud trusty-migrate
+--------------------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | rKL9sfs2vPs4 |
| config_drive | |
| created | 2014-12-08T17:41:39Z |
| flavor | m1.small (2) |
| hostId | |
| id | 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 |
| image | Attempt to boot from volume - no image supplied |
| key_name | fluffycloud |
| metadata | {} |
| name | trusty-migrate |
| os-extended-volumes:volumes_attached | [{"id": "6419112b-bf40-4994-809f-12dce25f5f1f"}] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 56dc791650074cc9a2858bd5053a7116 |
| updated | 2014-12-08T17:41:39Z |
| user_id | 3dcc1572d7594a4eb6b043b819e415de |
+--------------------------------------+--------------------------------------------------+

ubuntu@fluffy-master:~$ nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 | trusty-migrate | ACTIVE | - | Running | private=10.1.0.2 |
+--------------------------------------+----------------+--------+------------+-------------+------------------+

ubuntu@fluffy-master:~$ nova show 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | fluffy-compute |
| OS-EXT-SRV-ATTR:hypervisor_hostname | fluffy-compute |
| OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-12-08T17:41:47.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-12-08T17:41:39Z |
| flavor | m1.small (2) |
| hostId | 48adbabbab9babab4c8948d3478e9d2fae8ffb407bab1ca21a843981 |
| id | 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 |
| image | Attempt to boot from volume - no image supplied |
| key_name | fluffycloud |
| metadata | {} |
| name | trusty-migrate |
| os-extended-volumes:volumes_attached | [{"id": "6419112b-bf40-4994-809f-12dce25f5f1f"}] |
| private network | 10.1.0.2 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 56dc791650074cc9a2858bd5053a7116 |
| updated | 2014-12-08T17:41:47Z |
| user_id | 3dcc1572d7594a4eb6b043b819e415de |
+--------------------------------------+----------------------------------------------------------+

ubuntu@fluffy-master:~$ nova live-migration 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904
ubuntu@fluffy-master:~$ nova list
+--------------------------------------+----------------+-----------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+-----------+------------+-------------+------------------+
| 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 | trusty-migrate | MIGRATING | migrating | Running | private=10.1.0.2 |
+--------------------------------------+----------------+-----------+------------+-------------+------------------+

ubuntu@fluffy-master:~$ nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+------------------+
| 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 | trusty-migrate | ACTIVE | - | Running | private=10.1.0.2 |
+--------------------------------------+----------------+--------+------------+-------------+------------------+

ubuntu@fluffy-master:~$ nova show 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | fluffy-master |
| OS-EXT-SRV-ATTR:hypervisor_hostname | fluffy-master |
| OS-EXT-SRV-ATTR:instance_name | instance-00000002 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-STS:task_state | - |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-12-08T17:41:47.000000 |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| config_drive | |
| created | 2014-12-08T17:41:39Z |
| flavor | m1.small (2) |
| hostId | 97a1a99eaf8e1fab2ea7cb0be641ff55f2da5df4e68d0ecb62293e00 |
| id | 9afb79e1-f07a-4c98-9d6c-9ce66dcbf904 |
| image | Attempt to boot from volume - no image supplied |
| key_name | fluffycloud |
| metadata | {} |
| name | trusty-migrate |
| os-extended-volumes:volumes_attached | [{"id": "6419112b-bf40-4994-809f-12dce25f5f1f"}] |
| private network | 10.1.0.2 |
| progress | 0 |
| security_groups | default |
| status | ACTIVE |
| tenant_id | 56dc791650074cc9a2858bd5053a7116 |
| updated | 2014-12-08T17:41:47Z |
| user_id | 3dcc1572d7594a4eb6b043b819e415de |
+--------------------------------------+----------------------------------------------------------+

Pretty cool eh?  As always feedback is more than welcome.  Also, if you have specific OpenStack / Cinder related topics you might want to see more about, drop me a line or post a comment.

Thanks!!!

Remember the whole point

I’ve been involved with OpenStack for a few years now and I’ve seen some pretty incredible changes.  OpenStack is now something that I think almost any medium to large sized company with a Data Center has some level of interest in.  The growth, adoption and increased participation in the project has been nothing short of amazing.

With that growth there’s an increasing number of IT Vendors and Software Companies that are contributing code.  I’ve given multiple talks at Conferences and other events stating that I feel this is a win for Open Source and more specifically a win for OpenStack.  Having multiple competing vendors sharing ideas and contributing to make the overall project better is extremely powerful and in my opinion good.  I don’t want to go into one of my rants about the balance of contributions (i.e. contribution to the overall OpenStack project versus the piece that just allows you to market and sell product), so I won’t… I’ll save that for another day.  I’ve been around a while and I’m a realist; I’ll be the first to say that if the opportunity to market and profit from OpenStack wasn’t there, it wouldn’t have grown nearly as much as it has over the years.

I do want to try and remind people though that OpenStack and Open Source aren’t just about marketing and driving sales of some product.  There are various segments out there, like Research Institutions, Universities and nonprofits, that don’t have massive budgets and rely on Open Source in order to solve real problems.  Cloud computing is no exception; there are tons of organizations out there trying to solve problems ranging from finding cures for disease to understanding and predicting weather phenomena.  Whether it’s genetic sequencing, climate analysis or modeling viruses in order to help find a cure, there are numerous applications where platforms like OpenStack can and do make a difference.  I like to think that while I contribute to OpenStack as a career and to benefit my employer, I’m also at the same time able to contribute something that makes a difference.  Who knows, maybe some day some group will be using OpenStack that they pulled down from Github on their cheap commodity hardware and find a way to predict earthquakes, or unlock the genetic mysteries of some fatal disease and find a way to cure it, or even better prevent it.

The point is that all of this is about more than selling an OpenStack distribution, or enabling the sale of more of “Vendor-A’s” kit.  We have an opportunity to contribute to and build something that actually matters and makes a difference.  We can innovate and provide something truly advanced that doesn’t include a giant price tag to go along with it.  I’d ask that everybody involved in OpenStack just take a step back every once in a while… think about some of those segments of the community that aren’t necessarily associated with dollars.  How can we help them?  How can we provide something that enables them to make a difference, and in turn make a difference ourselves?

That’s really what Open Source is all about to me, that’s why I love it.  A while back at the Folsom Summit when we first started talking about doing Cinder, Vish made a statement that I think was really important.  I don’t remember the exact words, but it was something like “Let’s make sure we don’t forget our friends and users in the research community”.  I think that’s a great point and we’d all do well to remind ourselves of that once in a while.  It’s not just about selling product and making money, it’s about enabling others to do things they couldn’t do before by Open Sourcing the software and tools.

The SDS or Abstraction driver in Cinder debate… I give up

So first, I’d like to thank all of the folks who have provided quite a bit of feedback on this topic, both those from EMC who support the idea and all the rest of you who, for the most part, don’t (an interesting categorization of opinions, I think).

The latest from Vendors like Huawei and EMC is an easy trick to skirt around my viewpoint on this.  The solution they’ve figured out is this: if they have a device (or software package) that runs on commodity hardware and only works via VIPR (or whatever their abstraction layer is called), how could I possibly argue against it being submitted?  Well… I could, but quite frankly I’m tired of this topic, and I’ve realized beyond a shadow of a doubt that the motivation for involvement in OpenStack is much, much different for a lot of folks than what I may have hoped, or even than what I view my own motivation for involvement as being.

In the next few days I have some thoughts I’d like to get out there regarding Open Source, OpenStack, and my interpretation of what all of it means.  Stay tuned and if you’re remotely interested in my ramblings and silly opinions hopefully you’ll check it out and let me know what you think. 

In the meantime, you all win… I have better things to do (like finishing my patch to improve Cinder’s core data-path and control abstraction) than to continue arguing over something that in the end doesn’t really even matter.

The Problem with SDS under Cinder – Part 2

Ok… so there have been all sorts of comments and follow-up blogs since my initial posting.  The most recent came from my good friend Kenneth Hui: The Problem SDS Under OpenStack Cinder Solves.  I’m likely to say some “unpopular” things in this post and I really hope that Ken and the folks at EMC don’t take it the wrong way.  They’re a great group, I really enjoy debating with them, and even more, I enjoy talking about OpenStack and the common interests that we have.  I also really value Ken’s viewpoint and friendship; he’s a great guy and I have the utmost respect for him both personally and technically.

Ken makes some pretty good points (as does the rest of the EMC team).  Here’s the problem though: I don’t see VIPR (or any of the sudden influx of these storage abstraction solutions calling themselves SDS) really doing anything that unique or different.  Ken makes a great point about true SDS being a separation of the control and data planes and, most importantly, about that functionality having requirements on the storage platform itself.  I couldn’t agree more, and I don’t see how VIPR is offering me anything different here from what we’re already doing in Cinder; more importantly, I don’t see how it could.

Another issue I’ve been having is mixed messages from EMC on what VIPR is and what it supports.  In Atlanta I was told I was incorrect and that VIPR was strictly for consolidation of EMC products, but then I see things like this: Which Storage Platforms are supported by VIPR SRM.  Many of those devices are in fact devices that already have drivers in Cinder and provide the same abstraction that VIPR would provide.  Are you saying that VIPR has some magic that none of us know about that actually changes what can be done with the device?  Or that EMC and the VIPR team have discovered some hidden programmatic interface to NetApp, IBM, HDS and HP devices that the engineers who are full-time contributors to Cinder simply didn’t know about?  I’m failing to see what the value add is here for OpenStack (or anybody for that matter).  What is VIPR actually providing that Cinder doesn’t or can’t in terms of these Block devices?

Ken also mentions things in his post like “exposing unique features”, but I don’t understand how that is being done any differently with VIPR than it is in Cinder today.  My point here is that you’re using the same API, so how is it different?  Seems to me you’d still use the same mechanisms we use in Cinder today: Volume Types and capability-based scheduling.

Finally, one of the most common arguments I get from EMC on this topic is “Neutron does it this way”, well…  comparing block storage and Networking isn’t very fair in my opinion.  Networking is vastly different and in my opinion more complex.  That being said, Neutron has some challenges as a result of this model in my opinion (and others as well).  I’m not criticizing Neutron or the great folks that are working on it in any way at all; but I will say that it’s probably not a good argument to use here (at least not with me).

So, where are we?  With all of my ranting and gibberish that frankly probably nobody really cares about anyway, I’ve come pretty close to just accepting the fact that the VIPR driver is likely going to be included in Cinder for the Juno release.  I’m struggling with this because on the tail of it comes everybody else’s duplicate abstraction (Atlantis, ProphetStor and a list of others that may not be “public” yet).  I’m not sure how to handle this; I’d still prefer that they aren’t shipped in OpenStack but are instead provided as outside products for folks to use with OpenStack if they so desire.  Alternatively, I don’t think they should be called drivers; I think they should probably be plugins that are designed higher up in the architecture (sitting just below the API), so if you don’t want to use Cinder for anything other than an API, that’s your choice, go for it.

At any rate, the only good thing is that they say imitation is the most sincere form of flattery; if that’s the case all of the Cinder team (and the original nova-volume team) should be pretty flattered because it seems that there are a vast number of vendors out there that are imitating Cinder and trying to productize it.



The problem with SDS under Cinder

It’s been a great Summit here in Atlanta; we’ve had one of the most exciting and energetic Summits that I’ve had the opportunity to participate in so far.  Now that we’ve cleared the Exhibit hall, the booths are gone and most of the marketing and sales folks are on a plane heading home, it’s time for the Developer Community to get to work.


On the Cinder side we have quite a bit of work to do and I expect some great brainstorming in the Design Sessions on Thursday and Friday.  One topic in particular that is on the agenda is “What is a Cinder Driver”.  This session in my opinion is perhaps the most critical session we’ll have this week and the decisions that come out of it will have a significant impact on the future of Cinder.


The reason for this session is to try and get some consensus around how to deal with the most recent trend that I’m seeing in the Cinder community.  The latest trend from a number of the larger storage vendors is to implement a Software Defined Storage “driver”.  The design here is to implement a generalized driver that acts as an interface to an external Controller, which implements an abstraction very similar to what Cinder is already intended to provide.  This model duplicates a significant amount of code that exists in Cinder: the Controller handles scheduling, pooling and abstraction of whatever devices it supports.  For those that aren’t familiar, there are a number of examples that can be found on the web, but in short, if you’re familiar with Cinder you’ll recognize it as almost a duplicate pretty quickly.


From the perspective of a Vendor that builds this Software Package and implements the driver in Cinder, this has a number of advantages.  The primary advantage is of course that they have a Software Defined Storage implementation they can sell to customers.  In addition, from an OpenStack/Cinder perspective, this also provides an additional advantage.  These SDS implementations allow a single driver implementation in Cinder to send requests to the SDS Controller as if it were a single backend device, just like we do with a standard driver currently.  The difference however is that the SDS Controller then takes that command and routes it through its own scheduler and abstraction layer to any number of backend devices that are configured in its pool of back-end resources.  This means that a Vendor can support multiple backend devices but only needs to submit and maintain a single driver in OpenStack.


I have mixed views on this.  I certainly appreciate and understand the desire to pool resources and abstract them at a higher level, but given that this is the whole purpose of Cinder, it seems odd to me.  It’s a somewhat strange and, in my opinion, less than optimal design, mainly because the only apparent benefit is for the Vendor that is selling the SDS Controller, or possibly for the Vendor whose product is supported by the SDS Controller.  The latter advantage is troubling, because I view it as nothing more than a short-cut.  It’s a fact that contributing a driver to Cinder requires significant effort.  That effort encompasses not only the development of the driver itself (frankly that’s the easy part), but more importantly significant effort in terms of support, testing and, most of all, community participation.  The required effort is actually by design; I believe that requiring this effort helps deliver a better end product, and acts as a sort of filter between providers of products that actually want to enhance and advance OpenStack/Cinder and those that just want to use it as a sales tool.


There are multiple Storage Vendors proposing this model and design to the Cinder project this week.  My initial reaction has been to “-2” such proposals and to “reject” proposed BluePrints submitted to LaunchPad.  There seems to be a good deal of debate on this topic however.  Some (particularly those that are proposing this model) obviously feel strongly that this is a great solution.  Others feel it’s fine as long as they implement CI testing for every back-end they claim support for.  I don’t agree with either of these viewpoints.  I believe what makes the combination of Open Source and Vendor/Proprietary Software work is balance.  I define that balance as giving back at least as much improvement to the project as the benefit that you’re extracting from it.  Implementing your own abstraction is very much at odds with this philosophy.


Cinder and OpenStack are Open Source SOFTWARE projects.  I believe the key to the balance and relationship between the community and Vendors in the Cinder project is in fact that there’s been a clear line between the Software and the Hardware (Storage Backend).  By having this clear separation we’ve fostered a unique and effective balance: Vendors contribute open software to support their hardware products, which in turn also drives them to improve the overall project.  This model has worked extremely well over the last couple of years.  I’m very concerned that if you break this model and begin to merge not only proprietary hardware but also proprietary software into the project, it will cause significant damage to the Cinder project.  The result is that vendors can focus solely on their product and basically ignore the core Cinder project.  There’s no longer any real incentive for them to contribute, and even worse, the consumer loses the freedom of choice and we create a tiered version of the Cinder software: a higher tier for those that pay for a vendor’s software, and a lower tier for those that utilize the Open Source version.
As PTL I’ll continue to vote that we don’t go down this path and that we keep the current model we have in Cinder.  The beauty of Open Source however is that everybody has a voice and a vote; if the overwhelming opinion is that this new direction is the way to go, then so be it.  I do however believe this will prove to be damaging to OpenStack and in particular to Cinder.