
Virtualization: Vagrant and Docker Awesomeness

Submitted by djdadm on Mon, 06/06/2016 - 00:36

I am going through a bit of rigor with Docker, and I'm glad.

While I am fairly new to this technology, much of what I have learned from my experience in production, development, support, and training environments in the industry also applies here.  Because I am in love with this technology, I'd like to share how I was hooked and why I am glad.


Years ago, I studied Advanced Systems and Databases at the graduate level at Stanford University.  Unfortunately, working off campus presented some unfair challenges.  Occasionally, we received our assignments on the day they were due and were forced to compete alone against teams of two, three, or four on-campus students, who often had roommates who had completed the course in previous school quarters.

In my Operating Systems course, we were given two weeks to learn C++ and get busy developing parts of an operating system kernel: time-sliced multiprocessing, device drivers, virtual memory, and such.  And it turned out that this particular professor was unusually fair.  While I was unable to complete all the projects single-handedly, in lieu of one of them, he allowed me to do a thorough research paper on new, cutting-edge operating systems.  One that I chose to focus on was Chorus, which was cutting edge back in the 1990s.  It was an attempt to take the microkernel concept made popular by Mach at Carnegie Mellon University and develop a distributed operating system built around actors and a tiny nucleus.

I received the highest final exam score in the class but only a B or B+ for the quarter.  Yet the integrity and fairness of the instructor and the quality of the class and its content kept me intrigued for years.

Someone told me that the professor for this course, Mendel Rosenblum, was a co-founder of VMware, and I grabbed a copy of the desktop version of it almost immediately.  I ran it under Linux and on Windows and was happy to blow people's minds with the fact that I did not have to dual-boot my operating systems any more.  I was hooked.  Then Parallels came out, and I used it until they focused more exclusively on the Mac.  Then Oracle's VirtualBox became my VM of choice for my laptop.  I packed my laptop with 24GB of RAM, a 1TB SSD for speed, and an i7 processor, and installed VistA EHR onto a VM.  It ran great.


Vagrant Makes Development Easier

I saw some job postings that asked that candidates have Vagrant experience.  So I had to find out what Vagrant was.  Interesting name.  I downloaded it, went through some tutorials, and thought this was going to be my awesome development environment.

With Vagrant, I could pull in a whole VM of my choosing from their hub.  They had VMs for Ruby on Rails.  (I had just gone through installing and configuring that myself and was going through tutorials on it.)

They had Tomcat.  I had just installed Tomcat for OpenEMR.  And I think they had VMs for PHP with Drupal and Wordpress and maybe Python with Django.

What excited me about this is that once I pulled in a VM image, I could fire it up.  But then with Vagrant, I could start up a second and third one and customize them.  Or throw them away and start up new ones.

What this meant to me is that I could take on consulting projects and fire up environments like those of my clients at will and not have to go through extensive gyrations trying to get everything set up.  Starting up a new VM would take minutes or seconds.
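The whole setup can be captured in a short Vagrantfile.  This is just an illustrative sketch: the box name, forwarded port, and memory size are placeholders, not a recommendation.

```ruby
# Vagrantfile -- a minimal sketch; box name and settings are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                      # base box pulled from the Vagrant hub
  config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048                                     # give the VM 2GB of RAM
  end
end
```

With that file in place, `vagrant up` brings the VM to life, `vagrant ssh` logs you in, and `vagrant destroy` throws it away so you can start fresh.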

Playing with Docker

While I was playing with Vagrantfiles, building VMs and tearing them down, watching my list of VMs grow in VirtualBox, I noticed an ad on the side asking why you would want to use Vagrant if you had Docker, or comparing Vagrant and Docker.

I watched YouTube videos on Docker as I had done with Vagrant, and I had to have it.  I first installed it onto Windows 10 on my laptop and went through some tutorials, installing and running a hello-world example, then an Ubuntu image, and fired up some containers from it instantly.
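Those first steps boil down to a couple of commands (assuming Docker is installed and the daemon is running):

```shell
docker run hello-world       # pulls the hello-world image and prints a greeting
docker run -it ubuntu bash   # pulls the ubuntu image and drops you into a shell inside a container
docker ps -a                 # lists containers, including ones that have exited
```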

I thought that since it was running on Linux anyway, it might be nice to install it onto my Cinnamon Linux VM.  But there was no version compatible with Cinnamon.  I downloaded the source, written in Go, but since I was new to Go, porting Docker to Cinnamon did not seem like an interesting thing to do at this point.  So, I looked for a Vagrant image for another version of Linux and ended up going back to the Windows version of Docker.

Getting Deeper into Docker and Dev/Ops

I was not satisfied taking the easy road of grabbing standard images for MySQL and other services and using them.  I wanted to develop my own images.  I wanted more of a development environment, a set of Dockerfiles that would generate images of my own.

But why?  Once you have created an image, can't you just use it?

Well, yes and no.   A customer calls.  He says his widget is saying, "Bleep" instead of "Bloop" when he runs it.  You promise to look into it.  You log into the container and spot the problem right away.  You stop the container briefly, save it to a new image, and test it.  Everything is working great.

So now you take that image and send it up to the hub so it can be deployed to staging and production.  You deploy it to staging, and it works great, so you carefully schedule getting it moved into production.  You deploy it first.  You activate it and test it in place, and it works great.  So now you add it into the load balancer and remove the old container.
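On the command line, that hot-fix workflow looks roughly like this (the container and image names here are illustrative, not from any real deployment):

```shell
docker exec -it widget bash                       # log into the running container and fix the problem by hand
docker stop widget                                # stop the container briefly
docker commit widget myrepo/widget:1.0.1-hotfix   # freeze the patched container into a new image
docker push myrepo/widget:1.0.1-hotfix            # send it up to the hub for staging and production
```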

The user tells everyone you're a hero.  Everyone is happy.

Upgrade, and Problem Revisited

Now Ubuntu comes out with a new version of the OS, and MySQL does the same.  So you grab your old Dockerfile, simply increase the version number for Ubuntu, and run your build script again, which just does a docker build.  You're happy.  Apparently the new version of Ubuntu Linux happens to use the later version of MySQL anyway.  So now you run your image to create a new container, and you test it.  It runs perfectly.

So, like before, you send this new image up to the hub with your new version number.  You deploy it to staging and to production, and everything is great for three or four weeks.

Then an irate customer returns to tell you that his problem came back.  His widget is going "Bleep" again, and this is embarrassing him before all his associates.  He wants to know why it got changed back and who changed it.

A different Dev/Ops person gets the problem and knows nothing about it.  He does the same thing.  He runs the container in Development, finds the problem, fixes it, exits, stops the container, and saves it as a new image.  He deploys it to the hub again and to production.  And the user is semi-happy and definitely guarded.

Then Ubuntu comes up with a new version of the OS, and the whole pain resurfaces.

If the problem had been fixed in the Dockerfile, the fix would have been carried forward to later versions of the image.

For that reason, I would advocate that one requirement for an image is that it be generated from a Dockerfile, with all fixes captured in the Dockerfile itself or in scripts associated with the build process.
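As a sketch of what that looks like, the fix lives in the build, so every rebuild carries it forward.  The base tag, packages, and fix script below are hypothetical names for illustration only:

```dockerfile
# Illustrative Dockerfile -- the fix travels with every rebuild.
FROM ubuntu:16.04                            # bump only this tag when a new OS version ships
RUN apt-get update && apt-get install -y mysql-server
COPY fix-bloop.sh /usr/local/bin/fix-bloop.sh
RUN /usr/local/bin/fix-bloop.sh              # the "Bleep"/"Bloop" fix, now version-controlled
CMD ["mysqld_safe"]
```

Because the fix script is checked in next to the Dockerfile, the next person who bumps the base image and reruns `docker build` gets the fix for free.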

Migrating to Micro-services

I want to migrate my websites and most likely my email servers to Docker containers and split them up into microservices, each container accomplishing a single task, whether it be nginx, MySQL, Postfix, Dovecot, phpMyAdmin, or the like.
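One common way to wire such single-task containers together is Docker Compose, which declares one service per container in a single file.  This is only a sketch; the service names are mine, the mail image is a hypothetical placeholder, and a real setup would handle passwords with more care:

```yaml
# docker-compose.yml -- illustrative sketch, one container per task
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder only; never ship a real password this way
  mail:
    image: my-postfix                # hypothetical image name for illustration
```

A single `docker-compose up -d` then starts the whole set, and each service can be rebuilt, replaced, or scaled on its own.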

I want it to be set up in a way that is optimal for production for my own benefit but also so I can help others in the near future as this technology is quickly becoming very popular for good reasons.

These containers are very lightweight.  The underlying image remains fixed as a stack of read-only layers.  When a container is created, a thin writable layer, or overlay, is placed above those read-only layers.  This way, creating a new container uses only a little disk space, and if you build five containers from the same image, it happens almost instantly.  It is not like starting a VM, where an operating system has to boot up; a container starts up about as fast as a normal process does.  Yet when you log in, you feel like you have a complete Linux installation at your fingertips.  And if you change anything in one container, the other containers do not see your changes, even if they are built on the same foundational images.
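You can see that copy-on-write isolation for yourself (assuming Docker is installed) by changing a file in one container and confirming that a sibling container built from the same image doesn't see it:

```shell
docker run -d --name a ubuntu sleep 1000   # two containers from the same read-only image
docker run -d --name b ubuntu sleep 1000
docker exec a touch /hello                 # write into container a's writable top layer
docker exec a ls /hello                    # the file exists in a
docker exec b ls /hello                    # errors: b's layer never saw the change
```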

For this reason, containers are much more lightweight than VMs.  They retain isolation from one another.  And they start immediately.  Furthermore, containers on the same host can be built from different Linux distributions, even though they all share the host's kernel.  For instance, you can have nginx running on an Ubuntu base image, MySQL on a CentOS base image, and another service on Alpine, all side by side.

Anyway, I'm quite excited about this technology and am determined to master it.