Virtualization, Vagrant, and Docker for Development
How would you like to take virtualization to a new level? To create virtual machines, each perfect for whatever software you are trying to develop? What if you messed up big time and wished you could just get your computer back to the way it was before you did anything and start over? What if you could do that in a few seconds or minutes at most?
Vagrant can help you do that. That is, if you want to create whole virtual machines.
But what if you wanted to create several lightweight servers that shared an operating system image?
What if you wanted to start a new server as fast as a single process? And what if you could save just the parts you changed into a new image and upload that image to a repository, so you could drop it into a test system and fire up an instance in a flash? How would you like to load these images into your production environment, or into multiple production systems for redundancy?
Docker can help you do that. It’s pretty cool when you can do development on a laptop, push the result to a registry like Docker Hub, and then pull it into a test system and then into production.
Years ago, I studied Advanced Systems and Databases at the graduate level at Stanford University. In my Operating Systems course, we had 2 weeks to learn C++ and get busy developing parts of an operating system kernel. Someone told me that the professor for this course, Mendel Rosenblum, was the one who invented VMware, and I grabbed a copy of the desktop version of it almost immediately.
I ran it under Linux and on Windows and was happy I did not have to dual-boot my operating systems any more. Then Parallels came out, and I used it until they focused more exclusively on the Mac. Oracle’s VirtualBox later became my VM of choice for my laptop, which I loaded with 24GB of RAM, a 1TB SSD for speed, and an i7 processor. In one VM, I installed VistA, an enterprise-level EHR system for managing large health care systems. I ran the server software and database on Linux in a virtual machine while running the VistA client software on the host operating system. And it worked great.
Vagrant Simplifies Development
I saw job postings asking for Vagrant experience. I had to find out what Vagrant was. Interesting name. I downloaded Vagrant and went through some tutorials and knew this was my ultimate development environment.
With Vagrant, I could pull in a whole VM of my choosing from their hub. They had VMs for Ruby on Rails. They had Tomcat. I had just installed Tomcat for OpenEMR. And I think they had VMs for PHP with Drupal and WordPress and maybe Python with Django.
What excited me about this is that once I pulled in a VM image, I could fire it up. But then with Vagrant, I could start up a second and third one and customize them. Or throw them away and start up new ones.
What this meant to me is that I could take on consulting projects and fire up environments like those of my clients at will and not have to go through extensive gyrations trying to get everything set up. Starting up a new VM would take minutes or seconds.
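The whole Vagrant workflow boils down to a few commands. A minimal sketch (the box name hashicorp/bionic64 is just an example from Vagrant’s public catalog; any box works the same way):

```shell
vagrant init hashicorp/bionic64   # writes a minimal Vagrantfile for that box
vagrant up                        # downloads the box (first time only) and boots the VM
vagrant ssh                       # log in to the running VM
vagrant destroy -f                # throw it away; vagrant up gives you a fresh one
```

Because the box download is cached, the second and third VMs come up in a fraction of the time the first one took.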
Playing with Docker
While I was playing with Vagrant, building VMs and tearing them down, I noticed an ad asking why you would want to use Vagrant if you had Docker. I also saw an article comparing Vagrant and Docker.
I watched YouTube videos on Docker as I had done with Vagrant, and I had to have it. I first installed it onto Windows 10 on my laptop and went through some tutorials, running a hello-world example, then pulling an Ubuntu image and firing up containers from it instantly.
I thought that since Docker was running on Linux anyway, it might be nice to install it onto my Cinnamon Linux VM. But there was no build compatible with that distribution. I downloaded the source, written in Go, but porting Docker myself did not seem like an interesting thing to do at that point. So I looked for a Vagrant image for another version of Linux and ended up going back to the Windows version of Docker.
Getting Deeper into Docker and Dev/Ops
I was not satisfied taking the easy road, grabbing standard images for MySQL and other services and using them. I wanted to develop my own images. I wanted more of a development environment, a set of Dockerfiles that would generate images of my own.
But why? Once you have created an image, can’t you just use it?
Well, yes and no. A customer calls. He says his widget is saying, “Bleep” instead of “Bloop” when he runs it. You promise to look into it. You log into the container and spot the problem right away. You stop the container briefly, save it to a new image, and test it. Everything is working great.
So now you take that image and send it up to the hub so it can be deployed to staging and production. You deploy it to staging, and it works great, so you carefully schedule getting it moved into production. You deploy it first. You activate it and test it in place, and it works great. So now you add it into the load balancer and remove the old container.
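That patch-in-place flow maps onto a handful of Docker commands. A sketch, where the container name widget and the repository myrepo/widget are made up for illustration:

```shell
docker stop widget
docker commit widget myrepo/widget:1.0.1      # snapshot the edited container as a new image
docker start widget                           # bring the old container back while you verify
docker run -d --name widget-test myrepo/widget:1.0.1
docker push myrepo/widget:1.0.1               # upload so staging and production can pull it
```

Note that docker commit captures only the container’s writable layer on top of the original image, which is exactly the "just the parts you changed" idea.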
The user tells everyone you’re a hero. Everyone is happy.
Upgrade, and Problem Revisited
Now Ubuntu comes out with a new version of the OS, and MySQL does the same. So you grab your old Dockerfile, simply increase the version number for Ubuntu, and run your build script again, which just does a docker build. You’re happy. The new version of Ubuntu happens to ship the later version of MySQL anyway. So you run your image to create a new container and you test it. It runs perfectly.
So, like before, you send this new image up to the hub with your new version number. You deploy it to staging and to production, and everything is great for three or four weeks.
Then an irate customer returns to tell you that his problem came back. His widget is going “Bleep” again, and this is embarrassing him before all his associates. He wants to know why it got changed back and who changed it.
A different Dev/Ops person gets the problem and knows nothing about it. He does the same thing. He runs the container in Development, finds the problem, fixes it, exits, stops the container, and saves it as a new image. He deploys it to the hub again and to production. And the user is semi-happy and definitely guarded.
Then Ubuntu comes up with a new version of the OS, and the whole pain resurfaces.
If the problem had been fixed in the Dockerfile, the fix would have been carried forward to later versions of the image.
For that reason, I would advocate that one acceptance test for an image is that it can be regenerated entirely from a Dockerfile, with every fix captured in the Dockerfile itself or in the scripts the build invokes.
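Here is what that looks like in practice: the fix lives in the build inputs, so every rebuild carries it forward. Everything in this sketch is hypothetical — the base image tag, the repository name, and the fix-widget.sh script that stands in for the “Bloop” fix:

```shell
cat > Dockerfile <<'EOF'
FROM ubuntu:24.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
COPY fix-widget.sh /usr/local/bin/fix-widget.sh
# The "Bloop" fix is applied here, inside the build,
# so bumping the FROM line and rebuilding keeps the fix.
RUN /usr/local/bin/fix-widget.sh
EOF
docker build -t myrepo/widget:2.0.0 .
```

When the next Ubuntu release comes out, the only change is the FROM line, and the rebuilt image still contains the fix.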
Migrating to Micro-services
I want to migrate my websites, and most likely my email servers, to Docker containers and split them up into microservices, each container accomplishing a single task, whether that is nginx, MySQL, Postfix, Dovecot, phpMyAdmin, or the like.
I want the setup to be optimal for production, both for my own benefit and so I can help others in the near future, as this technology is quickly becoming very popular for good reasons.
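One common way to wire up a one-task-per-container stack like this is Docker Compose. A minimal sketch, with the caveat that the service layout and environment values here are assumptions, not a tested production stack:

```shell
cat > docker-compose.yml <<'EOF'
services:
  web:
    image: nginx            # one container, one job: serving HTTP
    ports:
      - "80:80"
  db:
    image: mysql            # the database lives in its own container
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder; use a secret in production
  # postfix, dovecot, phpmyadmin, etc. would each be further services here
EOF
docker compose up -d        # starts every service, each in its own container
```

Each service can then be rebuilt, upgraded, or scaled independently of the others.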
These containers are very lightweight. The underlying image remains fixed as a read-only file, or a stack of read-only layers. When a container is created, a thin writable layer, an overlay, is placed above those read-only layers. This way, a new container uses only a little disk space, and if you start five containers from the same image, it happens almost instantly. It is not like starting a VM, where an operating system has to boot. A container starts about as fast as a normal process does. Yet when you log in, you feel like you have a complete Linux installation at your fingertips. And if you change anything in one container, the other containers do not see your changes, even if they are built on the same foundational images.
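You can see that isolation directly: two containers from the same image, where a change in one is invisible to the other because each gets its own writable layer.

```shell
# Container "a" writes a file into its own writable layer.
docker run --name a ubuntu bash -c 'echo changed > /root/note'
# Container "b", from the same ubuntu image, lists an empty /root:
# it never sees a's file, because a's layer belongs to a alone.
docker run --name b ubuntu ls /root
docker rm a b
```

The shared read-only layers are what make the fifth container as cheap to start as the first.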
For this reason, containers are much more lightweight than VMs. They retain isolation from one another, and they start immediately. Furthermore, containers built from different Linux distributions can run side by side on the same host: you can have nginx in an Ubuntu-based container and MySQL in a CentOS-based container. One caveat, though: all containers share the host’s kernel, so unlike VMs, you cannot mix in, say, a FreeBSD container on a Linux host.
Anyway, I’m quite excited about this technology and am determined to master it.