Vagrant & CoreOS clusters and networking


Updated: October 23, 2015

Getting warmer. A bunch of days down the time slider, I showcased Vagrant, a virtualization solution that aims big by being a nice, tidy wrapper around other, supposedly more complicated software like VirtualBox, KVM and friends. Moreover, it plays in the senior league by offering support for Docker containers and cloud server environments.

Expanding on this model, we will discuss a slightly different usecase, very much cloudy. CoreOS, which we have used in the previous exercise, is another player that tries to cash in on the cloud mania, and it offers some very neat cluster features and automation. Today, we will learn how to bring up a cluster and discuss the networking piece, which is somewhat neglected, and not very well explained even in the original documentation. So please follow me.

Discovery token setup

Unlike the last time, when I purposefully tried Vagrant on Windows through PowerShell, just to show that it can be done in many different ways, our current exercise will take place in Linux, Xubuntu Vivid to be more precise.

Following the official guide, setting up CoreOS for Vagrant isn't very difficult, but it also isn't as trivial as the online reference would have you believe. There's an obvious impatience to get started, and it comes through.

After you've cloned the repository, you'll have several files at your disposal, which you will need to edit a bit to be able to initialize and start Vagrant. The first thing you will need to do is rename the user-data.sample file to user-data and edit it. Most notably, the token piece:

#cloud-config

coreos:
  etcd:
    # generate a new token for each unique cluster from
    # https://discovery.etcd.io/new?size=3
    # specify the initial size of your cluster with ?size=X
    # WARNING: replace each time you 'vagrant destroy'
    discovery: https://discovery.etcd.io/<token>

Basically, CoreOS uses unique identifiers - called tokens - to help running instances identify one another. All hosts with the same token belong to the same swarm, so they can be controlled in a centralized manner using the etcd shared configuration and discovery service.

https://discovery.etcd.io/new?size=<size>

You can use your own, or generate a new one, by going to the URL above, and specifying the desired cluster size. The default configuration is three hosts. Write down the generated token and add it to the configuration file, e.g.:

https://discovery.etcd.io/21e4099c23b52a8403640c2d48cdca6f

We will see why this is important later on.
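As a sketch, the whole token step can be scripted; the token value below is just the example from above, and the file contents are trimmed to the essentials, so treat this as illustration rather than a full user-data:

```shell
#!/bin/sh
# Example token from above; generate your own at
# https://discovery.etcd.io/new?size=3
TOKEN="21e4099c23b52a8403640c2d48cdca6f"

# Write a minimal user-data with the discovery URL filled in
cat > user-data <<EOF
#cloud-config

coreos:
  etcd:
    discovery: https://discovery.etcd.io/${TOKEN}
EOF
```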

Cluster setup

The second piece is to define how many instances of CoreOS we want to run when Vagrant starts up. This is a simple configuration overall. Rename the config.rb.sample file, inside the cloned Git directory, to config.rb, and add the change there. We will try with four instances, just to be unique and special.

# Size of the CoreOS cluster created by Vagrant
$num_instances=4
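If you'd rather script this step too, writing the file directly works just as well; here's a minimal config.rb, assuming the cloned directory as the working directory:

```shell
#!/bin/sh
# Create a minimal config.rb asking Vagrant for four CoreOS instances
cat > config.rb <<'EOF'
# Size of the CoreOS cluster created by Vagrant
$num_instances=4
EOF
```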

Start cluster

Now, you can run Vagrant. If you've cloned the online repository, you will already have a default Vagrant configuration file in the current directory, so if you try vagrant init, the command will fail:

vagrant init
`Vagrantfile` already exists in this directory. Remove it before
running `vagrant init`.

Init error

If you don't have VirtualBox or another relevant virtualization software installed, you will get another error when you try to start the program. Make sure you resolve these tiny niggles beforehand.

vagrant up
The provider 'virtualbox' that was requested to back the machine
'core-01' is reporting that it isn't usable on this system. The
reason is shown below:

Vagrant could not detect VirtualBox! Make sure VirtualBox is properly installed. Vagrant uses the `VBoxManage` binary that ships with VirtualBox, and requires this to be available on the PATH. If VirtualBox is installed, please find the `VBoxManage` binary and add it to the PATH environmental variable.

Up error

If you've done this piece successfully, Vagrant should start and create the CoreOS instances one by one. This can take a while, and you will need good network bandwidth and plenty of memory to accommodate larger clusters.

Bringing machines up

vagrant up
Bringing machine 'core-01' up with 'virtualbox' provider...
Bringing machine 'core-02' up with 'virtualbox' provider...
Bringing machine 'core-03' up with 'virtualbox' provider...
Bringing machine 'core-04' up with 'virtualbox' provider...
==> core-01: Box 'coreos-alpha' could not be found. Attempting to find and install...
core-01: Box Provider: virtualbox
core-01: Box Version: >= 308.0.1

==> core-04: Importing base box 'coreos-alpha'...
==> core-04: Matching MAC address for NAT networking...
==> core-04: Checking if box 'coreos-alpha' is up to date...
==> core-04: Setting the name of the VM: coreos-vagrant_core-04_1431799934824_55775
==> core-04: Fixed port collision for 22 => 2222. Now on port 2202.
==> core-04: Clearing any previously set network interfaces...
==> core-04: Preparing network interfaces based on configuration...
    core-04: Adapter 1: nat
    core-04: Adapter 2: hostonly
==> core-04: Forwarding ports...
    core-04: 22 => 2202 (adapter 1)
==> core-04: Running 'pre-boot' VM customizations...
==> core-04: Booting VM...
==> core-04: Waiting for machine to boot. This may take a few minutes...
    core-04: SSH address: 127.0.0.1:2202
    core-04: SSH username: core
    core-04: SSH auth method: private key
    core-04: Warning: Connection timeout. Retrying...
==> core-04: Machine booted and ready!
==> core-04: Setting hostname...
==> core-04: Configuring and enabling network interfaces...
==> core-04: Running provisioner: file...
==> core-04: Running provisioner: shell...
    core-04: Running: inline script

Once the systems have all been created, you can run vagrant status to get all the information you need on the existing machines and their current states.

Running four instances
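For scripting purposes, the status output is easy to parse. Here's a sketch against a captured sample; in practice, pipe the live vagrant status command instead of the variable:

```shell
#!/bin/sh
# Captured `vagrant status` output, abridged to the machine lines
STATUS='Current machine states:

core-01                   running (virtualbox)
core-02                   running (virtualbox)
core-03                   running (virtualbox)
core-04                   running (virtualbox)'

# Print a name: state pair for every machine line
printf '%s\n' "$STATUS" | awk '/virtualbox/ {print $1 ": " $2}'
```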

Networking

Now comes the really interesting part. We have four virtual machines. But how do we connect to them? We had a similar dilemma when testing Docker, but then we found our way around, and here, the same logic applies.

First, it is possible to connect to each one of the running machines using the vagrant ssh command, which will take care of all the keys and whatnot. You must execute this from the cloned directory, otherwise you'll get an error:

vagrant ssh core-02 -- -A
A Vagrant environment or target machine is required to run this
command. Run `vagrant init` to create a new Vagrant environment. Or, get an ID of a target machine from `vagrant global-status` to run this command on. A final option is to change to a directory with a Vagrantfile and to try again.

But then, in the right directory, it should work fine:

SSH works

If you run ifconfig, you will notice that all your machines supposedly have the same IP address, and that you cannot route between them. Moreover, the VirtualBox host-only interface runs on a 172 segment, so this makes things a little more difficult. Again, similar to what we had with Docker.

Ifconfig

You can always use the VirtualBox internal networking range and adjust the firewall rules and routing accordingly. However, this method does not really tell you which one of your virtual machines uses which particular IP address, and it's not easily determined on the fly. In other words, you cannot just parse these numbers out of nowhere, and there are some more elegant ways of getting the right information.

Connect via localhost

If you remember, during startup, Vagrant set up SSH for each virtual machine to run on localhost, using different ports, starting with 2200 (or similar). So if you need to SSH into your clients, you can use this option:

ssh 127.0.0.1 -p 2202
The authenticity of host '[127.0.0.1]:2202 ([127.0.0.1]:2202)' can't be established.
ED25519 key fingerprint is 32:a0:57:58:b3:55:fc:03:c8:89:7d:7c:cc:6f:85:9d.
Are you sure you want to continue connecting (yes/no)?
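Rather than guessing which port belongs to which box, vagrant ssh-config will print the exact host, port, and key for every machine. Below is a sketch working off a saved sample of that output; the values are illustrative, and normally you would capture them with vagrant ssh-config > ssh_config:

```shell
#!/bin/sh
# A sample of what `vagrant ssh-config` emits for one machine
# (illustrative values); normally: vagrant ssh-config > ssh_config
cat > ssh_config <<'EOF'
Host core-02
  HostName 127.0.0.1
  User core
  Port 2200
  IdentityFile ~/.vagrant.d/insecure_private_key
EOF

# Extract the forwarded SSH port for core-02 from the saved config
awk '/^Host core-02$/ {found=1} found && /Port/ {print $2; exit}' ssh_config
```

With the file saved, a plain ssh -F ssh_config core-02 also works, no Vagrant wrapper needed.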

Connect via IP address anywhere

However, the above method does not work for VM to VM communication, and we need something else. This is where the discovery piece comes in really handy. Once your cluster is running, you can navigate to that URL again, and now it will be populated with some ugly but useful JSON stuff:

Discovery data

Notice the keys and values. Each entry has its IP address, e.g. 172.17.8.103, and you can parse it from this output. Excellent. Now we know the internal addresses, and we can use them to directly connect to our clients, and more importantly, allow them to communicate with their peers.

Ping neighbors
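The parsing itself is simple enough. Here's a sketch against an abbreviated sample of the discovery JSON; the structure loosely follows the etcd discovery service format, and the values are illustrative:

```shell
#!/bin/sh
# Abbreviated discovery JSON; in practice, fetch the real thing with:
# curl -s https://discovery.etcd.io/<your token>
JSON='{"node":{"nodes":[
{"value":"core-01=http://172.17.8.101:2380"},
{"value":"core-02=http://172.17.8.102:2380"},
{"value":"core-03=http://172.17.8.103:2380"}]}}'

# Pull just the peer IP addresses out of the value fields
printf '%s\n' "$JSON" | grep -oE '172\.17\.8\.[0-9]+' | sort -u
```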

SSH directly

The one small missing piece is that we do not know the core user password, so we will use the provided key, located inside the vagrant.d sub-directory. Add the key, and then you can connect seamlessly. We've seen this in our first guide.

Add SSH key

SSH, from one client to another

Port forwarding

We have not really dabbled much in the configuration piece for our boxes, but at this point, now that we know the IP addresses of our running instances, we can start getting really creative. For instance, port forwarding, which is of great value for virtual machines that run services.

The necessary changes need to be provided as directives inside the Vagrantfile. You can have multiple configurations for multiple setups - much like Dockerfiles. For example, forwarding the HTTPS port would be:

config.vm.network "forwarded_port", guest: 443, host: <host port>

If you've followed my Docker articles, then everything's very easy. Host port, client port, and Bob's your uncle. You can have multiple declarations for each box. Then, you can also manually configure firewall rules, if you need complete and utter control of your systems. Again, this is just a teaser, and we will spend more time fiddling with this in the future.

Now, of course, the next step is to plug Docker in, and with its similar default address range, which is probably not by accident, you can start playing with clusters, containers, parallel execution, and other fancy concepts. This is why Vagrant comes with plugins, which we will discuss in a separate article. Anyhow, once you've completed your work, just destroy the instances.

Destroy instances

More reading

Some extra good stuff from Dedoimedo's forges:

Here's the CoreOS quickstart guide; might be a bit heavy

A supervisord tutorial, which offers somewhat similar capabilities to etcd

KVM & VirtualBox side-by-side configuration tweak

Conclusion

Vagrant, as well as CoreOS, seem like interesting, wild, rebellious ideas. I am not yet convinced how much value they have in the business environment, although wrapper technologies that hide away the gory details of actual work seem to be burgeoning and becoming more popular all the time. Everyone wants frontend and orchestration tools; the only problem is the market is so fickle and volatile, there are no standards, and people use ugly stuff like Python, Ruby, and JSON. But never mind.

We've conquered another little piece of terror incognita, and you are somewhat more familiar with both Vagrant and CoreOS. Good, because our next piece will be to start playing with some of these cluster services and whatnot. And that's just the beginning.

Remember, I must do it all, so in the coming months, we will explore pretty much everything, including but not limited to Fleet, Kubernetes, Mesos, etcd, and many other cool projects. We will also tie CoreOS and others into cloud providers, fiddle with distributed and parallel filesystems, automation tools like Jenkins and Ansible, still more name dropping and buzz to make you want to vomit, and then some. Stay tuned.

Cheers.



