Kubernetes is a Container Cluster Manager from Google, which basically means that Kubernetes orchestrates many services running across plenty of Docker containers. Google supports several ways to run Kubernetes, and luckily Vagrant is one of them.
Note: The following is a shortened and fixed version of the official getting started guide, written specifically for a Fedora host and the libvirt provider.
You can run the installation scripts directly on your computer or inside a virtual machine, which I highly recommend. I already wrote about setting up nested virtualization with Vagrant and KVM before.
First you need to install Vagrant with libvirt support and other dependencies on Fedora. You can also use the new Vagrant assistant to install vagrant-libvirt and set up the polkit rules. I am including which and wget as they might not be available in some boxes and the Google installation script needs them.
On Fedora 22 run:
# dnf install vagrant-libvirt vagrant-libvirt-doc which wget -y
# cp /usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.26/polkit/10-vagrant-libvirt.rules /usr/share/polkit-1/rules.d/
# systemctl start libvirtd
On Fedora 21 run:
# yum install vagrant-libvirt vagrant-libvirt-doc which wget -y
# cp /usr/share/vagrant/gems/doc/vagrant-libvirt-0.0.24/polkit/10-vagrant-libvirt.rules /usr/share/polkit-1/rules.d/
# systemctl start libvirtd
Once you have that ready, we can just start the Google installation script:
export KUBERNETES_PROVIDER=vagrant && export VAGRANT_DEFAULT_PROVIDER=libvirt && curl -sS https://get.k8s.io | bash
First we set the Vagrant provider for the installation script and the libvirt provider for Vagrant. Then we run the script provided by Google. Unfortunately that failed for me with something like "Can’t find the necessary components for the libvirt vagrant provider, please fix and retry." Amusingly enough it does not tell you what’s actually wrong, so I looked into the downloaded sources and found the issue:
$ vi ./kubernetes/cluster/vagrant/util.sh
..
  local providers=(
    # Format is:
    #   provider_ctl_executable vagrant_provider_name vagrant_provider_plugin_re
    # either provider_ctl_executable or vagrant_provider_plugin_re can
    # be blank (i.e., '') if none is needed by Vagrant (see, e.g.,
    # virtualbox entry)
    vmrun vmware_fusion vagrant-vmware-fusion
    vmrun vmware_workstation vagrant-vmware-workstation
    prlctl parallels vagrant-parallels
    VBoxManage virtualbox ''
  )
..
No libvirt there. So I went and added a line for it, which seemed to fix the issue:
$ vi ./kubernetes/cluster/vagrant/util.sh
..
  local providers=(
    # Format is:
    #   provider_ctl_executable vagrant_provider_name vagrant_provider_plugin_re
    # either provider_ctl_executable or vagrant_provider_plugin_re can
    # be blank (i.e., '') if none is needed by Vagrant (see, e.g.,
    # virtualbox entry)
    vmrun vmware_fusion vagrant-vmware-fusion
    vmrun vmware_workstation vagrant-vmware-workstation
    prlctl parallels vagrant-parallels
    VBoxManage virtualbox ''
    virsh libvirt vagrant-libvirt
  )
..
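If you would rather not open an editor, the same fix can be scripted. The following is a sketch of my own (the function name is made up, and it assumes GNU sed and the providers array exactly as shown above):

```shell
# Sketch: register the libvirt provider in util.sh from the command line.
# Assumes GNU sed and that the providers array ends with the virtualbox entry.
add_libvirt_provider() {
  local f="$1"
  # Do nothing if the libvirt entry is already there (keeps it idempotent)
  grep -q 'virsh libvirt vagrant-libvirt' "$f" && return 0
  # Append the libvirt line right after the virtualbox entry;
  # indentation inside a bash array is purely cosmetic
  sed -i '/VBoxManage virtualbox/a virsh libvirt vagrant-libvirt' "$f"
}

if [ -f ./kubernetes/cluster/vagrant/util.sh ]; then
  add_libvirt_provider ./kubernetes/cluster/vagrant/util.sh
fi
```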
So let’s continue. But in order not to run the whole installation script again, we only run kube-up.sh from the Kubernetes sources that were downloaded by the above installation script:

$ cd kubernetes
$ ./cluster/kube-up.sh

kube-up.sh is a script that replaces the vagrant up call for us and should spin up and configure two virtual machines: one for the Kubernetes master and one for our first minion. Once it finishes we can list the VMs with the familiar Vagrant command:
$ vagrant status
Current machine states:

master                    running (libvirt)
minion-1                  running (libvirt)
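If you plan to script around this, a tiny helper can confirm that every machine vagrant status reports is actually running. This is a hypothetical sketch of mine, not part of the official tooling:

```shell
# Hypothetical helper: read `vagrant status` output on stdin and succeed
# only when every listed libvirt VM is in the "running" state.
all_running() {
  # Status lines look like "master   running (libvirt)"; flag anything
  # that is listed but not running, then exit with that flag
  awk '/\(libvirt\)/ && $2 != "running" { bad = 1 } END { exit bad }'
}

# Usage (only meaningful inside the Vagrant project directory):
# vagrant status | all_running && echo "both VMs are up"
```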
Now we can vagrant ssh master and explore our master server by running various kubectl calls to list pods, nodes, services and other resources. We can also use the docker ps command to see which containers are running. We can even find out what’s running on our minion-1 server without explicitly logging in there:
# salt '*minion-1' cmd.run 'docker ps'
Nice, everything seems to work and we can start playing with Kubernetes.