Synced folders lost when rebooting a Vagrant machine using the Ansible provisioner
Vagrant creates a development environment using VirtualBox and then provisions it using Ansible. As part of the provisioning, Ansible reboots the machine and then waits for SSH to come back up. This works as expected, but because the Vagrant machine is not being started by a "vagrant up" command, the synced folders are not mounted properly when the box comes back up from the reboot.

Running "vagrant reload" fixes the machine and mounts the shares again.

Is there a way of either telling Vagrant to reload the machine, or of doing all the bits 'n bobs that Vagrant would have done after a manual restart?

Simply running "sudo reboot" while SSH-ed into the Vagrant box produces the same problem.

Heathenry answered 27/5, 2014 at 16:43 Comment(0)

There is no way for Vagrant to know that the machine is being rebooted during the provisioning.

If possible, the best option is to avoid the reboot altogether. For example, kernel updates should already be applied when building the base box.

Another easy (but not very convenient) way is to handle it with log output or documentation, or with a wrapper script which invokes vagrant up && vagrant reload.
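A minimal sketch of such a wrapper, assuming it lives next to the Vagrantfile and is named vup.sh (both the name and the location are made up here); the snippet writes the script out and marks it executable:

```shell
# Hypothetical wrapper (vup.sh) around the usual workflow: provision first,
# then reload so Vagrant remounts the synced folders lost during the reboot.
cat > vup.sh <<'EOF'
#!/bin/sh
set -e          # stop if any step fails
vagrant up      # boots the box and runs the Ansible provisioner (which reboots)
vagrant reload  # restarts the box the "Vagrant way", remounting synced folders
EOF
chmod +x vup.sh
```

Team members then run ./vup.sh instead of vagrant up, so the extra reload step cannot be forgotten.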

And finally, you could write a plugin which injects all the needed mounting etc. actions into the Vagrant middleware stack after provisioning, but you would still need to work out how to let the plugin know that the machine has been rebooted. Another challenge is that this easily gets provider-specific.

Hither answered 28/5, 2014 at 7:25 Comment(0)

You should be able to add the filesystems to /etc/fstab to mount on boot.

Here's my example:

vagrant /vagrant    vboxsf  defaults    0   0
home_vagrant_src    /home/vagrant/src   vboxsf  defaults    0   0
home_vagrant_presenter-src  /home/vagrant/presenter-src vboxsf  defaults    0   0

Your vagrant directory should have a .vagrant hidden directory in it, and in there you should find a path to the "synced_folders" file (in my case: /vagrant/.vagrant/machines/default/virtualbox/synced_folders).

That file should help you figure out what the labels are and their mount points:

{
  "virtualbox": {
    "/home/vagrant/src": {
      "guestpath": "/home/vagrant/src",
      "hostpath": "/home/rkomorn/src",
      "disabled": false,
      "__vagrantfile": true
    },
    "/home/vagrant/presenter-src": {
      "guestpath": "/home/vagrant/presenter-src",
      "hostpath": "/home/presenter/src",
      "disabled": false,
      "__vagrantfile": true
    },
    "/vagrant": {
      "guestpath": "/vagrant",
      "hostpath": "/home/rkomorn/vagrant",
      "disabled": false,
      "__vagrantfile": true
    }
  }
}

It's not the easiest to read but, in Python terms, the labels appear to be the inner dictionary's keys with / translated to _ (e.g. the /home/vagrant/presenter-src key becomes the home_vagrant_presenter-src label).
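That translation can be sketched in shell; note this mapping is inferred from the example above, not documented Vagrant behavior, and path_to_label is a made-up helper name:

```shell
# Turn a guest mount path into the vboxsf share label observed above:
# drop the leading slash and replace the remaining slashes with underscores.
path_to_label() {
  printf '%s\n' "$1" | sed 's|^/||; s|/|_|g'
}

path_to_label /vagrant                     # vagrant
path_to_label /home/vagrant/presenter-src  # home_vagrant_presenter-src
```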

I'm actually not sure why vagrant doesn't just use /etc/fstab for shared folders but I'm guessing there's a good reason.

Trakas answered 23/5, 2018 at 15:29 Comment(0)

In case anyone else runs into this issue and finds this question like I did, here's how I worked around it:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "..."

  # create a shared folder for the top-level project directory at /vagrant
  # normally already configured but for some reason it isn't on these boxes
  # https://www.vagrantup.com/docs/synced-folders/virtualbox.html#automount
  # http://www.virtualbox.org/manual/ch04.html#sf_mount_auto
  config.vm.synced_folder ".", "/mnt/vagrant", id: "vagrant", automount: true
  config.vm.provision "shell", inline: "usermod -a -G vboxsf vagrant"
  config.vm.provision "shell", inline: "ln -sfT /media/sf_vagrant /vagrant"

  # More settings omitted...
end

There's a few parts to this solution:

  1. The first line assigns a specific id of vagrant to the shared folder. This is important because VirtualBox's automatic mount functionality uses the share name as the mount point (/media/sf_<id> on Linux guests). It also mounts the folder at /mnt/vagrant to keep it out of the way. Ideally you'd pick a more obscure location that's present on all of your VMs, or just document that it shouldn't be used.
  2. The second line adds the vagrant user in the virtual machine to the vboxsf group. This is necessary to access files inside the automounted share because the guest utilities mount it with root:vboxsf ownership. They also set appropriate file and directory modes, so it works fine in practice, but you do need to be a member of the vboxsf group.
  3. The third line creates a symbolic link from the automatic mount location at /media/sf_vagrant to /vagrant, the usual place users expect the shared folder.

This solution has the following benefits:

  • The share at /media/sf_vagrant is automatically mounted by the VirtualBox guest utilities after a reboot, so /vagrant should always be available.
  • It does not require installing plugins or using any outside tools.

It has the following drawbacks:

  • Potential for unexpected behavior if users find and use the /mnt/vagrant mount. That mount will only be present if the virtual machine was most recently booted or rebooted through the Vagrant CLI; otherwise it will not be present.
  • It requires a relatively recent version of VirtualBox and Vagrant.

EDIT: Added -T option to ln to avoid the corner case where it creates /vagrant/sf_vagrant as a symlink.

Rosenquist answered 16/4, 2020 at 23:23 Comment(0)

I had the same issue. This is what I had in my /etc/fstab:

#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
vagrant_data /vagrant_data vboxsf uid=1000,gid=1000,_netdev 0 0
vagrant /vagrant vboxsf uid=1000,gid=1000,_netdev 0 0
#VAGRANT-END

So if you see the fstab entries are still there, all you have to do is run sudo mount -a to trigger the mounts again. Or you can copy these lines.

Delk answered 2/11, 2021 at 20:7 Comment(1)
I truly believe this should be the accepted answer; you can simply run sudo mount -a. – Hundredfold

Split your provisioners into two separate steps and use the vagrant-reload plugin (installed with vagrant plugin install vagrant-reload) as an additional provisioner in between.

Example Vagrantfile:

  Vagrant.configure("2") do |config|
    config.vm.provision "Step 1 - requires reboot", type: "shell", path: "scripts/part1.sh"
    config.vm.provision :reload
    config.vm.provision "Step 2 - happens after reboot", type: "shell", path: "scripts/part2.sh"
  end
Cutie answered 21/3, 2017 at 18:57 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.