Ansible - Automation remote or local?

If running an automation tool like Ansible to build your infrastructure stack in the cloud (e.g. AWS), is it enough to have your automation tool and build stack in separate regions/VPCs in the cloud, or does it make more sense to have your automation tool and scripts locally (own datacenter/machine)?

Both seem to be used, but I was just wondering if there was a best practice standard.

Tricho answered 20/9, 2015 at 19:7 Comment(0)

As a contrast to xeraa's good answer, we run as much as possible from inside AWS.

The real benefit we get from this is that it lets us use centralised Jenkins servers that run Ansible (and, in our case, Terraform for the actual AWS provisioning, with Ansible used just to configure EC2 instances and to run ad-hoc playbooks for administrative tasks).

We can then control access to these Jenkins servers through credentials and/or security groups/NACLs.

Doing it this way means we can limit the number of people who hold credentials that would let them build, or destroy, anything they like.

Ideally we'd only provide credentials to the Jenkins servers via IAM EC2 instance roles but we're not quite there yet.
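As a sketch of what that end state might look like: with an IAM instance role attached to the Jenkins EC2 instance, Ansible's AWS modules pick up temporary credentials from the instance metadata service automatically, so no access keys need to live on the box. The module invocation and all values below are illustrative, not the actual configuration described in this answer.

```yaml
# Hypothetical task run from a Jenkins server that has an IAM instance role.
# Note there is no aws_access_key/aws_secret_key anywhere: the underlying
# boto library falls back to the temporary credentials served by the EC2
# instance metadata endpoint.
- name: Launch an application server
  amazon.aws.ec2_instance:
    name: app-server-01              # illustrative name
    instance_type: t3.micro
    image_id: ami-0123456789abcdef0  # placeholder AMI
    region: eu-west-1
```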

One real positive is that our first-line/second-line support staff, who use Windows almost exclusively, get a friendly web GUI for managing things in the middle of the night: they can run the Jenkins jobs they have specifically been granted access to, which do things such as restarting a server/service or even rebuilding part of a VPC.

We have a separate "dev" account that developers can access from their own machines; that is where we build things out as we develop our Ansible (and Terraform) code base, before that code base is used in our test and production environments.

Fye answered 23/9, 2015 at 19:3 Comment(1)
There's a middle ground that I would like to add. You can still run everything from a centralised Jenkins, keeping the benefits you described, but instead of Jenkins running playbooks remotely you could have it run SSH commands on the target machines that trigger local playbooks (possibly using ansible-pull so you get a fresh copy every time). This way you get the best of both worlds.Becnel
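A rough sketch of that pull model, with the hostname, repository URL, and playbook name purely illustrative: Jenkins SSHes to the target and runs ansible-pull there, so the playbook executes locally on the machine being configured and a fresh checkout is fetched on every run.

```shell
# Run from the Jenkins server; all names below are hypothetical.
# ansible-pull clones the repo on the target, then runs the playbook locally.
ssh app-server-01 \
  'ansible-pull --url https://git.example.com/infra.git \
                --checkout main \
                local.yml'
```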

We run everything locally.

Plus

  • We test all playbooks (and our software) in a local Vagrant box, thus we need it locally anyway.
  • We don't need additional machines. Any dedicated control machine would itself need to be configured with Ansible, so at least one person needs a local installation anyway; otherwise you have a chicken-and-egg problem.
  • It is probably slightly faster, because there is one less network hop.

Minus

  • Everybody needs a local Ansible installation, which only works on Linux and Mac, not on Windows (Windows can only be a target).

Other considerations

  • For our Windows users, a Linux / Mac user creates a VM with Ansible (everything set up) and exports it as a base box. Then the Windows users can import that base box in Vagrant and only need to start it — everything is already installed. This includes Ansible so you can run everything from the VM.
  • At first we planned to put Ansible on our NAT instances (for the private VPC subnets). But then we would need one configuration to set up the VPC, security groups, and NAT instances, and another to run on the NAT instances and set up the rest of the infrastructure. We couldn't see any real benefit in that, so we now keep everything local.
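The base-box workflow from the first bullet can be sketched with standard Vagrant commands (the box and file names here are made up):

```shell
# On a Linux/Mac machine, after provisioning a VM that has Ansible installed:
# package the running VM into a reusable base box.
vagrant package --output ansible-base.box

# On the Windows machine: import the box and start it. All Ansible runs
# then happen from inside the VM.
vagrant box add ansible-base ansible-base.box
vagrant init ansible-base
vagrant up
```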

PS: Not sure if there is a definitive answer, but these are our arguments.

Adamek answered 20/9, 2015 at 23:35 Comment(2)
Nice answer. Out of curiosity, do you have any process for making sure that your code base is run against a dev/test environment before prod? Or do you just insist that people run things locally against Vagrant before running them on live servers?Fye
We have a Jenkins job doing a dry run and ansible-lint against every push. Otherwise it's the developer's responsibility, but there are only 5 of us with Ansible access and 8 overall.Adamek
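For reference, the dry-run-plus-lint gate described in this comment boils down to something like the following (the playbook and inventory paths are hypothetical):

```shell
# Syntax check, then a dry run: report what would change without changing it.
ansible-playbook site.yml -i inventory/dev --syntax-check
ansible-playbook site.yml -i inventory/dev --check --diff

# Static analysis of the playbook for common mistakes and style issues.
ansible-lint site.yml
```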