ansible ssh prompt known_hosts issue

I'm running an Ansible playbook and it works fine on one machine.

On a new machine, when I try it for the first time, I get the following error.

17:04:34 PLAY [appservers] ************************************************************* 
17:04:34 
17:04:34 GATHERING FACTS *************************************************************** 
17:04:34 fatal: [server02.cit.product-ref.dev] => {'msg': "FAILED: (22, 'Invalid argument')", 'failed': True}
17:04:34 fatal: [server01.cit.product-ref.dev] => {'msg': "FAILED: (22, 'Invalid argument')", 'failed': True}
17:04:34 
17:04:34 TASK: [common | remove old ansible-tmp-*] ************************************* 
17:04:34 FATAL: no hosts matched or all hosts have already failed -- aborting
17:04:34 
17:04:34 
17:04:34 PLAY RECAP ******************************************************************** 
17:04:34            to retry, use: --limit @/var/lib/jenkins/site.retry
17:04:34 
17:04:34 server01.cit.product-ref.dev      : ok=0    changed=0    unreachable=1    failed=0   
17:04:34 server02.cit.product-ref.dev      : ok=0    changed=0    unreachable=1    failed=0   
17:04:34 
17:04:34 Build step 'Execute shell' marked build as failure
17:04:34 Finished: FAILURE

This error can be resolved if I first go to the source machine (from which I'm running the Ansible playbook), manually SSH to the target machine (as the given user), and answer "yes" to the prompt so that a known_hosts entry is created.

If I then run the same Ansible playbook a second time, it works without an error.

So, how can I suppress the prompt that SSH shows when it creates the known_hosts entry for the first time for a given user (file known_hosts in the ~/.ssh folder)?

I found I can do this with the following entries in the ~/.ssh/config file.

~/.ssh/config

# For vapp virtual machines
Host *
  StrictHostKeyChecking no
  UserKnownHostsFile=/dev/null
  User kobaloki
  LogLevel ERROR

i.e. if I place the above code in the user's ~/.ssh/config file on the source machine and run the Ansible playbook for the first time, I won't be prompted to enter "yes" and the playbook will run successfully (without requiring the user to manually create a known_hosts entry on the source machine for the target/remote machine).

My questions:

  1. What security issues should I take care of if I go the ~/.ssh/config way?
  2. How can I pass the settings from the config file as parameters/options to Ansible on the command line, so that it runs the first time on a new machine without prompting for (or depending on) a known_hosts entry on the source machine for the target machine?

Farron answered 13/5, 2015 at 22:15 Comment(0)

The Ansible docs have a section on this. Quoting:

Ansible has host key checking enabled by default.

If a host is reinstalled and has a different key in ‘known_hosts’, this will result in an error message until corrected. If a host is not initially in ‘known_hosts’ this will result in prompting for confirmation of the key, which results in an interactive experience if using Ansible, from say, cron. You might not want this.

If you understand the implications and wish to disable this behavior, you can do so by editing /etc/ansible/ansible.cfg or ~/.ansible.cfg:

[defaults]
host_key_checking = False

Alternatively this can be set by the ANSIBLE_HOST_KEY_CHECKING environment variable:

$ export ANSIBLE_HOST_KEY_CHECKING=False

Also note that host key checking in paramiko mode is reasonably slow, therefore switching to ‘ssh’ is also recommended when using this feature.
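
For illustration, the environment variable can also be set for just a single run; a minimal sketch, assuming a playbook named site.yml and an inventory file named hosts (both hypothetical names):

$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts site.yml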

Willie answered 14/5, 2015 at 0:5 Comment(7)
It works. If I don't want to change ~/.ansible.cfg, I can still use that export variable. The only thing I'm still looking at: if my inventory file has a host as x.x.x.x, i.e. an IP, Ansible is able to run the playbook successfully from the source machine (where Ansible is running), BUT if I use hostname.company.fqdn.or.short.hostname.com in the inventory file, then Ansible is not able to resolve the hostname on the source machine. If I add my DNS server entry (nameserver 10.11.12.133) to /etc/resolv.conf, then it works fine. Any idea how I can pass a DNS server entry to Ansible at the command line?Farron
One can try this too: ansible-playbook -e 'host_key_checking=False' yourplaybook.ymlFarron
Ansible uses your host's DNS resolution, as you noted. If the hostname resolves to an IP it will work, but Ansible doesn't (and shouldn't) have another mechanism for this. One hack I use for this sort of thing is to put an entry in /etc/hosts if the system is not in the DNS.Willie
Agree, but I have 20+ VMs in VCloud. I'll see if I can have someone update /etc/resolv.conf and add the nameserver x.x.x.x DNS server entry in it.Farron
Just for the record, see my answer (below) https://mcmap.net/q/327272/-ansible-ssh-prompt-known_hosts-issue which provides you with a way to update the local known_hosts file.Kettledrum
another option is to set this in ansible.cfg: "ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" This has the added benefit of not bloating your host key file if you're, say, launching EC2 instances that won't exist for a long timeVarityper
yes u got it indeedTropology

To update the local known_hosts file, I ended up using a combination of ssh-keyscan (with dig to resolve a hostname to an IP address) and the Ansible known_hosts module, as follows (filename ssh-known_hosts.yml):

- name: Store known hosts of 'all' the hosts in the inventory file
  hosts: localhost
  connection: local

  vars:
    ssh_known_hosts_command: "ssh-keyscan -T 10"
    ssh_known_hosts_file: "{{ lookup('env','HOME') + '/.ssh/known_hosts' }}"
    ssh_known_hosts: "{{ groups['all'] }}"

  tasks:

  - name: For each host, scan for its ssh public key
    shell: "ssh-keyscan {{ item }},`dig +short {{ item }}`"
    with_items: "{{ ssh_known_hosts }}"
    register: ssh_known_host_results
    ignore_errors: yes

  - name: Add/update the public key in the '{{ ssh_known_hosts_file }}'
    known_hosts:
      name: "{{ item.item }}"
      key: "{{ item.stdout }}"
      path: "{{ ssh_known_hosts_file }}"
    with_items: "{{ ssh_known_host_results.results }}"

To execute this playbook, run:

ANSIBLE_HOST_KEY_CHECKING=false ansible-playbook path/to/the/yml/above/ssh-known_hosts.yml

As a result, for each host in the inventory, keys for all supported algorithms will be added/updated in the known_hosts file under a hostname,ipaddress pair record, such as:

atlanta1.my.com,10.0.5.2 ecdsa-sha2-nistp256 AAAAEjZHN ... NobYTIGgtbdv3K+w=
atlanta1.my.com,10.0.5.2 ssh-rsa AAAAB3NaC1y ... JTyWisGpFeRB+VTKQ7
atlanta1.my.com,10.0.5.2 ssh-ed25519 AAAAC3NaCZD ... UteryYr
denver8.my.com,10.2.13.3 ssh-rsa AAAAB3NFC2 ... 3tGDQDSfJD
...

(Provided the inventory file looks like:

[master]
atlanta1.my.com
atlanta2.my.com

[slave]
denver1.my.com
denver8.my.com

)

As opposed to Xiong's answer, this properly handles the content of the known_hosts file (existing entries are updated rather than blindly appended).

This play is especially helpful in a virtualized environment where the target hosts get re-imaged (and thus their SSH public keys change).

Kettledrum answered 22/8, 2016 at 15:45 Comment(4)
You can add the SSH port to ssh-keyscan, which is available as {{ hostvars[item]['ansible_port'] }}Eleni
It's best to use quote filter for shell arguments.Tera
If anyone's wondering, I used this for ssh-keyscan: "ssh-keyscan {{ item.value }} -T 10 2> /dev/null | grep ssh-ed25519" and I also used no_log: "true" and changed_when: false to hide all the horrible output.Alliterate
Terrific! Works for me! I took out the dig part because we only use IPs on my network. I also added no_log: true to prevent printing the pubkey to the console. Thank you!Avron

Disabling host key checking entirely is a bad idea from a security perspective, since it opens you up to man-in-the-middle attacks.

If you can assume the current network isn't compromised (that is, when you SSH to the machine for the first time and are presented with a key, that key is in fact the machine's and not an attacker's), then you can use ssh-keyscan and the shell module to add the new servers' keys to your known_hosts file (edit: Stepan's answer does this a better way):

- name: accept new ssh fingerprints
  shell: ssh-keyscan -H {{ item.public_ip }} >> ~/.ssh/known_hosts
  with_items: "{{ ec2.instances }}"

(Demonstrated here as you would find after ec2 provisioning.)
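
For an idempotent variant that avoids appending duplicate lines, the same loop can feed the known_hosts module instead of a shell redirect; a minimal sketch, assuming the play runs on localhost after provisioning and that ec2.instances is the same registered variable as above:

- name: accept new ssh fingerprints (idempotent variant)
  known_hosts:
    # ssh-keyscan prints "host keytype key" lines, which is the format 'key' expects
    name: "{{ item.public_ip }}"
    key: "{{ lookup('pipe', 'ssh-keyscan ' + item.public_ip) }}"
  with_items: "{{ ec2.instances }}"

Re-running this updates an existing entry for the host instead of adding a second copy.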

Photolithography answered 4/2, 2016 at 23:43 Comment(5)
Agree, Would be good to use this with: become_user "{{ someUser }}"Farron
Definitely better and safer way. There is also a module to do that docs.ansible.com/ansible/known_hosts_module.htmlMlawsky
See my answer https://mcmap.net/q/327272/-ansible-ssh-prompt-known_hosts-issue (below) where I use the known_hosts module as opposed to a simple concatenation to the known_hosts file as suggested here.Kettledrum
running ssh-keyscan using this method or the method below didn't work for me until I added the retries keyword . I was using newly created EC2 instances and the first time always returned nothing and the second time worked as expectedVarityper
this is not idempotentRomo

Following @Stepan Vavra's correct answer, a shorter version is:

- known_hosts:
    name: "{{ item }}"
    key: "{{ lookup('pipe', 'ssh-keyscan {{ item }},`dig +short {{ item }}`') }}"
  with_items:
   - google.com
   - github.com
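
Since this is a bare task, it needs a play around it to run; a minimal sketch of such a wrapper, executed against localhost as in the longer answer above (the dig part is dropped here, so entries are keyed on the hostname only):

- hosts: localhost
  connection: local
  tasks:
    # scan each host's public keys and add/update them in ~/.ssh/known_hosts
    - known_hosts:
        name: "{{ item }}"
        key: "{{ lookup('pipe', 'ssh-keyscan ' + item) }}"
      with_items:
        - google.com
        - github.com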
Schnabel answered 13/4, 2017 at 12:48 Comment(0)

Wouldn't doing something like this work for priming the known_hosts file:

ANSIBLE_HOST_KEY_CHECKING=false ansible all -m ping

This should connect to each host in the inventory, updating the known_hosts file without having to enter "yes" at each prompt, and then run the "ping" module on each host.

A quick test (deleting my known_hosts file then running the above, done on an Ubuntu 16.04 instance) seemed to populate the known_hosts file with their current fingerprints.

@Stepan Vavra's solution didn't work for me as I was using aliased hosts (I was connecting to internal IPs which didn't have DNS entries, so I wanted more descriptive names to refer to each host in the inventory and had the ansible_host variable point to the actual IP of each). Running the above was much simpler and primed my known_hosts file without having to disable host key checking in Ansible or SSH.
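
For reference, the aliased-hosts setup described above looks roughly like this in an INI inventory (hypothetical names and addresses):

[app]
web1 ansible_host=10.0.1.11
web2 ansible_host=10.0.1.12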

Viewfinder answered 2/6, 2017 at 17:17 Comment(1)
I like this approach, it encapsulates the configuration just for that command/run.Austerlitz

I've created this shell script (it also works from Jenkins, btw):

my_known_hosts="$HOME/.ssh/known_hosts"

## housekeeping: remove the previous backup ##
if [ -f "$my_known_hosts.old" ]
    then rm -f "$my_known_hosts.old"
fi

## back up the current known_hosts ##
if [ -f "$my_known_hosts" ]
    then mv "$my_known_hosts" "$my_known_hosts.old"
fi

## query aws for active hosts and add their keys to known_hosts ##
aws ec2 describe-instances --query 'Reservations[*].Instances[*].NetworkInterfaces[*].Association.PublicDnsName' --output text | xargs -L1 ssh-keyscan -H >> "$my_known_hosts"
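
For reference, a sketch of how it might be invoked ahead of the playbook in a Jenkins "Execute shell" step (update_known_hosts.sh, inventory and site.yml are hypothetical names):

./update_known_hosts.sh
ansible-playbook -i inventory site.yml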

https://admin-o-mat.blogspot.com/2020/09/ansible-and-aws-adding-hosts-to.html

Quintillion answered 6/9, 2020 at 0:11 Comment(0)

This is a little dated, but I didn't see anyone talking about this option, so I figured I'd add my 2 cents.

You can sign SSH keys with your own CA. Here are the instructions (a command-level sketch follows the list).

Steps:

  1. Create your own SSH Certificate Authority certificate and keys.
  2. Create your server or client keys.
  3. Use your CA certificate to sign the server or client keys.
  4. Add your CA to your known_hosts file on the client.
  5. Or add your CA to your authorized_keys file on the server.
  6. I would suggest using two different CAs for server-side and client-side tasks.
  7. You will not be prompted to add the server's public key to known_hosts, because the server presents a certificate signed by a CA you already trust. And people do not need to use passwords to log into your servers, as all your client keys will be signed with your CA.
  8. Count the money!!!!!
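
A minimal command-level sketch of the host-certificate side, assuming hypothetical names (host_ca, server01.example.com) and the default OpenSSH host key path; adapt to your own setup:

# step 1: create the CA key pair (keep host_ca private and offline)
ssh-keygen -t ed25519 -f host_ca -C "host CA"

# step 3: sign an existing server host key with the CA
ssh-keygen -s host_ca -I server01 -h -n server01.example.com -V +52w \
    /etc/ssh/ssh_host_ed25519_key.pub

# on the server: advertise the certificate in /etc/ssh/sshd_config and restart sshd
#   HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

# step 4: on each client, trust the CA for the whole domain in ~/.ssh/known_hosts
#   @cert-authority *.example.com <contents of host_ca.pub>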
Dialysis answered 30/9, 2022 at 15:57 Comment(0)

Regarding the accepted answer, the updated location of the documentation about host key checking is:

https://docs.ansible.com/ansible/latest/user_guide/connection_details.html#managing-host-key-checking

Estaminet answered 19/10, 2022 at 5:37 Comment(0)

(Based on @Stepan Vavra's answer, which works with the INI inventory format and direct IPs. What about a YAML inventory and more complicated formats?)

Known hosts for an inventory defined in YAML with options

If your inventory is something like:

test_servers:
  hosts:
    digitalOcean1:
      ansible_host: 135.251.62.54

@Stepan Vavra's answer should be modified to the following format:

---
# - https://mcmap.net/q/327272/-ansible-ssh-prompt-known_hosts-issue
# -- You can add ssh port to ssh-keyscan which is available like {{ hostvars[item]['ansible_port'] }}
# -- This play is especially helpful if using virtualized environment where the target hosts get re-imaged (thus the ssh pub keys get changed).
# -- for yaml format we use extract and join
# ---- https://mcmap.net/q/278189/-ansible-get-all-the-ip-addresses-of-a-group
#
- name: Store known hosts of 'all' the hosts in the inventory file
  hosts: localhost
  connection: local

  vars:
    # ssh_known_hosts_command: "ssh-keyscan -T 10"
    ssh_known_hosts_file: "{{ lookup('env','HOME') + '/.ssh/known_hosts' }}"
    ssh_known_hosts: "{{ groups['test_servers'] | map('extract', hostvars, ['ansible_host']) | join(',') }}"

  tasks:
    - name: For each host, scan for its ssh public key
      ansible.builtin.shell: "ssh-keyscan {{ item }},`dig +short {{ item }}`"
      with_items: "{{ ssh_known_hosts }}"
      register: ssh_known_host_results
      ignore_errors: true
      changed_when: false

    - name: Add/update the public key in the '{{ ssh_known_hosts_file }}'
      ansible.builtin.known_hosts:
        name: "{{ item.item }}"
        key: "{{ item.stdout }}"
        path: "{{ ssh_known_hosts_file }}"
      with_items: "{{ ssh_known_host_results.results }}"

We use the extract filter and join to pull the ansible_host property of each host out of hostvars:

"{{ groups['test_servers'] | map('extract', hostvars, ['ansible_host']) | join(',') }}"

Execution

ansible-playbook server/ansible/playbooks/site_10/known_hosts.yml --extra-vars ANSIBLE_HOST_KEY_CHECKING=false

If you are using pipenv, you can do something like

[scripts]
ansibleSetKnownHosts = "ansible-playbook server/ansible/playbooks/site_10/known_hosts.yml --extra-vars ANSIBLE_HOST_KEY_CHECKING=false"

and run with

pipenv run ansibleSetKnownHosts

Note:

--extra-vars ANSIBLE_HOST_KEY_CHECKING=false

does help in environments where you can't directly set environment variables, such as with pipenv.

Raposa answered 19/9, 2023 at 13:19 Comment(0)
