Ansible Playbooks, Roles, Templates and Variables

Jun 30, 2023 · 29 mins read

In the video below, we provide a crash course on Ansible, a great way to maintain and manage IT devices


Managing IT devices can be repetitive and boring, especially when it involves lots of computers

But they do need to be kept up to date, their configurations need to be kept in sync, a change may even need applying to multiple computers and so on

And these are things that Ansible can help with

It doesn’t require an agent to be installed on your computers and by creating what Ansible refers to as playbooks you can manage your IT devices much better

Useful links:
https://docs.ansible.com/ansible/2.9/modules/list_of_all_modules.html
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_privilege_escalation.html#become-directives
https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_reuse_roles.html

Overview and Assumptions:
Ansible involves using a central computer called the control node to manage other IT devices

Now if you’re the only person going to be using Ansible then you could install this on your Linux desktop, but in this video we’ll begin by setting up a dedicated computer because at a later date I want to be able to automate Ansible

As the video is meant to focus on Ansible, I’m going to assume you already have a computer that has remote SSH access to all other devices on your network as well as a user account with Sudo rights on those computers

And I’m also going to assume you have a computer you can setup as a control node

Install Ansible On Mgmt PC:
Although the goal is to have a control node to run Ansible from, installing Ansible on a dedicated computer puts us in a chicken and egg situation

We want to manage remote computers from this control node using Ansible but it won’t have access to them to begin with

So for that reason we’ll install Ansible on a computer that already has access and use Ansible from there to set up the control node and give it the access that it needs

I’m going to be using my management computer to do this from and the first thing we’re going to do is to install Ansible on it

To make sure we download the latest version, we’ll first update the list of packages available

sudo apt update
NOTE: If you don’t have sudo installed you can switch to the root account instead with “su -”

Next we’ll install Ansible

sudo apt install ansible -y

Inventories:
Ansible uses what’s known as an inventory file which contains a list of the devices you’re going to manage

First though, we’ll set up a folder in our home directory called ansible and then switch to that folder

mkdir ansible
cd ansible

This is so we can keep track of everything

Next, we’ll create an inventory file

nano inventory

[nodes]
192.168.101.10
192.168.101.11

[control]
192.168.200.15

Now save and exit

This file uses the INI format, which I find easier, but you can also use YAML if you prefer

I’ve used IP addresses, but you can use FQDNs or hostnames

If you do use either of these though, the computer will ultimately need to connect to an IP address so name resolution must work

Because this is a one-off job to provide access for Ansible from the control node, I’ve split things into two groups: the nodes and the control node itself

This allows me to carry out different tasks for the different groups
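
For comparison, the same inventory in YAML format would look something like this

all:
  children:
    nodes:
      hosts:
        192.168.101.10:
        192.168.101.11:
    control:
      hosts:
        192.168.200.15: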

Ad-Hoc Commands:
One thing you can do with Ansible is to run what are known as ad-hoc commands

So we’ll do this to test that Ansible has access to the computers in the inventory

As an example,

ansible -i inventory --private-key ~/.ssh/david-key all -m ping

We’ve used the -i parameter to tell Ansible where to find the inventory file

We’ve used the --private-key parameter to tell Ansible where to find our private key and in this case I’ve pointed to the one for my user account which has access to these computers

The all parameter is to tell Ansible to run this command for all host entries in our inventory file

And the -m parameter is to tell Ansible which module to use, in this case ping

Bear in mind, Ansible will actually login to the computers and not just simply ping them

What you should get back is confirmation that you have remote access to all of these computers; if not, you’ll need to fix this before continuing
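
For each host it can reach, the output will look something like this

192.168.101.10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}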

Now, if you are using password authentication, then provided the OpenSSH server allows password access, you could run a command like this instead

ansible -i inventory --user <username> --ask-pass all -m ping

Replacing <username> with your user account

This tells Ansible what user account to use and to prompt you for your password

TIP: If you get a warning about the Python interpreter you can disable this feedback by creating your own Ansible config file

nano ansible.cfg
[defaults]
interpreter_python=auto_silent

Now save and exit

Create Ansible SSH Keys:
Now while you can use your own account to run Ansible, we’re going to set up a dedicated account

This makes sense if you want to fully automate Ansible

The first thing we’re going to do is to create SSH keys for our new Ansible user

Now from a security perspective, you should really assign a passphrase to a private key in case it gets stolen, but that makes automation with Ansible difficult; every time you want to run a playbook you’d have to enter the passphrase, which makes it impractical

In which case, we’re going to have to bend the rules and not enter a passphrase

In this example, we’ll create a key using the ed25519 algorithm but I would suggest using a less obvious name than ansible

mkdir files
cd files
ssh-keygen -t ed25519 -f ansible-key -C "ansible@homelab.lan"
cd ..

The -t parameter is used because we don’t want to use RSA

The -f parameter is to provide the filename

And the -C parameter is to add a comment so we know who the key belongs to

Install Sudo Package:
Now we want Ansible to be able to login to our IT devices and make changes to them

And while it can use different methods to gain root access, only one method can be used per run

The problem is that Debian doesn’t install the Sudo package by default while Ubuntu doesn’t ask you to set the root password during the installation

So this is a problem if you have a mixture of computers

Now you shouldn’t be able to remotely login with the root password anyway, so to keep things simple we’re going to make sure that the Sudo package is installed on all computers

To do that we’ll create what Ansible calls a playbook

This is basically a list of tasks for Ansible to carry out, although in this case there is only one thing we want done

nano install_sudo.yml
- hosts: nodes
  become: true
  tasks:

  - name: install sudo package
    apt:
      name: sudo
      update_cache: yes
      cache_valid_time: 3600
      state: latest

In this playbook we’re telling Ansible to target the computers in the nodes group mentioned in the inventory file

And then to become the root user after it logs in

We then define the tasks to carry out

In this case there is only one task and that is to install the Sudo package, and we’ve given it a name as part of our auditing process

For the task we’re using the apt module and we’re telling this to install the sudo package

As part of this we want the repository cache updating, providing it’s less than an hour old

If Sudo is already installed then, while we’re here, we want it upgraded, and we do that by saying we want the latest version in the state parameter

To run this playbook we’ll run the following command

ansible-playbook install_sudo.yml -i inventory --private-key ~/.ssh/david-key --become-method su -K

Here we use the ansible-playbook command and tell it to run the install_sudo.yml playbook we created

Similar to the previous ad-hoc command, we point it to our inventory file and our private key

The --become-method parameter is to tell Ansible how to switch to root, and in this case we’re using su because Debian doesn’t have Sudo installed and sudo is the default method

The -K parameter is a shortened version of the --ask-become-pass parameter, which in this case is to prompt us for the root password

Now because we’ve used the su option, the playbook will fail on Ubuntu computers unless the root password was set after installation and matches the one provided

Although it’s expected Ubuntu computers will have Sudo installed, we can run this playbook again for them, to make sure Sudo is up to date

ansible-playbook install_sudo.yml -i inventory --private-key ~/.ssh/david-key -K

This time the playbook will switch to root using sudo and it will be run on all computers

But as you’ll see, only Ubuntu computers will show any changes because any Debian ones should now be up to date anyway

And that’s one of the major benefits of Ansible, it only changes things when it’s necessary

Create Ansible Account:
Because the control node needs to login to other computers, we need to create a user account for Ansible on multiple computers

We also want to upload the Ansible public key to all the managed nodes, because we’re using SSH key authentication and we want to give the user account Sudo rights

On the control node, however, we want to upload our own public key so we can login to it, and upload the SSH keys for the Ansible account so it can use them

In which case we’ll create another playbook for this

nano ansible_bootstrap.yml
- hosts: all
  become: true
  tasks:

  - name: create user ansible
    user:
      name: ansible
      shell: '/bin/bash'

- hosts: nodes
  become: true
  tasks:

  - name: add Ansible ssh key
    authorized_key:
      user: ansible
      key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEzvbDzPmQPjb43yuA9/0mHauMv6qqNeJcMJmho47oSf ansible@homelab.lan"

  - name: add ansible to sudoers
    copy:
      src: sudoer_ansible
      dest: /etc/sudoers.d/ansible
      owner: root
      group: root
      mode: 0440

- hosts: control
  become: true
  tasks:

  - name: add personal ssh key for ansible
    authorized_key:
      user: ansible
      key: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICNpFKzUbWKmAvWtajir9POB1m6eQYFphXGGRLu2Do3N david@homelab.lan"

  - name: upload public key
    copy:
      src: ansible-key.pub
      dest: /home/ansible/.ssh/ansible-key.pub
      owner: ansible
      group: ansible
      mode: 0644

  - name: upload private key
    copy:
      src: ansible-key
      dest: /home/ansible/.ssh/ansible-key
      owner: ansible
      group: ansible
      mode: 0600

Now save and exit

So in this playbook we have three plays, and in the first one we target all computers and create an ansible user account using the user module, setting the shell to /bin/bash

TIP: If you don’t specify the shell you’ll likely find that when you login to the control node, the prompt will be empty and you’ll have problems with things like autocompletion and command history

We don’t provide a password for the account though as we don’t want anybody to login as Ansible using a password

In the second play we target the computers being managed and use the authorized_key module to upload the public SSH key we created earlier for Ansible

Because Ansible needs elevated rights we use the copy module to copy a file to the /etc/sudoers.d/ folder, setting the ownership to root:root and restricting access to it

A reason for setting Sudo rights this way is we can remove them by simply deleting the file. But we also want to allow Ansible to become root without having to provide a password, which is best done by adding a file to the folder rather than altering the /etc/sudoers file

For the control node itself, we use the authorized_key module to upload our own public SSH key for the Ansible user account

The reasoning for this is that multiple users may need access to Ansible, but we want them to use the Ansible account to run Ansible

Each of them will login with their own SSH key, which requires a passphrase to use, and access can be removed by deleting their public key from the list

After that we upload the SSH keys we created earlier for the Ansible user

Speaking of files, we need to create one for Sudo rights

nano files/sudoer_ansible
ansible ALL=(ALL) NOPASSWD: ALL
Now save and exit

In that file we’re telling Sudo to give the ansible user rights to run all commands and that a password isn’t needed

At this stage it’s difficult to know what commands Ansible needs access to but these rights can always be reduced at a later date

Now although this is bad from a security perspective, we don’t want a password prompt when switching to root as it would break automation

To run this playbook we use a command like the following

ansible-playbook ansible_bootstrap.yml -i inventory --private-key ~/.ssh/david-key -K
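
TIP: If you want to double check the rights on one of the nodes, Sudo itself can list what the ansible account is allowed to run

sudo -l -U ansible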

Install Ansible on Control Node:
The next thing to do is to install Ansible on the control node

So we’ll create another playbook to do that

nano install_ansible.yml
- hosts: control
  become: true
  tasks:

  - name: install ansible package
    apt:
      name: ansible
      update_cache: yes
      cache_valid_time: 3600
      state: latest

This playbook is similar to what was done for Sudo except it installs Ansible

Then we’ll run this playbook with the following command

ansible-playbook install_ansible.yml -i inventory --private-key ~/.ssh/david-key -K

UFW Firewall Rules:
From a security perspective the control node should be behind a firewall to limit access, in which case the rules on that firewall may need updating to allow the control node to access other devices using SSH

And if the computers being managed are using a software firewall such as UFW, those rules will need updating to allow SSH access

For example

nano firewall.yml
- hosts: nodes
  become: true
  tasks:

  - name: allow SSH access
    ufw:
      rule: limit
      proto: tcp
      from_ip: '192.168.200.15'
      to_port: '22'
Now save and exit

In this playbook we target all of the nodes and use the ufw module to add a firewall rule to allow access from the control node

UFW allows you to limit access rather than just allow it, which is useful for SSH as we want to delay failed login attempts

To run this playbook we’ll run the following command

ansible-playbook firewall.yml -i inventory --private-key ~/.ssh/david-key -K

NOTE: This isn’t necessary, and won’t work, if UFW isn’t already installed and running

Ansible.cfg:
At this stage we should be able to test Ansible from our control node

To do that we’ll login as ansible, and in my case this would be as follows

ssh ansible@ansible

NOTE: Because the .ssh/config file on my computer has already been configured to use my SSH key to login to the control node, I only need to reference the ansible user to login with that different user account
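
For illustration, the relevant entry in my .ssh/config file would look something like this, where the alias, address and key path are just examples

Host ansible
    HostName 192.168.200.15
    User david
    IdentityFile ~/.ssh/david-key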

It makes sense to set up some form of file structure for Ansible, for instance you might want to create a Projects folder and within that have a sub-folder for each project

In my case, this is a lab and I’m going to create a Live folder and a Test folder

mkdir Live
mkdir Test

The first one is to manage the lab itself, the second will be for testing

As we’re doing testing, I’ll switch to that folder

cd Test

Then create an inventory file

nano inventory

[ntpservers]
192.168.101.10

[nameservers]
192.168.101.11
Now save and exit

The idea is to break hosts up into groups that reflect their purpose, for example, web servers, file servers, etc

This way you can run plays that are specific to that group as they’ll all be configured in the same way

I don’t have any application servers in my lab, but I do have different infrastructure servers, although in this case these are just test servers that I want to demo Ansible with

TIP: Unless you want to confirm the fingerprints of all hosts as you connect to them for the first time, you will want to disable host key checking

To do that we’ll override the default behaviour of Ansible by creating an ansible.cfg file in our working folder

nano ansible.cfg
[defaults]
host_key_checking=False
Now save and exit

Before we go any further we should make sure that Ansible can access these computers, so we’ll run an ad-hoc command

ansible -i inventory --private-key ~/.ssh/ansible-key all -m ping

And to simplify our commands going forward we’ll add some additional defaults

nano ansible.cfg

[defaults]
host_key_checking=False
inventory = inventory
private_key_file = ~/.ssh/ansible-key

Now save and exit

This time, we can run that same command as follows

ansible all -m ping

NOTE: You can override these if you like by using parameters in the command line as we’ve been doing so far
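
And rather than targeting all, you can target a specific group from the inventory, for example

ansible ntpservers -m ping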

Site Playbook:
How you use Ansible and playbooks is up to you

Maybe you want to use it for projects and so you create a different playbook for each project

A recommendation though is to set up a playbook for your entire site so that you can keep your computers up to date or roll out changes more easily

Rather than having a very long playbook, you can break things up into separate files known as roles which the playbook then calls on

Though how you use roles is again up to you

Maybe you want a base role for all of your computers, another for your web servers, another for your file servers, etc

A recommendation is to create self-contained roles that can be re-used

My goal is to carry out common tasks for all computers and then have more specific tasks depending on a computer’s purpose

In which case we’ll first create a site playbook

nano site.yml
- hosts: all
  become: true
  pre_tasks:

  - name: update repository cache
    apt:
      update_cache: yes
      cache_valid_time: 3600

  roles:
    - default_firewall

- hosts: all:!ntpservers
  become: true
  gather_facts: false
  roles:
    - role: chrony
      server: false

- hosts: ntpservers
  become: true
  gather_facts: false
  roles:
    - role: chrony
      server: true

Now save and exit

For this playbook we have three plays

The first play targets all computers in the inventory and uses the apt module to update their repository cache before we then run a role called default_firewall

The second play targets all computers except those in the ntpservers group. This will call on a role called chrony and pass it a variable called server that is set to false

The third play targets computers in the ntpservers group. This will call the same chrony role but this time server will be set to true

The pre_tasks section incidentally is used when you want to ensure something is done before anything else and goes before the tasks section

There’s also a post_tasks section that you can use. This makes sense when you want to do clean-up jobs for instance, and it goes at the end of the play
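
As a minimal sketch of the ordering, with placeholder task names and the debug module standing in for real work

- hosts: all
  pre_tasks:

  - name: runs first, before any roles
    debug:
      msg: 'pre_tasks run before roles and tasks'

  tasks:

  - name: runs after any roles
    debug:
      msg: 'tasks run after roles'

  post_tasks:

  - name: runs last
    debug:
      msg: 'post_tasks run at the end of the play'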

In this case, we want to update the repository cache on all computers as we don’t want to waste time and resources doing updates every time a package needs to be installed or updated

The roles section is similar to the tasks section except rather than having a list of tasks, it has a list of roles to call upon which contain those tasks

In this example we’re calling on a role called default_firewall which we’ll use to make sure UFW is installed and up to date, has our default firewall rules configured and is enabled

The second and third plays exist because chrony is an unusual package in that it can be configured as both a server and a client, but it uses the same configuration file

I want chrony installed on all computers but any servers I have need to point to an Internet server and allow access from internal computers, while clients should point to the internal server

So to keep the chrony role self-contained, I’ve split this into two plays and will pass a variable that will let the role decide which configuration file to upload and if firewall rules should be applied

One thing to point out is how a role can be declared differently

In play one, there is a roles section and the roles are simply listed by name under it

In plays two and three, however, when a role needs information, each entry begins with role: followed by the name of the role and then the variables to pass to it

This makes sense because Ansible needs to know which role to target with the variables being listed
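
In fact, both forms can be mixed in the same roles section

  roles:
    - default_firewall
    - role: chrony
      server: true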

Roles:
Now roles aren’t simply a file that you include, there’s an entire folder structure behind them which looks like this

roles/
    common/               # this hierarchy represents a "role"
        tasks/            #
            main.yml      #  <-- tasks file can include smaller files if warranted
        handlers/         #
            main.yml      #  <-- handlers file
        templates/        #  <-- files for use with the template resource
            ntp.conf.j2   #  <------- templates end in .j2
        files/            #
            bar.txt       #  <-- files for use with the copy resource
            foo.sh        #  <-- script files for use with the script resource
        vars/             #
            main.yml      #  <-- variables associated with this role
        defaults/         #
            main.yml      #  <-- default lower priority variables for this role
        meta/             #
            main.yml      #  <-- role dependencies
        library/          # roles can also include custom modules
        module_utils/     # roles can also include custom module_utils
        lookup_plugins/   # or other types of plugins, like lookup in this case

    webtier/              # same kind of structure as "common" was above, done for the webtier role
    monitoring/           # ""
    fooapp/               # ""

To begin with you need to create a folder called roles

Below that you create individual folders named after the roles themselves

And within each one of those you create at least one folder called tasks

The YAML file for the role itself needs to be called main.yml and this can contain a list of tasks to be carried out

For example, when we tell Ansible to use the role called chrony in the above playbook, it will be looking for the file roles/chrony/tasks/main.yml

Because we are just starting, we’ll create the folders with one command

mkdir -p roles/{default_firewall,chrony}/tasks

We’ll then create our file for the default_firewall role

nano roles/default_firewall/tasks/main.yml

- name: install UFW
  apt:
    name: ufw
    state: latest

- name: allow SSH access
  ufw:
    rule: limit
    proto: tcp
    from_ip: '{{ item }}'
    to_port: '22'
  loop:
    - 192.168.200.10
    - 192.168.200.15

- name: Enable UFW
  ufw:
    state: enabled

Now save and exit

For this role we make sure that UFW is installed and up to date

Then we make sure the relevant SSH access is allowed by using the ufw module to add some rules

TIP: Rather than having two tasks for each computer, the loop keyword is used to shorten this to one task

Finally we make sure UFW is enabled

Next we’ll create our file for the chrony role

nano roles/chrony/tasks/main.yml

- name: install chrony package
  apt:
    name: chrony
    state: latest

- name: template client config
  template:
    src: chrony_client.conf.j2
    dest: /etc/chrony/chrony.conf
    owner: root
    group: root
    mode: 0644
  when: server == false
  notify: restart_chrony

- name: template server config
  template:
    src: chrony_server.conf.j2
    dest: /etc/chrony/chrony.conf
    owner: root
    group: root
    mode: 0644
  when: server == true
  notify: restart_chrony

- name: ntp server firewall rules
  ufw:
    rule: allow
    proto: udp
    from_ip: '{{ item }}'
    to_port: '123'
  when: server == true
  loop:
    '{{ Subnets }}'

Now save and exit

For this role we make sure that chrony is installed and up to date

We then want to make sure the config file for chrony is in sync with what we want. That depends on whether the computer is a client or a server, so we use the when keyword and the value passed through from the site.yml playbook to determine if this is for an NTP client or an NTP server

Because things like IP addresses can change over time, we will also use the template module rather than copying a file as is

And because chrony uses a config file, if the file changes we’ll need to restart the service

Now there’s no need to restart the service all the time, so we use the notify keyword which calls a handler called restart_chrony, but only if a change was made

For the NTP server, we also need to update its firewall rules

As a demonstration, this time I’ve used a variable when defining the items in the loop, as it should be easier to maintain a variables file than an entire YAML file

Templates for Roles:
Now although Ansible has a copy module that lets you copy files, we’re using templates in our chrony role instead

The reason for that is that you can define variables in templates, so if an IP address changes for instance it’s easier for me to update the variables file

The first thing we need to do is to create a folder called templates for our chrony role

mkdir roles/chrony/templates

First we’ll create a template file for our servers

nano roles/chrony/templates/chrony_server.conf.j2
# Include configuration files found in /etc/chrony/conf.d.
confdir /etc/chrony/conf.d

# Use external server
server {{ external_server }} iburst

# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Save NTS keys and cookies.
ntsdumpdir /var/lib/chrony

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
rtcsync

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

# Get TAI-UTC offset and leap seconds from the system tz database.
# This directive must be commented out when using time sources serving
# leap-smeared time.
leapsectz right/UTC

# Allow NTP access
{% for Subnet in Subnets %}
allow {{ Subnet }}
{% endfor %}

Now save and exit

The file has a j2 extension to represent Jinja2 which Ansible uses for templates

The contents are pretty much the default config file although I’ve removed other methods of defining an NTP server

In addition, I’ve set the NTP server as {{ external_server }} where external_server is a variable

Because this is for an NTP server, I’m adding allow statements at the end to allow NTP access from the internal network

And as there will be multiple subnets I’ve used a for loop to add a line for every Subnet I declare

In this case, Subnets is the name of the variable, which will be a list, and Subnet is a loop variable I’ve created so that it can be referenced in the allow statement

What Ansible will then do is to create a line for every entry in the Subnets list
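
So given the Subnets list we’ll define in the vars file later on, the end of the rendered file will read

allow 192.168.101.0/24
allow 192.168.102.0/24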

Similarly, we’ll create a template file for our clients

nano roles/chrony/templates/chrony_client.conf.j2
# Include configuration files found in /etc/chrony/conf.d.
confdir /etc/chrony/conf.d

# Use internal server
server {{ internal_server }} iburst

# This directive specify the location of the file containing ID/key pairs for
# NTP authentication.
keyfile /etc/chrony/chrony.keys

# This directive specify the file into which chronyd will store the rate
# information.
driftfile /var/lib/chrony/chrony.drift

# Save NTS keys and cookies.
ntsdumpdir /var/lib/chrony

# Uncomment the following line to turn logging on.
#log tracking measurements statistics

# Log files location.
logdir /var/log/chrony

# Stop bad estimates upsetting machine clock.
maxupdateskew 100.0

# This directive enables kernel synchronisation (every 11 minutes) of the
# real-time clock. Note that it can’t be used along with the 'rtcfile' directive.
rtcsync

# Step the system clock instead of slewing it if the adjustment is larger than
# one second, but only in the first three clock updates.
makestep 1 3

# Get TAI-UTC offset and leap seconds from the system tz database.
# This directive must be commented out when using time sources serving
# leap-smeared time.
leapsectz right/UTC

Now save and exit

In this case our server entry refers to the variable {{ internal_server }}

And because this is for clients it doesn’t need any allow statements

Variables for Roles:
Because we’re using external variables, we need to create another folder for these which is called vars

mkdir roles/chrony/vars

We then need to create a file to store variables in

nano roles/chrony/vars/main.yml

external_server: time.cloudflare.com
internal_server: 192.168.101.10
Subnets:
 - 192.168.101.0/24
 - 192.168.102.0/24

Now save and exit

Here we’ve defined the external and internal servers

Then we have our list of subnets

Now since we’ve gone to this trouble of setting up variables for our chrony role, it would make sense for me to revisit the default_firewall role and use them there as well
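
As a rough sketch of that idea, the two IP addresses in the loop could move into a roles/default_firewall/vars/main.yml file, where the admin_hosts variable name is just an example

admin_hosts:
  - 192.168.200.10
  - 192.168.200.15

The allow SSH access task would then reference the variable in its loop instead of the hard-coded addresses

  loop:
    '{{ admin_hosts }}'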

But hopefully you’ll have a better understanding of the different ways you can place information in roles

Handlers for Roles:
As mentioned earlier, chrony will need a restart if the configuration file is changed

While you can include that as a task, a preferred option is to use what Ansible refers to as handlers

This is to avoid situations where multiple tasks require a service restart and so the service is restarted a lot, or the last task doesn’t need a service restart and so the service doesn’t get restarted

By having a handler, it doesn’t matter how many times it gets notified, the service will only be restarted once and only if it’s necessary

For roles, handlers are found in a handlers folder so we need to create one for chrony

mkdir roles/chrony/handlers

Next we’ll create our file for the chrony role handler

nano roles/chrony/handlers/main.yml

- name: restart_chrony
  service:
    name: chrony
    state: restarted

Now save and exit

There is only one file needed and you can define multiple handlers in here

In this case we’re only concerned with chrony, but note how the name matches what notify references in the role file

What we do here though is to use the service module to make sure that chrony gets restarted

Test Site Playbook:
It’s now time to test this site playbook and we’ll do that with the following command

ansible-playbook site.yml

Now what we’ll expect from this playbook is that the firewall settings and chrony are updated on all computers, but only if necessary

NTP servers will be skipped when NTP clients are updated, but so far I haven’t come up with a better alternative without introducing extra admin work

Going forward though, I’ll create more roles and keep updating this playbook, to reach a point where I can maintain all of the devices on this network through Ansible

Dry Runs:
One final thing I’ll cover is how to test Ansible playbooks, or at least to a certain extent, because you can do a dry run

To do this add the --check parameter to your Ansible commands, for example

ansible-playbook site.yml --check

Ansible will then tell you what would have happened but it won’t actually make any changes, even though it looks like it has

There are limits though because it might not be able to do things if an earlier task wasn’t carried out, in which case it will complain

All the same, it’s still a useful way to test your playbooks or ad-hoc commands
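
TIP: You can also add the --diff parameter alongside --check to see the changes that would be made to files, such as the templated chrony.conf

ansible-playbook site.yml --check --diff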

But with great power comes great responsibility

It’s best to test your playbooks in a lab environment first before rolling them out to a production environment

They should be more or less the same, save for maybe the IP addressing for instance

In which case, a playbook that worked on your lab servers should work on the production ones as well
