Ansible Dynamic Inventory + AWS EC2

Configuration Management using Ansible Dynamic Inventory

In this story, I am going to illustrate configuration management using Ansible Dynamic Inventory on AWS EC2 instances.

What is Ansible Dynamic Inventory and what is the need for dynamic inventory?
There are two types of Inventories in Ansible.
1. Static Inventory
2. Dynamic Inventory

Static Inventory: an inventory file in which you define the hostnames, host IPs, and host variables of the slave machines for configuration management.

In real-world environments there can be hundreds of machines that need configuration management, so writing all those hostnames and host IPs into a static inventory file by hand does not scale. There is also the scenario where a resource (an EC2 instance) is created at runtime, after the inventory file was written.

To overcome this, Ansible offers Dynamic Inventory, which discovers hosts at runtime so you do not need to hard-code hostnames and host IPs. For EC2, dynamic inventory is driven by the ec2.ini and ec2.py scripts, which are included in the GitHub repository linked at the end of this story.
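To give a feel for what the script returns, ec2.py supports a --list flag that prints the inventory as JSON, with hosts grouped by attributes such as region and tags, and per-host variables under _meta. A trimmed, illustrative sample (all hostnames, IPs, and tag values below are made up):

```json
{
  "us-east-1": ["3.86.21.10", "3.86.21.11"],
  "tag_Name_ansible_slave": ["3.86.21.10", "3.86.21.11"],
  "_meta": {
    "hostvars": {
      "3.86.21.10": {
        "ec2_instance_type": "t2.micro",
        "ec2_tag_Name": "ansible-slave"
      }
    }
  }
}
```

Any of those group names (for example a tag-based group) can then be used as the hosts pattern in a playbook.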

This setup also lets us skip the usual ssh-keygen routine of generating a key pair and copying the public key to every slave machine; instead, Ansible authenticates with the instances' existing AWS private key (.pem).

Here I am going to show, hands-on, how configuration management is done using dynamic inventory. As the example for this story, we will install Apache (httpd) on the slave machines.
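As a sketch of what such a playbook can look like (the group name tag_Name_ansible_slave assumes the slaves carry a Name=ansible-slave tag, and the filename is hypothetical; the actual playbook in the repository may differ):

```yaml
# httpd-install.yml — hypothetical sketch; adjust the hosts pattern to your tags
- hosts: tag_Name_ansible_slave
  remote_user: ec2-user
  become: yes
  tasks:
    - name: Install Apache
      yum:
        name: httpd
        state: present

    - name: Start and enable Apache
      service:
        name: httpd
        state: started
        enabled: yes
```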

Prerequisites:
Launch three EC2 RHEL 8 instances (one master and two slaves) with SSH allowed in the inbound rules of their Security Groups.

Add tags to the two slave EC2 instances in the Tags section as shown below. (You can use your own tags; just change the playbook accordingly.)

(Image: tags of the slave instances)

Here I used the tags to pass as the “host” pattern in the playbook YAML file. You might object that adding tags is a manual step, but in practice resources are usually created with CloudFormation or Terraform, where tags are applied automatically at resource-creation time.
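The reason tags work as host patterns is that ec2.py turns each tag into an inventory group named tag_&lt;Key&gt;_&lt;Value&gt;, replacing characters other than letters, digits, and underscores with underscores. A quick shell sketch of that naming rule:

```shell
# How ec2.py derives a group name from an EC2 tag (naming rule only)
tag_key="Name"
tag_value="ansible-slave"

# Replace every character outside [A-Za-z0-9_] with an underscore
safe_value=$(printf '%s' "$tag_value" | tr -c 'A-Za-z0-9_' '_')
group="tag_${tag_key}_${safe_value}"

echo "$group"   # tag_Name_ansible_slave
```

So a slave tagged Name=ansible-slave ends up in the group tag_Name_ansible_slave, which is what the playbook targets.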

Log in to the EC2 RHEL master instance.

[ec2-user@ansible-master ~]$ sudo -i
[root@ansible-master ~]# hostnamectl set-hostname ansible-master
[root@ansible-master ~]# exec bash   # restart the shell so the prompt picks up the new hostname
[root@ansible-master ~]# yum install git-all -y
[root@ansible-master ~]# git clone https://github.com/pvprasad257/ansible-example.git
[root@ansible-master ~]# cd ansible-example/

[root@ansible-master ansible-example]# ./installation.sh

Step 1:
Insert your AWS access key and secret key at the end of /etc/ansible/ec2.ini.
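The keys belong in the [credentials] section near the end of ec2.ini. The values below are placeholders; never commit real keys to version control:

```ini
# At the end of /etc/ansible/ec2.ini — placeholder values only
[credentials]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```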

Step 2:
Copy the private key of the instances you want to configure (here, the private key of the slave machines) to the master machine.

To copy the .pem file to the master EC2 instance, go to the directory that contains the .pem file on your local machine and run the command below.

In my case, the .pem file is in the Downloads folder:
Syntax:
# scp -i <private key of master instance> <private key of slaves> ec2-user@<public-ip of master>:/tmp

$ scp -i aws_vara.pem aws_vara.pem ec2-user@35.154.79.111:/tmp/

Then, on the master, restrict the key's permissions so SSH will accept it:

[root@ansible-master ~]# chmod 400 /tmp/aws_vara.pem

Step 3:
Edit the inventory and private-key paths in the ansible.cfg file, at line 14 and line 136 respectively:

inventory = /etc/ansible/ec2.py

private_key_file = /tmp/aws_vara.pem

Working with dynamic inventory:
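With everything configured, a typical session looks like the following. The group name and playbook filename are assumptions; substitute your own tags and the playbook from the repository. These commands need a live AWS setup and valid credentials to run:

```
[root@ansible-master ansible-example]# /etc/ansible/ec2.py --list              # dump the discovered inventory as JSON
[root@ansible-master ansible-example]# ansible tag_Name_ansible_slave -m ping  # check connectivity to the tagged slaves
[root@ansible-master ansible-example]# ansible-playbook httpd-install.yml      # run the playbook against the dynamic inventory
```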

Verify:

Verify on the slave machines that Apache (httpd) has been installed.
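One way to verify without logging in to each slave is an ad-hoc command through the same dynamic inventory (the group name is an assumption based on the tags used earlier):

```
[root@ansible-master ~]# ansible tag_Name_ansible_slave -m shell -a "httpd -v"
```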

Github link: https://github.com/pvprasad257/ansible-example.git

Conclusion:
This Medium story explained the process of configuration management using an Ansible dynamic inventory. We used the ec2.ini and ec2.py scripts to drive the installation of Apache on the slave machines.

Senior DevOps and AWS Engineer. I am an AWS Certified Solution Architect Professional.