How to Automate Linux Server Management with Ansible | Step-by-Step Demo Using GitHub Project
Jul 29, 2025
Master Linux automation using Ansible in this complete step-by-step guide!
In this video, you will learn how to automate Linux server management tasks such as package installation, user creation, firewall setup, and much more using Ansible playbooks.
This demo is based on the GitHub project:
https://github.com/pradeepantil/linuxautomation
We will walk you through:
* Ansible installation & inventory setup
* Writing reusable and modular playbooks
* Automating tasks on Ubuntu and RHEL systems
* Common post-provisioning tasks on Linux servers
* Running playbooks using ansible-playbook commands
* Best practices for Ansible automation
Whether you're a DevOps engineer, Linux admin, or beginner learning automation, this tutorial is for you!
Chapters:
00:00 Intro
00:45 Why Ansible
Video Transcript
Are you tired of manually managing your Linux servers? Do you wish there was a way to automate repetitive tasks, ensure consistency, and save countless hours? If so, you have come to the right place. Welcome to our comprehensive guide on how to automate Linux server management with Ansible. Hey everyone, today we are diving deep into the world of Ansible, a powerful open-source automation tool that is a game-changer for system administrators, DevOps engineers, and anyone managing Linux servers. Whether you are a beginner just starting your automation journey or an intermediate user looking to streamline your workflows, this video is packed with practical insights and step-by-step demonstrations. Before we jump into the how, let's quickly understand the why.
Why choose Ansible over other automation tools? The first reason is its simplicity. Ansible is agentless, meaning you don't need to install any special software on the managed nodes. It communicates over standard SSH, making setup incredibly easy. The second reason is readability. Playbooks are written in YAML, a human-readable data format. This means that even if you are new to automation, you can quickly understand what a playbook is doing. The third reason is power and flexibility: from configuration management and application deployment to orchestration and provisioning, Ansible can handle a vast array of tasks across various Linux distributions like Ubuntu and RHEL. The fourth reason is idempotence. This is a fancy word, but it simply means you can run your playbooks multiple times and they will always bring your system to the desired state without unintended side effects. If a task is already done, Ansible won't redo it.
To follow along, you need a few things. A control node: this is where you will install Ansible and run all your playbooks; it's a Linux machine, and it can be RHEL or Ubuntu. Target servers: at least one Ubuntu and one RHEL server that you want to manage. SSH access: ensure your control node can SSH to your target servers using SSH keys for passwordless authentication. Basic Linux knowledge: familiarity with basic Linux commands and concepts, to understand the tasks we will automate. For the sake of time, I have already created the YAML files, the ansible.cfg, and the inventory file. I have uploaded them to my public GitHub repository, which I will link in the video description for your reference.
All right, let's clone that repository first on the controller node. This is my public GitHub repository, named linuxautomation, where I have placed all the YAML files along with the inventory and the ansible.cfg. Let's clone it using SSH. I have taken an SSH session to my controller node; then run the command git clone. It will create a folder with the name linuxautomation. Go to that folder and do ls. You will see all the YAML files.
Next, run the command ansible --version. The output confirms that we are running Ansible version 2.16.3 and that it is referring to this ansible.cfg file. This cfg file is nothing but the one that I have just cloned from the repository. One important thing: when you clone this repo on your controller node, make sure you run the ansible commands from this linuxautomation directory only. Otherwise, Ansible will fall back to the ansible.cfg in the default location under /etc/ansible.
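Put together, the setup steps on the control node look roughly like this (the SSH remote URL is inferred from the repository link in the description):

git clone git@github.com:pradeepantil/linuxautomation.git
cd linuxautomation
ls
ansible --version    # should report ./ansible.cfg as the config file in use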
Now set up the inventory file. As you can see, the inventory file is already there in my public repository. Let me cat this file first. Here, the Ubuntu servers and RHEL servers are our groups. ansible_host specifies the IP address against the hostname, while ansible_user and ansible_private_key_file define the SSH user and key for connecting to these servers. Remember to replace these IPs, the SSH user, and the key with your actual server details and SSH key path.
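As a rough sketch, such an inventory could look like the following; the group names, IPs, user, and key path here are placeholders rather than the repository's actual values:

[ubuntu_servers]
ubuntu-server-1 ansible_host=192.168.1.10

[rhel_servers]
rhel-server-1 ansible_host=192.168.1.20

[all:vars]
ansible_user=linuxtechi
ansible_private_key_file=~/.ssh/id_rsa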
Let's verify the connectivity from my controller node towards these managed nodes. For that, we can run the command ansible all -m ping. It will perform a ping-pong test; all means it will try to ping all the hosts mentioned in the inventory file. The SUCCESS output confirms that the controller node is able to reach the managed hosts ubuntu-server-1 and rhel-server-1.
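For reference, the connectivity test and the general shape of its output (names and extra fields will differ on your systems):

ansible all -m ping

ubuntu-server-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
rhel-server-1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}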
Now verify the ansible.cfg file as well. Just view this file once.
Under the [defaults] section, I specify the inventory; this is the path of my inventory file. The remote_user is the user that will be used while connecting to the managed hosts; in my case it's linuxtechi. And host_key_checking is false, so it won't perform host key verification. Under the [privilege_escalation] section, I have set become = true, become_method = sudo, become_user = root, and become_ask_pass = false, so while becoming the root user via sudo it won't prompt for a password. Exit the file now.
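A sketch of an ansible.cfg matching that description (the inventory path and remote user are illustrative):

[defaults]
inventory = ./inventory
remote_user = linuxtechi
host_key_checking = False

[privilege_escalation]
become = True
become_method = sudo
become_user = root
become_ask_pass = False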
All right, we have already covered how we set up our inventory file. We are using two Linux distributions here, one RHEL and one Ubuntu, defined under their respective groups. In this video tutorial, I will be covering basic server management tasks: system upgrades, where I will install available updates on the RHEL as well as the Ubuntu system; creating a non-root user with sudo privileges; configuring NTP synchronization; installing essential packages; setting the hostname and time zone; and lastly, installing a monitoring agent, the Prometheus Node Exporter.
Let's quickly see what the Ansible playbook structure looks like. We have key components: name describes the title of your playbook; hosts is the target server or group to run against; become means the Ansible user will become the root user using sudo; vars defines the variables that we are going to use in the playbook; tasks are the actions to perform; and handlers are triggered by notify directives. So these are the sample OS-specific tasks that I will be doing in this demo.
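A minimal skeleton showing those key components; the module choices and values here are illustrative, not taken from the repository:

- name: Example playbook structure
  hosts: all
  become: true
  vars:
    motd_text: "Managed by Ansible"
  tasks:
    - name: Deploy a message of the day
      ansible.builtin.copy:
        content: "{{ motd_text }}\n"
        dest: /etc/motd
      notify: Report change
  handlers:
    - name: Report change
      ansible.builtin.debug:
        msg: "/etc/motd was updated"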
Let's quickly jump to our first task: installing all the upgrades. Keeping your servers updated is crucial for security and performance. Let's create a playbook that will install all the available upgrades on the Ubuntu as well as the RHEL servers. I have already created this upgrade_playbook.yml file. hosts: all means it will be applied on all the hosts, and become: yes means it will become the root user using sudo. In this playbook, we have used the when condition. This allows us to apply distribution-specific tasks within a single playbook: for Ubuntu we use the apt module, and for RHEL we use the yum module. This is the syntax for when conditions; we are using apt on Ubuntu and dnf or yum on RHEL/CentOS.
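A sketch of such a conditional upgrade playbook; the task names and exact module arguments are illustrative, and the file in the repository may differ:

- name: Install all available upgrades
  hosts: all
  become: yes
  tasks:
    - name: Upgrade all packages on Ubuntu
      ansible.builtin.apt:
        upgrade: dist
        update_cache: yes
      when: ansible_facts['distribution'] == "Ubuntu"

    - name: Upgrade all packages on RHEL/CentOS
      ansible.builtin.dnf:
        name: "*"
        state: latest
      when: ansible_facts['os_family'] == "RedHat"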
In order to run this playbook, it is always recommended that you first do a dry run. For that, you can run the command ansible-playbook followed by the playbook name, in our case upgrade_playbook.yml, with --check. This performs a syntax validation and connectivity check without changing anything.
Now we are good to run this playbook. Run the command ansible-playbook followed by the playbook name, remove --check this time, and hit Enter.
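The two invocations, side by side:

# Dry run: report what would change, but change nothing
ansible-playbook upgrade_playbook.yml --check

# Real run: actually apply the upgrades
ansible-playbook upgrade_playbook.yml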
As we are installing all the available updates, it may take 5 to 10 minutes depending upon the internet speed of your managed hosts, because they will be downloading all the updates from the internet repositories. The output confirms that the playbook has been executed successfully. Let's validate whether all the updates have been installed on the managed hosts using Ansible ad-hoc commands.
Let's run the check first on the Ubuntu servers group, with an apt upgrade check: all the updates have been installed. Now verify on the RHEL server as well; change the group name to the RHEL servers group and update the command to sudo dnf update. On the RHEL server, too, all the updates have been installed.
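The validation ad-hoc calls could look like this; the group names follow the inventory sketch above, and the exact commands are assumptions (dnf check-update exits non-zero when updates are pending, hence the || true):

ansible ubuntu_servers -m shell -a "apt list --upgradable"
ansible rhel_servers -m shell -a "dnf check-update || true"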
Moving to the next task: creating a non-root user with sudo privileges. For that, I have a playbook, user_playbook.yml. In this playbook, I am running all the tasks on all the hosts. First, I define a username and its password under the vars section. I'm using the user module to create the user. Then, based on the Linux distribution, we add that user to the right group: if it is Ubuntu, we add the user to the sudo group; if it is RHEL, we add the user to the wheel group. We are using when conditions to identify the Linux distribution. And at the end, we make sure these users are part of the wheel group or the sudo group, depending upon the Linux distribution.
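A sketch of such a user-creation playbook; the variable names and password handling are illustrative (hashing with password_hash is one common approach and may require passlib on the control node):

- name: Create a non-root user with sudo privileges
  hosts: all
  become: yes
  vars:
    username: sysops
    user_password: "{{ 'ChangeMe123' | password_hash('sha512') }}"
  tasks:
    - name: Create the user
      ansible.builtin.user:
        name: "{{ username }}"
        password: "{{ user_password }}"
        shell: /bin/bash

    - name: Add the user to the sudo group on Ubuntu
      ansible.builtin.user:
        name: "{{ username }}"
        groups: sudo
        append: yes
      when: ansible_facts['distribution'] == "Ubuntu"

    - name: Add the user to the wheel group on RHEL
      ansible.builtin.user:
        name: "{{ username }}"
        groups: wheel
        append: yes
      when: ansible_facts['os_family'] == "RedHat"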
Let's try to run this playbook. For that, run the command ansible-playbook user_playbook.yml. If you want to debug a playbook, you can use the -v option; this will execute the playbook and give us the output in more verbose mode. It will fail, though, because we have not defined a valid username; we need to update the variable. Let's modify the playbook first: say the username is sysops, and set a password. Save the file and rerun it. If you see the output, it has created a home directory for the user sysops under /home/sysops. The username is sysops, and the password is whatever we have specified in the playbook.
Now validate whether this user is available on both managed hosts. For that, you can again run an ad-hoc Ansible command: we'll use all here, the module name is shell, and the command is id sysops.
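The check, as a single ad-hoc call:

ansible all -m shell -a "id sysops"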
The output confirms that on the Ubuntu server the sysops user exists with this ID and is part of the sudo group, and on the RHEL server the ID exists and is part of the wheel group. That means the user was created successfully on both nodes.
Moving to the third task, where we will configure NTP sync. Accurate time synchronization is vital for logging, security, and many applications. Let's configure NTP sync. We already have this ntp_playbook.yml file. As the name suggests, it is going to install the NTP package if it is not there and will start the NTP service. So, based on the Linux distribution: if it is Ubuntu, it will install the ntp and ntpdate packages; if it is RHEL or CentOS, it will install chrony and will start and enable the chrony service.
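A sketch of that distribution-aware NTP playbook; package and service names are typical defaults and may vary by release:

- name: Configure NTP time synchronization
  hosts: all
  become: yes
  tasks:
    - name: Install NTP packages on Ubuntu
      ansible.builtin.apt:
        name: [ntp, ntpdate]
        state: present
      when: ansible_facts['distribution'] == "Ubuntu"

    - name: Start and enable the NTP service on Ubuntu
      ansible.builtin.service:
        name: ntp
        state: started
        enabled: yes
      when: ansible_facts['distribution'] == "Ubuntu"

    - name: Install chrony on RHEL/CentOS
      ansible.builtin.dnf:
        name: chrony
        state: present
      when: ansible_facts['os_family'] == "RedHat"

    - name: Start and enable chronyd on RHEL/CentOS
      ansible.builtin.service:
        name: chronyd
        state: started
        enabled: yes
      when: ansible_facts['os_family'] == "RedHat"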
Now execute this playbook: run the command ansible-playbook ntp_playbook.yml and hit Enter.
The playbook has been executed successfully. If you look carefully, it made changes only on the Ubuntu server, because on RHEL servers chrony is part of the default OS installation, so chrony was already installed. Now we can verify the service status.
Run an ad-hoc Ansible command again, first for the Ubuntu servers group; this confirms that NTP is installed and the service is up and running on the Ubuntu servers. Now verify it on the RHEL servers: change the group, and in the command, in place of the NTP service we have chrony. This confirms that the NTP client, chrony, is up and running on the RHEL servers.
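The two checks might look like this; group names follow the inventory sketch above, and service names can vary slightly by release:

ansible ubuntu_servers -m shell -a "systemctl status ntp"
ansible rhel_servers -m shell -a "systemctl status chronyd"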
Now moving to the next task. In this task, we will install some commonly used essential packages like vim, git, curl, wget, net-tools, unzip, and htop. For this task, I have a playbook with the name packages_playbook.yml. The playbook will be executed on all the hosts. We have defined two variables: one for Ubuntu, common_packages_ubuntu, and another for RHEL. On Ubuntu I want to install these essential packages, and on RHEL these. Then I define the tasks: I'm installing these packages using the apt module on the Ubuntu distribution, and on RHEL I am using yum; you can use either yum or dnf.
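A sketch of the per-distribution package playbook; the variable names echo the transcript and the package lists are illustrative:

- name: Install commonly used essential packages
  hosts: all
  become: yes
  vars:
    common_packages_ubuntu: [vim, git, curl, wget, net-tools, unzip, htop]
    common_packages_rhel: [vim, git, curl, wget, net-tools, unzip, htop]
  tasks:
    - name: Install essential packages on Ubuntu
      ansible.builtin.apt:
        name: "{{ common_packages_ubuntu }}"
        state: present
        update_cache: yes
      when: ansible_facts['distribution'] == "Ubuntu"

    - name: Install essential packages on RHEL
      ansible.builtin.yum:
        name: "{{ common_packages_rhel }}"
        state: present
      when: ansible_facts['os_family'] == "RedHat"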
Now run this playbook to install these essential packages. The output confirms that the playbook has been executed successfully on rhel-server-1 and ubuntu-server-1. In the next task, I will set the hostname and time zone. Proper hostname and time zone settings are important for server identification and accurate timekeeping. In order to set the hostname on the managed hosts, I will be using the inventory_hostname variable.
The system_config playbook file is the playbook that I will use to configure the hostname and time zone. If you look at it, it will be executed on all the hosts. I have defined a variable named desired_timezone, where I specify which time zone I would like to set on my managed hosts. I'm using the hostname module to set the hostname, picking the name from the inventory, and the timezone module to set the time zone.
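A sketch of that playbook; the time zone value is illustrative, and the timezone module lives in the community.general collection:

- name: Set hostname and time zone
  hosts: all
  become: yes
  vars:
    desired_timezone: "Asia/Kolkata"
  tasks:
    - name: Set the hostname from the inventory name
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"

    - name: Set the time zone
      community.general.timezone:
        name: "{{ desired_timezone }}"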
Now try to execute this playbook. The playbook has been executed on both managed hosts. We used inventory_hostname to dynamically set each hostname to the name defined in our inventory file; so, looking at the inventory file, ubuntu-server-1 will be the hostname for the first host and rhel-server-1 for the second. Now we can verify it using an Ansible ad-hoc command: this shows the hostname, and we can similarly verify it on the Ubuntu servers. This confirms that the hostnames have been set up properly.
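One way to run that verification (hostnamectl prints the static hostname; timedatectl shows the configured time zone):

ansible all -m shell -a "hostnamectl --static"
ansible all -m shell -a "timedatectl"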
In the next step, we will install the Prometheus Node Exporter. It's a popular agent for collecting system metrics, and monitoring is key to maintaining healthy servers. In order to install the Node Exporter, we have a playbook with the name node_exporter_playbook.yml. Using this playbook, we will install the Prometheus Node Exporter on all the hosts. We have to handle multiple architectures, one AMD and one ARM, so depending upon the architecture we download the matching tar file. Then we create a group, create a node_exporter user, download the tar file, install the binary into a directory, and create a systemd service for the Node Exporter using the Jinja2 template node_exporter.service.j2, which is already there in my repository.
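A condensed sketch of such a playbook, plus the shape of the systemd unit template; the version number, paths, and architecture mapping are illustrative assumptions, not the repository's exact values:

- name: Install Prometheus Node Exporter
  hosts: all
  become: yes
  vars:
    node_exporter_version: "1.8.1"
    node_exporter_arch: "{{ 'arm64' if ansible_facts['architecture'] == 'aarch64' else 'amd64' }}"
  tasks:
    - name: Create node_exporter group
      ansible.builtin.group:
        name: node_exporter
        system: yes

    - name: Create node_exporter user
      ansible.builtin.user:
        name: node_exporter
        group: node_exporter
        shell: /usr/sbin/nologin
        system: yes
        create_home: no

    - name: Download and unpack the release tarball
      ansible.builtin.unarchive:
        src: "https://github.com/prometheus/node_exporter/releases/download/v{{ node_exporter_version }}/node_exporter-{{ node_exporter_version }}.linux-{{ node_exporter_arch }}.tar.gz"
        dest: /tmp
        remote_src: yes

    - name: Install the binary
      ansible.builtin.copy:
        src: "/tmp/node_exporter-{{ node_exporter_version }}.linux-{{ node_exporter_arch }}/node_exporter"
        dest: /usr/local/bin/node_exporter
        mode: "0755"
        remote_src: yes

    - name: Deploy the systemd unit from the Jinja2 template
      ansible.builtin.template:
        src: node_exporter.service.j2
        dest: /etc/systemd/system/node_exporter.service

    - name: Start and enable the service
      ansible.builtin.systemd:
        name: node_exporter
        state: started
        enabled: yes
        daemon_reload: yes

And node_exporter.service.j2 might look like:

[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=node_exporter
Group=node_exporter
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target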
And at the end, we start the service. Let's try to run this playbook.
The output confirms that the node exporter playbook has been executed on both hosts. Let's verify the node_exporter service on both hosts using an ad-hoc command; it will run systemctl status node_exporter on both nodes. Great, the output confirms that the Prometheus Node Exporter service is active and running on both nodes. To validate the installation further, we will try to fetch the metrics of these hosts using the curl command.
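The checks, roughly; Node Exporter listens on port 9100 by default, and the IP is a placeholder for your managed host:

ansible all -m shell -a "systemctl status node_exporter"
curl http://192.168.1.10:9100/metrics | head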
Great, I can fetch the metrics of my managed host. Now verify with the second host as well. Okay, I'm able to get the metrics of both hosts. If you have noticed, I have used separate playbooks for each of the tasks, but that's not the recommended way to perform a full server setup. It is always recommended to merge all the tasks into a single, comprehensive playbook for the complete server setup. So I have merged all of these tasks into a main comprehensive playbook, main_playbook.yml. If you look at the content of this playbook, it includes all the tasks we have covered up to this point. If you would like to set up a server and execute all these tasks in one go, it's better to run this main_playbook.yml file.
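The transcript describes main_playbook.yml as inlining all the tasks; an alternative sketch that achieves the same end by importing the individual playbooks (file names assumed from the steps above):

- import_playbook: upgrade_playbook.yml
- import_playbook: user_playbook.yml
- import_playbook: ntp_playbook.yml
- import_playbook: packages_playbook.yml
- import_playbook: system_config_playbook.yml
- import_playbook: node_exporter_playbook.yml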
Let's try to run this playbook as well. Run the command ansible-playbook main_playbook.yml and hit Enter.
All right, the output confirms that our main playbook has been executed successfully on both nodes. And there you have it: a comprehensive guide to automating Linux server management with Ansible. We have covered everything from setting up your inventory to performing common tasks like system upgrades, user creation, NTP sync, package installation, and hostname configuration, and even deploying a monitoring agent, the Prometheus Node Exporter. We have also shown you how to consolidate your tasks into a single, powerful playbook. Ansible is an incredibly versatile tool that can drastically improve your efficiency and consistency in managing infrastructure. Start small, experiment with these playbooks, and gradually expand your automation efforts.
If you found this video helpful, please give it a thumbs up, share it with your friends, and subscribe to our channel for more content on DevOps, system administration, and automation. Let us know in the comments what other Ansible topics you would like us to cover. Thanks for watching. See you in the next one.
