In this tutorial, you'll learn how to install Kubernetes on RHEL 10 using the CRI-O container runtime, with one control plane node and one worker node.
This step-by-step guide is beginner friendly, production-aligned, and perfect for:
- Learning Kubernetes fundamentals
- RHEL-based Kubernetes labs
- DevOps & Cloud practice environments
Setup Overview
OS: RHEL 10
Kubernetes Version: v1.34
Container Runtime: CRI-O
Cluster Type: 1 Control Plane + 1 Worker Node
Prerequisites
- Two RHEL 10 systems (VM or Physical)
- Root or sudo access
- Active Red Hat subscription
- Minimum: 2 vCPU, 2 GB RAM
- Internet connectivity between nodes
Video Transcript

Hello everyone, welcome back to the channel. In today's tutorial, we will show you how to install Kubernetes on RHEL 10 using one control plane node, one worker node, and CRI-O as the container runtime. This setup is perfect for learning Kubernetes fundamentals and practicing enterprise-style Kubernetes on RHEL in lab or proof-of-concept environments. We will follow a step-by-step approach, and I will also explain why each step is needed so even beginners can follow along easily.

Before starting, make sure you have two RHEL systems, VMs or physical servers: one will be the control plane node and the other will be the worker node. We need sudo or root access on these VMs, an active Red Hat subscription, a minimum of 2 vCPUs and 2 GB RAM, and internet connectivity on both nodes. Kubernetes runs multiple background services, so enough CPU and memory are important. The commands I use in this tutorial are shared in the description of the video for your reference. Without any delay, let's jump into the Kubernetes installation steps on RHEL 10. I have already opened SSH sessions to both VMs.
This is my first VM, which I will configure as the control plane, and this is my second VM, which I will configure as the worker node for this Kubernetes cluster. If you run ip addr show, or the short form ip a s, you will see the IP address of this VM. Similarly, if you run the same command on the worker node, you will get the IP address of that VM.
The first step is to install all available updates. To install all available updates on a RHEL 10 system, we need to run the command sudo dnf update. As you can see, all the updates are already installed on my VM. Similarly, run the same command on the other VM. On the second VM, all the updates are already installed as well.
Next, set the hostname on these VMs. We will be using the hostnamectl command to set the hostname. Run this command on the control node to set the hostname to k8s-control. Similarly, run this command on the worker node, the second VM in our case. You can also run exec bash to reload the shell and confirm that the hostname is set properly.
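The hostname step can be sketched as the commands below. k8s-control matches the node name shown later in the kubectl get nodes output; k8s-worker is an assumed name for the second VM, so substitute your own if you prefer.

```shell
# On the first VM: name it k8s-control (the node name seen later in `kubectl get nodes`)
sudo hostnamectl set-hostname k8s-control

# On the second VM: "k8s-worker" is an assumed name -- pick any hostname you like
sudo hostnamectl set-hostname k8s-worker

# Reload the shell so the prompt reflects the new name, then verify
exec bash
hostnamectl status
```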
Next, update the /etc/hosts file. We need to add the IP address and hostname of each VM so that the nodes can resolve each other without DNS issues. We will run this tee command, which will append these entries to the /etc/hosts file. All right, the entries are working fine.
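A sketch of that tee command follows. 192.168.1.97 is the control plane IP that appears later in the video; the worker IP is a placeholder assumption, so use the addresses ip a s printed on your own VMs.

```shell
# Example addresses: 192.168.1.97 is the control plane IP from the video;
# the worker address is a placeholder -- substitute your own values.
CONTROL_IP=192.168.1.97
WORKER_IP=192.168.1.98

# Append the entries on BOTH nodes so each can resolve the other by name
sudo tee -a /etc/hosts <<EOF
$CONTROL_IP k8s-control
$WORKER_IP k8s-worker
EOF
```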
Next, disable swap on both the nodes using the swapoff command; Kubernetes expects swap to be disabled so the kubelet can manage memory predictably.
In order to keep swap disabled across reboots as well, we need to comment out the swap entry in the /etc/fstab file. For that, we need to run this sed command.
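The two swap steps can be sketched like this. The sed pattern is an assumption about how the fstab entry is matched, so review /etc/fstab afterwards to confirm only the swap line was commented.

```shell
# Turn swap off immediately
sudo swapoff -a

# Comment out any fstab line that mentions swap so it stays off after reboot
# (a simple pattern match -- review /etc/fstab to confirm)
sudo sed -i '/swap/ s/^/#/' /etc/fstab

# Verify: the swap line should now start with '#'
grep swap /etc/fstab
```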
This output confirms that swap has been disabled in /etc/fstab as well. Next, load the required kernel modules; these modules are needed for containers to work properly. First we will create a .conf file in the /etc/modules-load.d directory, in which we will list the modules we would like to load at boot time, and then we will use the modprobe command to load the kernel modules on the fly. Before that, let's set SELinux to permissive mode using the setenforce command.
Run this command on both the nodes. To keep SELinux permissive across reboots as well, we need to change the SELinux policy from enforcing to permissive in the /etc/selinux/config file. Run this sed command. Now check the status of SELinux by running getenforce.
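The SELinux steps sketched as commands: setenforce 0 switches to permissive immediately, and the sed edit makes the change persistent.

```shell
# Switch SELinux to permissive mode right now (no reboot needed)
sudo setenforce 0

# Make the change persistent across reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Should print "Permissive"
getenforce
```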
Now load the modules. First, let's create the k8s.conf file, then load these kernel modules using the modprobe command.
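A sketch of the module configuration. overlay and br_netfilter are the two modules Kubernetes setups typically load at this step, which I'm assuming is what the k8s.conf file in the video contains.

```shell
# Modules to load at every boot: overlay (container filesystems) and
# br_netfilter (lets iptables see bridged traffic). Assumed to match the video.
sudo tee /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF

# Load them immediately, without rebooting
sudo modprobe overlay
sudo modprobe br_netfilter
```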
Next, configure a couple of kernel parameters which are needed for our Kubernetes cluster. Before that, we need to create one .conf file in which we will list the kernel parameters we would like to configure, and then we will run the sysctl --system command to load those kernel parameters on the fly. So first create this file, and then run sudo sysctl --system.
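A sketch of that sysctl configuration. These three parameters are the ones Kubernetes installs commonly set at this step, which I'm assuming matches the file created in the video.

```shell
# Allow iptables to filter bridged traffic and enable IP forwarding,
# both required for pod networking. Assumed to match the video's file.
sudo tee /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply all sysctl settings from /etc/sysctl.d on the fly
sudo sysctl --system
```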
This output confirms that the kernel parameters we defined in this file have been applied successfully. Similarly, we need to run the same command on our control node.
Next, we need to configure the firewall. We need to allow a couple of ports on the control plane as well as on the worker node. On the control plane we need to allow the control plane ports, and on the worker node we need to allow the worker ports. So let's copy these firewall-cmd commands. Now copy the commands for the worker node. The output confirms that these ports have been allowed in the OS firewall.
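The video doesn't read the port numbers aloud, so here is a sketch using the port list from the official Kubernetes "Ports and Protocols" documentation, plus 8472/udp, which Flannel's VXLAN backend uses.

```shell
# Control plane node: API server, etcd, kubelet, controller-manager, scheduler
CP_PORTS="6443 2379-2380 10250 10257 10259"
for p in $CP_PORTS; do
  sudo firewall-cmd --permanent --add-port=${p}/tcp
done
sudo firewall-cmd --reload

# Worker node: kubelet plus the NodePort service range
WK_PORTS="10250 30000-32767"
for p in $WK_PORTS; do
  sudo firewall-cmd --permanent --add-port=${p}/tcp
done
# Flannel VXLAN traffic (open on both nodes)
sudo firewall-cmd --permanent --add-port=8472/udp
sudo firewall-cmd --reload
```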
CRI-O is a lightweight container runtime built specifically for Kubernetes. It is fast, secure, and production ready. First we need to define the CRI-O version, then define the CRI-O repository, install CRI-O using the dnf command, and finally enable the service and check its status. Let's first define the CRI-O version; we will be defining it using a variable named CRIO_VERSION. Set this CRI-O version on both the nodes. Next, create the repository for CRI-O. The default repositories of RHEL do not include the CRI-O package; that is why we are configuring the CRI-O repository here. Run the same command on the worker.
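A sketch of the CRI-O setup. The repository URL follows the official CRI-O packaging scheme on download.opensuse.org, but verify it against the CRI-O install docs for your version before relying on it.

```shell
# Pin the CRI-O minor version -- keep it in step with the Kubernetes version (v1.34)
CRIO_VERSION=v1.34

# Add the official CRI-O package repository (RHEL ships no cri-o package).
# URL follows the documented isv:/cri-o:/stable scheme -- verify for your release.
sudo tee /etc/yum.repos.d/cri-o.repo <<EOF
[cri-o]
name=CRI-O
baseurl=https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://download.opensuse.org/repositories/isv:/cri-o:/stable:/$CRIO_VERSION/rpm/repodata/repomd.xml.key
EOF

# Install, then start the runtime (run on both nodes)
sudo dnf install -y cri-o
sudo systemctl enable --now crio
```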
Next, run systemctl enable --now crio and hit Enter; run the same command on the worker node. Now verify the CRI-O service status with the command systemctl status crio. The output confirms that the crio service is up and running on my control node. Let's verify it from the worker node as well. CRI-O is also running on my worker node.
Next, install the Kubernetes components: kubeadm, kubelet, and kubectl. Again, these components are not available in the default repositories of RHEL 10, so we need to configure the Kubernetes repository for them. We also need to make sure that whatever CRI-O version we have set, we set the same Kubernetes version. Set the Kubernetes version variable first, then use this command to create the Kubernetes repository.
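A sketch of the Kubernetes repository setup, following the pkgs.k8s.io layout from the official kubeadm install docs; double-check the URL for your version.

```shell
# Must match the CRI-O minor version set earlier
KUBERNETES_VERSION=v1.34

# Official community package repository (pkgs.k8s.io) -- verify against the
# kubeadm installation docs for your release.
sudo tee /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/$KUBERNETES_VERSION/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```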
Next, we need to run this dnf install command in order to install kubelet, kubeadm, and kubectl. So copy this command and run it on both the nodes. kubeadm is the component that bootstraps the cluster, kubelet runs on every node as a service, and kubectl is the command-line utility for interacting with your Kubernetes cluster. Finally, we will run the systemctl enable --now kubelet command.
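Those two steps sketched as commands; --disableexcludes=kubernetes lifts the exclude= guard written into the repo file so the packages can actually install.

```shell
# Install the three components on BOTH nodes; --disableexcludes lifts the
# exclude= guard in /etc/yum.repos.d/kubernetes.repo for this transaction.
sudo dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Start the kubelet now and at every boot
sudo systemctl enable --now kubelet
```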
Note that initializing the control plane is always done from the control node. We will be running the kubeadm init command to initialize our Kubernetes cluster, or in other words, to initialize our control plane. While initializing the control plane, we are instructing kubeadm to use this CIDR for the pod network, and the CRI-O socket flag confirms that we are using CRI-O as the container runtime. Hit Enter. It will take a couple of minutes depending upon your internet speed, because it will be pulling a couple of container images in the background.
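A sketch of that init command. 10.244.0.0/16 is Flannel's default pod CIDR and the socket path is CRI-O's standard location; both are assumptions about the exact flags used in the video.

```shell
# Run on the CONTROL node only.
# 10.244.0.0/16 is Flannel's default pod network; the socket path is CRI-O's
# standard endpoint -- both assumed to match the flags used in the video.
sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --cri-socket=unix:///var/run/crio/crio.sock
```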
In order to start interacting with the Kubernetes cluster, we need to execute these commands. In order to join any worker node to this Kubernetes cluster, we need to run the kubeadm join command printed at the end of the init output. So make a note of this command; you can copy it somewhere safe, because we will be using the same command from the worker node in order to join that node to the cluster. Let's first configure the kubeconfig for our local user so that the local user can start interacting with our Kubernetes cluster using kubectl commands.
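The kubeconfig commands are the standard ones that kubeadm init prints at the end of its output:

```shell
# Copy the admin kubeconfig into the local user's home (as printed by kubeadm init)
mkdir -p "$HOME/.kube"
sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"

# First contact with the cluster
kubectl get nodes
```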
The output shows that we are able to connect to our Kubernetes cluster. As of now, we have one node in this Kubernetes cluster, k8s-control, and its status is NotReady; this is also the Kubernetes version we are currently using on this node. To allow communication between pods hosted on different nodes, we need to install a CNI plug-in; CNI stands for Container Network Interface. In this case, we are going to install Flannel. We will be running a kubectl apply command to install Flannel. Run it on the control plane.
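A sketch of the Flannel install. The manifest URL is the one published in the flannel-io/flannel README, assumed to be the manifest applied in the video.

```shell
# Apply the Flannel manifest published in the flannel-io/flannel repository
# (assumed to be the manifest used in the video)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Watch the flannel pod come up
kubectl get pods -n kube-flannel
```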
This command will automatically fetch the required images for the Flannel controller. It will configure the required namespace, service accounts, RBAC, and other important things. All right, this output confirms that Flannel has been installed successfully, as the pod status is Running. Now try to run the kubectl get nodes command again; this time our control plane node's status should change to Ready.
Now let's try to join our worker node to this Kubernetes cluster. For that, we need to run the kubeadm join command we saved earlier. Copy this command and make sure you run it using sudo.
Great, the output confirms that this worker node has joined our Kubernetes cluster. Let's verify it by running the kubectl get nodes command. As of now the status is NotReady, but in a couple of seconds it will change to the Ready state, so let's wait for some time.
So we can say that our Kubernetes cluster is ready to host application workloads. Let's try to deploy one sample application based on nginx. Run the command kubectl create deployment, with the deployment name, let's say nginx, and the image nginx. This deployment will be deployed in my default namespace, so you can run the command kubectl get deploy to see it.
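Those two commands in full:

```shell
# Create a single-replica nginx deployment in the default namespace
kubectl create deployment nginx --image=nginx

# Confirm the deployment and its pod
kubectl get deploy
kubectl get pods
```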
All right. Let's expose this deployment with the type NodePort, and we'll try to access this application from outside of our Kubernetes cluster using the node port. For exposing the deployment, run the command kubectl expose deployment, with the deployment name nginx, --type=NodePort, and --port=80. Hit Enter. You will see a new service has been created with the name nginx and the type NodePort. So in my case the address is 192.168.1.97, then a colon, and then the node port.
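A sketch of the expose-and-test step. The jsonpath lookup for the node port is an added convenience (the video reads the port from the kubectl get svc output), and 192.168.1.97 is the control plane IP from the video.

```shell
# Expose port 80 of the nginx deployment on a random NodePort (30000-32767)
kubectl expose deployment nginx --type=NodePort --port=80
kubectl get svc nginx

# Convenience: extract the assigned node port instead of reading it by eye
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')

# 192.168.1.97 is the control plane IP used in the video -- use your node's IP
curl "http://192.168.1.97:${NODE_PORT}"
```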
This output confirms that we are able to see the default nginx web server page. It means we are able to access our application from outside of our Kubernetes cluster. That's all from this video tutorial. You have successfully installed Kubernetes on RHEL 10 with CRI-O, using one control plane and one worker node. This cluster is now ready for application deployments, Kubernetes learning and practice, and CI/CD DevOps labs. If you found this helpful, do not forget to like, subscribe, and share.
#Software
#ComputerScience
