January 17, 2024

Installing Keycloak on Kubernetes: A Comprehensive Tutorial

Introduction

In the ever-evolving landscape of application security, Keycloak emerges as a beacon of reliability. This article dives into the realm of Keycloak, exploring its capabilities and guiding you through the essentials. Brace yourself for a journey into the heart of robust authentication and authorization.

Understanding Keycloak

At its core, Keycloak is an open-source identity and access management solution. Developed by Red Hat, it empowers developers to secure their applications with ease. The beauty of Keycloak lies in its flexibility—it seamlessly integrates with various platforms, making it a versatile choice for authentication and authorization needs.

Key Features that Make Keycloak Shine

Single Sign-On (SSO)

Bid farewell to the hassle of remembering multiple passwords. Keycloak introduces a Single Sign-On (SSO) experience, allowing users to access multiple applications with just one set of credentials. Say hello to convenience and goodbye to password fatigue.

User Federation

Keycloak opens the door to user federation, enabling seamless integration with existing user databases. Whether it's LDAP, Active Directory, or social media logins, Keycloak harmonizes diverse user sources under one roof.

Multi-Factor Authentication (MFA)

In a world where security is paramount, Keycloak steps up with Multi-Factor Authentication. Add an extra layer of protection by incorporating factors like SMS, email, or authentication apps. Your fortress just got stronger.

Authorization Services

Fine-tune access control with Keycloak's robust authorization services. Define policies, manage roles, and ensure that users have precisely the right level of access. Security tailored to your application's needs.

The Keycloak Installation Ballet on Kubernetes

Prerequisites: Setting the Stage

Before you dance with Keycloak on Kubernetes, ensure your stage is set. Check Kubernetes compatibility, create a dedicated namespace, and establish persistent storage. A well-prepared stage ensures a flawless performance.
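For example, a minimal preparation, assuming you dedicate a namespace called keycloak to the installation, might look like this:

# Create a dedicated namespace for Keycloak (the name is just an example)
kubectl create namespace keycloak

# Confirm that a StorageClass is available for persistent volumes
kubectl get storageclass

# Check client and server versions to verify chart compatibility
kubectl version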

Step-by-Step Deployment with Helm

Enter Helm, the virtuoso of Kubernetes deployment. Learn the steps to deploy Keycloak effortlessly using Helm charts. It's a symphony of simplicity that transforms installation into an art.
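As a rough sketch, assuming you use the Bitnami Keycloak chart (chart names and value keys can differ between charts and versions), the deployment boils down to a few commands:

# Add the Bitnami repository and refresh the chart index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install Keycloak into the keycloak namespace (admin credentials are examples)
helm install keycloak bitnami/keycloak \
  --namespace keycloak \
  --set auth.adminUser=admin \
  --set auth.adminPassword='ChangeMe123!'

# Watch the pods come up
kubectl get pods -n keycloak -w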

Configuring Realms and Clients

Now that Keycloak graces your Kubernetes ensemble, it's time to tailor its performance. Dive into configuring realms and clients, molding Keycloak to fit your application's unique contours.
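For instance, you can create a realm and a client from the Keycloak admin CLI; the commands below are only a sketch, assuming a pod named keycloak-0 and the admin password set at install time, and the admin console UI achieves the same result:

# Open a shell in the Keycloak pod (the pod name depends on your release)
kubectl exec -it keycloak-0 -n keycloak -- bash

# Inside the pod: log the admin CLI in against the local server.
# kcadm.sh lives under the Keycloak bin/ directory; the exact path
# depends on the image (e.g. /opt/bitnami/keycloak/bin on Bitnami).
kcadm.sh config credentials --server http://localhost:8080 \
  --realm master --user admin --password 'ChangeMe123!'

# Create a realm and a client for your application (names are examples)
kcadm.sh create realms -s realm=demo -s enabled=true
kcadm.sh create clients -r demo -s clientId=my-app \
  -s 'redirectUris=["https://my-app.example.com/*"]'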

Fortifying Security

No ballet is complete without a strong finale. Explore the best practices for securing your Keycloak installation—configure SSL, implement robust authentication, and fortify authorization mechanisms.
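As one hedged example, if you expose Keycloak through an Ingress, you might terminate TLS there by creating a certificate secret and referencing it in the Ingress spec (the hostname and file names below are placeholders):

# Store your certificate and key as a TLS secret
kubectl create secret tls keycloak-tls --cert=tls.crt --key=tls.key -n keycloak

# Then reference the secret in your Ingress (or your chart's ingress values), e.g.:
#   spec:
#     tls:
#     - hosts: [sso.example.com]
#       secretName: keycloak-tls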

Troubleshooting Pas de Deux

Even the most graceful ballet encounters hiccups. Navigate through common issues like database connection glitches and configuration snags. Transform challenges into a dance of triumph.
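When something does go wrong, the standard Kubernetes tooling is the first place to look (the namespace and pod name below are examples):

# Check pod status, recent events, and logs
kubectl get pods -n keycloak
kubectl describe pod keycloak-0 -n keycloak
kubectl logs keycloak-0 -n keycloak

Database connection errors usually surface in the Keycloak logs, while configuration and scheduling problems show up in the events reported by describe.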

Conclusion: Applause for a Secure Future

Congratulations! You've waltzed through the installation of Keycloak on Kubernetes. With Keycloak as your partner, step into a future where security is not just a feature but a masterpiece.

FAQs: Unveiling Keycloak's Secrets

  1. Can Keycloak integrate with various user databases?
    Absolutely! Keycloak's user federation capabilities harmonize diverse user sources seamlessly.

  2. Why is Multi-Factor Authentication crucial in Keycloak?
    MFA adds an extra layer of security, fortifying your fortress against potential threats.

  3. How does Keycloak simplify access control?
    Keycloak's authorization services allow you to define precise policies, ensuring tailored access for users.

  4. Is Keycloak compatible with Kubernetes?
    Indeed! Ensure a smooth performance by checking Kubernetes compatibility before installation.

  5. What are the prerequisites for Keycloak installation on Kubernetes?
    Prepare your stage by checking compatibility, creating a namespace, and setting up persistent storage.

Dive into the world of Keycloak—an orchestra of security, simplicity, and versatility. Install Keycloak now and step into a future where authentication and authorization are not just features but an art.

December 15, 2023

How to Install Kubernetes on Ubuntu 20.04

WHAT IS KUBERNETES? 






Kubernetes is an open-source platform for automating deployment, scalability, and management of containerized applications. 

Designed to handle applications packaged in containers, Kubernetes provides a set of tools and services that enable automation and orchestration of various tasks related to the deployment, scaling, and operation of containerized applications in a production environment.


Some key features of Kubernetes include:


Container Orchestration: Kubernetes allows you to compose and manage applications consisting of multiple containers, simplifying the deployment and maintenance of applications.


Resource Management: Kubernetes can manage the allocation of computing resources (CPU, memory, etc.) for each container, ensuring efficient resource utilization.


Deployment Management: Kubernetes provides tools to manage the lifecycle of applications, including new deployments, version upgrades, and scaling down.


Service Discovery: Kubernetes enables communication between containers within an application, either within or across clusters, through a managed network.


Self-Healing: In case of container or node failures, Kubernetes can detect and replace containers or reschedule applications to other nodes to ensure service continuity.


Configuration Management: Kubernetes allows you to centrally store and manage application configurations, facilitating deployment and maintenance of configurations.


Automatic Scaling: Kubernetes can perform automatic scaling based on specific demands or workloads, allowing efficient resource utilization.


Kubernetes has become the foundation for managing containerized applications in various environments, from local data centers to public and hybrid clouds. Its success is evident in widespread industry adoption and strong community support.
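You will build a cluster from scratch later in this article, but as a quick taste of orchestration and scaling in action, here is roughly what they look like once a cluster is running (the deployment name and image are just examples):

# Create a deployment with three nginx replicas
kubectl create deployment web --image=nginx --replicas=3

# Scale it manually to five replicas
kubectl scale deployment web --replicas=5

# Or let Kubernetes scale it automatically based on CPU usage
# (requires a metrics server to take effect)
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80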


Prerequisites:

Before proceeding with the installation process, ensure that you have the following prerequisites:


  1. Ubuntu 20.04 / Ubuntu 22.04
  2. Minimum resources: 2 CPU cores, 8GB RAM, and sufficient disk space to accommodate Kubernetes components
  3. Static IP address
  4. SSH access
  5. Sudo privileges
  6. Stable internet connection


In this guide, you will learn how to install Kubernetes on Ubuntu 20.04 / 22.04 by following a step-by-step process.

 

LAB SETUP:

 

Hostname       IP Address     CPU       RAM     Disk
k8s-master1    10.20.30.35    2 cores   8 GB    100 GB for /, 100 GB for NFS
k8s-worker1    10.20.30.36    2 cores   8 GB    100 GB for /
k8s-worker2    10.20.30.37    2 cores   8 GB    100 GB for /
k8s-worker3    10.20.30.38    2 cores   8 GB    100 GB for /

 

Step 1 : Set the Hostname on Each Node

Use the hostnamectl command to set the hostname on each node:

sudo hostnamectl set-hostname "k8s-master1"   # on the master node

sudo hostnamectl set-hostname "k8s-worker1"   # on worker node 1

sudo hostnamectl set-hostname "k8s-worker2"   # on worker node 2

sudo hostnamectl set-hostname "k8s-worker3"   # on worker node 3

Then add the following hostname entries to the /etc/hosts file on each node:

10.20.30.35  k8s-master1

10.20.30.36  k8s-worker1

10.20.30.37  k8s-worker2

10.20.30.38  k8s-worker3

Step 2 : Add a Sudo User on All Nodes

Create a dedicated user (devopsgol in this example) and grant it sudo privileges on each node:

root@k8s-master1:~# adduser devopsgol

root@k8s-master1:~# sudo usermod -aG sudo devopsgol

root@k8s-master1:~# groups devopsgol

devopsgol : devopsgol sudo

root@k8s-master1:~# sudo su devopsgol

To run a command as administrator (user "root"), use "sudo <command>".

See "man sudo_root" for details.

devopsgol@k8s-master1:/root$ sudo su

root@k8s-master1:~#

Add the devopsgol user to the sudoers file on each node:

devopsgol@k8s-master1:~$ sudo visudo

 

Add the following entry:

 

# Allow members of group sudo to execute any command

%sudo   ALL=(ALL:ALL) ALL

devopsgol ALL=(ALL) NOPASSWD:ALL

Generate SSH keys as the devopsgol user:

devopsgol@k8s-master1:~$ ssh-keygen -t rsa

Generating public/private rsa key pair.

 

Enter file in which to save the key (/home/devopsgol/.ssh/id_rsa): Created directory '/home/devopsgol/.ssh'.

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/devopsgol/.ssh/id_rsa

Your public key has been saved in /home/devopsgol/.ssh/id_rsa.pub

The key fingerprint is:

SHA256:XKNH9qI46PbWtXLwzPZ71oBOdr+V3/FFR2rIQveMM74 devopsgol@k8s-master1

The key's randomart image is:

+---[RSA 3072]----+

|                 |

|                 |

|          * .   .|

|       . * = = o |

|        S + B.= o|

|     . ..o.+++o.o|

|    . o..* =.. *o|

|   .. ..o B ..o X|

|   ..o.  + .E+ .=|

+----[SHA256]-----+

Restrict permissions on the id_rsa.pub file:

devopsgol@k8s-master1:~$ sudo chmod 400 /home/devopsgol/.ssh/id_rsa.pub

Copy the SSH key to all worker nodes:

devopsgol@k8s-master1:~$ ssh-copy-id -i /home/devopsgol/.ssh/id_rsa.pub devopsgol@k8s-worker1

devopsgol@k8s-worker1's password:

 

Number of key(s) added: 1

 

devopsgol@k8s-master1:~$ ssh-copy-id -i /home/devopsgol/.ssh/id_rsa.pub devopsgol@k8s-worker2

devopsgol@k8s-worker2's password:

 

Number of key(s) added: 1

 

devopsgol@k8s-master1:~$ ssh-copy-id -i /home/devopsgol/.ssh/id_rsa.pub devopsgol@k8s-worker3

devopsgol@k8s-worker3's password:

 

Number of key(s) added: 1

 

 

 

Step 3 : Install Required Packages on All Nodes

Install the required packages on each node:

devopsgol@k8s-master1:~$ sudo apt-get update -y

devopsgol@k8s-master1:~$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg lsb-release

Step 4 : Configure Time Settings, Disable Swap, and Load Kernel Modules on All Nodes

Set the timezone on all nodes:

devopsgol@k8s-master1:~$ sudo timedatectl set-timezone Asia/Jakarta

Turn off swap on each node:

devopsgol@k8s-master1:~$ sudo swapoff -a

Comment out the swap entry in /etc/fstab and load the required kernel modules:

devopsgol@k8s-master1:~$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

 

devopsgol@k8s-master1:~$ sudo cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf

overlay

br_netfilter

EOF

 

devopsgol@k8s-master1:~$ sudo cat << EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-ip6tables = 1

EOF

 

devopsgol@k8s-master1:~$ sudo modprobe overlay

devopsgol@k8s-master1:~$ sudo modprobe br_netfilter

devopsgol@k8s-master1:~$ sudo sysctl --system

Step 5 : Install and Configure containerd on All Nodes

Install the containerd runtime on all nodes by running the following command:

devopsgol@k8s-master1:~$ sudo apt-get update && sudo apt-get install -y containerd

Configure containerd using the following commands:

devopsgol@k8s-master1:~$ sudo mkdir -p /etc/containerd

devopsgol@k8s-master1:~$ sudo containerd config default | sudo tee /etc/containerd/config.toml

devopsgol@k8s-master1:~$ sudo systemctl restart containerd

devopsgol@k8s-master1:~$ sudo systemctl status containerd
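One optional tweak, not part of the original steps but commonly needed when the kubelet runs with the systemd cgroup driver, is to enable SystemdCgroup in the generated config; skip it if your setup works without it:

devopsgol@k8s-master1:~$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

devopsgol@k8s-master1:~$ sudo systemctl restart containerd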

Step 6 : Install kubectl, kubelet, and kubeadm on All Nodes

Install the Kubernetes packages on every node (master and workers) using the following commands.

devopsgol@k8s-master1:~$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

devopsgol@k8s-master1:~$ sudo echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

devopsgol@k8s-master1:~$ sudo apt-get update -y

devopsgol@k8s-master1:~$ sudo apt-get -y install kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
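Optionally, pin these packages so an unattended upgrade does not move the cluster to a different Kubernetes version behind your back:

devopsgol@k8s-master1:~$ sudo apt-mark hold kubelet kubeadm kubectl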

Step 7 : Initialize the Kubernetes Cluster with kubeadm

Log in to the master node (k8s-master1) and run "kubeadm init" to bootstrap the Kubernetes control plane.

devopsgol@k8s-master1:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint "k8s-master1:6443"  --upload-certs

 Once you have successfully deployed the Kubernetes cluster, you will see output like the one below.  

Your Kubernetes control-plane has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

Alternatively, if you are the root user, you can run:

 

  export KUBECONFIG=/etc/kubernetes/admin.conf

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:

 

  kubeadm join k8s-master1:6443 --token laivwy.65jz7mnhs5jsbim1 \

        --discovery-token-ca-cert-hash sha256:63f10fcfcce6b0c3d42d975cc8e368c595a9f186c84652f1e36f0860b5d4b7e0 \

        --control-plane

 

Then you can join any number of worker nodes by running the following on each as root:

 

sudo kubeadm join k8s-master1:6443 --token laivwy.65jz7mnhs5jsbim1 \

        --discovery-token-ca-cert-hash sha256:63f10fcfcce6b0c3d42d975cc8e368c595a9f186c84652f1e36f0860b5d4b7e0

After the initialization process completes, you'll see a message containing a 'kubeadm join' command. Save this command; we'll use it later to add worker nodes to the cluster (if you lose it, you can regenerate it as shown at the end of this step).

To interact with the cluster as a regular user, run the following commands; they appear in the kubeadm init output above, so you can simply copy and paste them.

devopsgol@k8s-master1:~$   mkdir -p $HOME/.kube

devopsgol@k8s-master1:~$   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

devopsgol@k8s-master1:~$   sudo chown $(id -u):$(id -g) $HOME/.kube/config

devopsgol@k8s-master1:~$ sudo su

root@k8s-master1:/home/devopsgol# cd

root@k8s-master1:~#   export KUBECONFIG=/etc/kubernetes/admin.conf
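If you did not save the join command printed by kubeadm init, you can regenerate it at any time from the master node:

devopsgol@k8s-master1:~$ sudo kubeadm token create --print-join-command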

Step 8 : Add Worker Nodes to the Kubernetes Cluster

Now you can add worker nodes to the Kubernetes cluster using the kubeadm join command that you saved in Step 7. 

devopsgol@k8s-worker3:~$ sudo kubeadm join k8s-master1:6443 --token laivwy.65jz7mnhs5jsbim1 \

        --discovery-token-ca-cert-hash sha256:63f10fcfcce6b0c3d42d975cc8e368c595a9f186c84652f1e36f0860b5d4b7e0

Now verify the node status from the master node by running "kubectl get nodes":

devopsgol@k8s-master1:~$ kubectl get nodes

NAME             STATUS     ROLES           AGE     VERSION

k8s-master1   NotReady   control-plane   5m17s   v1.26.0

k8s-worker1   NotReady   <none>          2m54s   v1.26.0

k8s-worker2   NotReady   <none>          78s     v1.26.0

k8s-worker3   NotReady   <none>          49s     v1.26.0

devopsgol@k8s-master1:~$

 

Step 9 : Install the Flannel CNI

 Now, install Flannel as the Container Network Interface (CNI) for pod communication.  

devopsgol@k8s-master1:~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

namespace/kube-flannel created

clusterrole.rbac.authorization.k8s.io/flannel created

clusterrolebinding.rbac.authorization.k8s.io/flannel created

serviceaccount/flannel created

configmap/kube-flannel-cfg created

daemonset.apps/kube-flannel-ds created

 Ensure the Kubernetes cluster has a ready status.

devopsgol@k8s-master1:~$ kubectl get nodes

NAME             STATUS   ROLES           AGE     VERSION

k8s-master1   Ready    control-plane   2m56s   v1.26.0

k8s-worker1   Ready    <none>          110s    v1.26.0

k8s-worker2   Ready    <none>          76s     v1.26.0

k8s-worker3   Ready    <none>          43s     v1.26.0

 

devopsgol@k8s-master1:~$ kubectl get pods -A

NAMESPACE      NAME                                     READY   STATUS    RESTARTS   AGE

kube-flannel   kube-flannel-ds-8jbbx                    1/1     Running   0          36s

kube-flannel   kube-flannel-ds-gdrds                    1/1     Running   0          36s

kube-flannel   kube-flannel-ds-srmw4                    1/1     Running   0          36s

kube-flannel   kube-flannel-ds-w8sr7                    1/1     Running   0          36s

kube-system    coredns-787d4945fb-phmxq                 1/1     Running   0          2m44s

kube-system    coredns-787d4945fb-qrhdj                 1/1     Running   0          2m44s

kube-system    etcd-an-k8s-master1                      1/1     Running   0          3m

kube-system    kube-apiserver-k8s-master1               1/1     Running   0          2m57s

kube-system    kube-controller-manager-k8s-master1      1/1     Running   0          2m57s

kube-system    kube-proxy-5wlmd                         1/1     Running   0          115s

kube-system    kube-proxy-bxngh                         1/1     Running   0          81s

kube-system    kube-proxy-qd7q4                         1/1     Running   0          48s

kube-system    kube-proxy-strsl                         1/1     Running   0          2m44s

kube-system    kube-scheduler-k8s-master1               1/1     Running   0          2m59s

 

Step 10 : Dynamic Volume Provisioning (NFS)

Kubernetes needs persistent storage for pod data. You could use local storage under / on each worker node, but for easier management and troubleshooting this guide uses NFS for dynamic persistent volumes. Install the NFS server on k8s-master1 with the following commands:

devopsgol@k8s-master1:~$ sudo apt install nfs-kernel-server -y

devopsgol@k8s-master1:~$ sudo mkdir -p /nfs-devopsgol

devopsgol@k8s-master1:~$ sudo mount /dev/vdb /nfs-devopsgol

devopsgol@k8s-master1:~$  sudo chown nobody:nogroup /nfs-devopsgol

devopsgol@k8s-master1:~$  sudo chmod -R 777  /nfs-devopsgol

devopsgol@k8s-master1:~$ sudo nano /etc/exports

/nfs-devopsgol 10.20.30.0/24(rw,sync,no_subtree_check)

devopsgol@k8s-master1:~$ sudo exportfs -a

devopsgol@k8s-master1:~$ sudo systemctl restart nfs-server.service

devopsgol@k8s-master1:~$ sudo df -h /nfs-devopsgol

Filesystem      Size  Used Avail Use% Mounted on

/dev/vdb         98G   61M   93G   1% /nfs-devopsgol

 

 After successfully setting up the NFS server, install the NFS client on all worker nodes.

devopsgol@k8s-worker1:~$ sudo apt install nfs-common -y

devopsgol@k8s-worker1:~$ sudo mkdir -p /mnt/devops

devopsgol@k8s-worker1:~$ sudo mount k8s-master1:/nfs-devopsgol /mnt/devops

devopsgol@k8s-worker1:~$ df -h /mnt/devops

Filesystem                     Size  Used Avail Use% Mounted on

k8s-master1:/nfs-devopsgol   98G   60M   93G   1% /mnt/devops

 

Configure the NFS Storage Class via Helm

With the NFS server configured on the master node, you can now set up the storage class in Kubernetes using Helm.

Install Helm v3 on the Kubernetes cluster using the commands below:

devopsgol@k8s-master1:~$ sudo wget -O helm.tar.gz https://get.helm.sh/helm-v3.13.0-rc.1-linux-amd64.tar.gz

sudo tar -zxvf helm.tar.gz

linux-amd64/

linux-amd64/LICENSE

linux-amd64/helm

linux-amd64/README.md

 

devopsgol@k8s-master1:~$ sudo mv linux-amd64/helm /usr/local/bin/helm

Add the nfs-subdir-external-provisioner Helm repository and install the chart:

devopsgol@k8s-master1:~$ helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

"nfs-subdir-external-provisioner" has been added to your repositories

 

devopsgol@k8s-master1:~$ helm repo list

NAME                            URL

nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

 

devopsgol@k8s-master1:~$ helm install nfs-client nfs-subdir-external-provisioner/nfs-subdir-external-provisioner  --set nfs.server=10.20.30.35  --set nfs.path=/nfs-devopsgol

NAME: nfs-client

LAST DEPLOYED: Fri Dec 15 21:57:53 2023

NAMESPACE: default

STATUS: deployed

REVISION: 1

TEST SUITE: None

 

devopsgol@k8s-master1:~$ helm  list -A

NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                                   APP VERSION

nfs-client      default         1               2023-12-15 21:57:53.554090884 +0700 WIB deployed        nfs-subdir-external-provisioner-4.0.18  4.0.2

 

devopsgol@k8s-master1:~$ kubectl get  pods -A

NAMESPACE      NAME                                                          READY   STATUS    RESTARTS   AGE

default        nfs-client-nfs-subdir-external-provisioner-67c68fc688-2cm2p   1/1     Running   0          11s

 

devopsgol@k8s-master1:~$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

storageclass.storage.k8s.io/nfs-client patched

 

devopsgol@k8s-master1:~$ kubectl get  sc -A

NAME                   PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE

nfs-client (default)   cluster.local/nfs-client-nfs-subdir-external-provisioner   Delete          Immediate           true                   111s

devopsgol@an-k8s-master1:~$

 

 

Step 11 : Ingress Nginx Controller

 You can install the NGINX Ingress using the following command.

devopsgol@k8s-master1:~$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

namespace/ingress-nginx created

serviceaccount/ingress-nginx created

configmap/ingress-nginx-controller created

clusterrole.rbac.authorization.k8s.io/ingress-nginx created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created

role.rbac.authorization.k8s.io/ingress-nginx created

rolebinding.rbac.authorization.k8s.io/ingress-nginx created

service/ingress-nginx-controller-admission created

service/ingress-nginx-controller created

deployment.apps/ingress-nginx-controller created

ingressclass.networking.k8s.io/nginx created

validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

serviceaccount/ingress-nginx-admission created

clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created

clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

role.rbac.authorization.k8s.io/ingress-nginx-admission created

rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created

job.batch/ingress-nginx-admission-create created

job.batch/ingress-nginx-admission-patch created

 

 

devopsgol@k8s-master1:~$ kubectl get all -n ingress-nginx

NAME                                           READY   STATUS      RESTARTS   AGE

pod/ingress-nginx-admission-create-xm5bs       0/1     Completed   0          27s

pod/ingress-nginx-admission-patch-vrdwh        0/1     Completed   0          27s

pod/ingress-nginx-controller-8b5588f6c-zjx7m   1/1     Running     0          27s

 

NAME                                         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE

service/ingress-nginx-controller             LoadBalancer   10.102.159.229   <pending>     80:32023/TCP,443:31268/TCP   27s

service/ingress-nginx-controller-admission   ClusterIP      10.106.78.153    <none>        443/TCP                      27s

 

NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE

deployment.apps/ingress-nginx-controller   1/1     1            1           27s

 

NAME                                                 DESIRED   CURRENT   READY   AGE

replicaset.apps/ingress-nginx-controller-8b5588f6c   1         1         1       27s

 

NAME                                       COMPLETIONS   DURATION   AGE

job.batch/ingress-nginx-admission-create   1/1           6s         27s

job.batch/ingress-nginx-admission-patch    1/1           7s         27s

devopsgol@k8s-master1:~$

 

Step 12 : Create Pods to Test Dynamic NFS Provisioning

Create a PersistentVolumeClaim that uses the NFS storage class:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: devops-volume
  annotations:
    # specify StorageClass name
    volume.beta.kubernetes.io/storage-class: nfs-client
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # volume size
      storage: 5Gi

Apply the manifest using kubectl:

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl apply -f devops-pv.yaml

persistentvolumeclaim/devops-volume created

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl get  pv -A

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE

pvc-556c23cb-17d6-40d5-a139-9d80db9cba45   5Gi        RWO            Delete           Bound    default/devops-volume   nfs-client              8s

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl get  pvc -A

NAMESPACE   NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE

default     devops-volume   Bound    pvc-556c23cb-17d6-40d5-a139-9d80db9cba45   5Gi        RWO            nfs-client     18s

devopsgol@an-k8s-master1:~/test-pods-volumes$

Create a pod to test the dynamically provisioned persistent storage:

apiVersion: v1
kind: Pod
metadata:
  name: my-devops
spec:
  containers:
    - name: my-devops
      image: nginx
      ports:
        - containerPort: 80
          name: web
      volumeMounts:
      - mountPath: /usr/share/nginx/html
        name: nginx-pvc
  volumes:
    - name: nginx-pvc
      persistentVolumeClaim:
        # PVC name you created
        claimName: devops-volume

 

Apply it using kubectl:

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl apply -f my-devops.yaml

pod/my-devops created

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl get  pods -o wide

NAME                                                          READY   STATUS    RESTARTS   AGE   IP           NODE             NOMINATED NODE   READINESS GATES

my-devops                                                     1/1     Running   0          64s   10.244.1.4   an-k8s-worker1   <none>           <none>

nfs-client-nfs-subdir-external-provisioner-67c68fc688-2cm2p   1/1     Running   0          15m   10.244.2.2   an-k8s-worker2   <none>           <none>

devopsgol@an-k8s-master1:~/test-pods-volumes$

 

devopsgol@an-k8s-master1:~/test-pods-volumes$ kubectl exec my-devops -- df /usr/share/nginx/html

Filesystem                                                                                   1K-blocks  Used Available Use% Mounted on

an-k8s-master1:/nfs-devopsgol/default-devops-volume-pvc-556c23cb-17d6-40d5-a139-9d80db9cba45 102687744 61440  97367040   1% /usr/share/nginx/html

devopsgol@an-k8s-master1:~/test-pods-volumes$
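To confirm that the data really lands on the NFS export, you can write a test file through the pod and look for it on the NFS server (the provisioner names the directory after the namespace, PVC, and PV, so the exact path will differ in your cluster):

# Write a test file through the pod
kubectl exec my-devops -- sh -c 'echo "hello from nfs" > /usr/share/nginx/html/index.html'

# On the NFS server (k8s-master1), read it back from the provisioned directory
cat /nfs-devopsgol/default-devops-volume-pvc-*/index.html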

 

 

 

 

Step 13 : Create Services to Test the NGINX Ingress

Create a pod and expose it as a service for testing the NGINX Ingress:

devopsgol@k8s-master1:~/test-pods-volumes$ kubectl run http-web --image=httpd --port=80

 

devopsgol@k8s-master1:~/test-pods-volumes$ kubectl expose pod http-web --name=http-service --port=80 --type=LoadBalancer

 


devopsgol@an-k8s-master1:~$ kubectl get pods,service | grep http

pod/http-web                                                      1/1     Running   0          14m

service/http-service   LoadBalancer   10.96.95.114   <pending>     80:32334/TCP   14m

devopsgol@an-k8s-master1:~$

Create an Ingress manifest for the http-service created above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-devops1
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: devopsgol.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-service
            port:
              number: 80

Apply the Ingress manifest using kubectl:

devopsgol@an-k8s-master1:~$ kubectl apply -f  test-pods-volumes/ingress-devops.yaml

ingress.networking.k8s.io/my-devops1 created

Verify that you can access the application through the NGINX Ingress controller's NodePort service:

devopsgol@k8s-master1:~$ kubectl get  ingress -A

NAMESPACE   NAME         CLASS   HOSTS               ADDRESS          PORTS   AGE

default     my-devops    nginx   web.devopsgol.com   10.102.159.229   80      112m

default     my-devops1   nginx   devopsgol.com       10.102.159.229   80      8s

 

devopsgol@k8s-master1:~$ curl 10.20.30.35:32023 -H 'Host: devopsgol.com'

<html><body><h1>It works!</h1></body></html>
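Instead of overriding the Host header, you can also point the hostname at a node on your workstation and browse to it; any node IP works for the NodePort (the IP and port below match this lab and are examples):

# Resolve devopsgol.com to one of the cluster nodes
echo "10.20.30.35 devopsgol.com" | sudo tee -a /etc/hosts

# Then hit the ingress through its HTTP NodePort
curl http://devopsgol.com:32023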

 

Related Articles


Setup Jenkins on Kubernetes

How to Configure Email Notifications in Jenkins

How to Configure Slack Notifications in Jenkins

Conclusion

Alhamdulillah! You have successfully set up a Kubernetes cluster on Ubuntu 20.04/22.04.

Kubernetes indeed provides a robust foundation for deploying applications in containers, tailored to your specific needs.

If you have any questions about Kubernetes or anything else covered here, feel free to leave a comment. Good luck with your containerized applications!