Integration of Prometheus and Grafana and making their data persistent

[GIF: Prometheus and Grafana on Kubernetes (GIF credits: Daksh Jain)]

Let us start by understanding what Prometheus and Grafana are:

Prometheus:

Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud. Prometheus's main features are:

  • a multi-dimensional data model with time series data identified by metric name and key/value pairs
  • PromQL, a flexible query language to leverage this dimensionality
  • no reliance on distributed storage; single server nodes are autonomous
  • time series collection happens via a pull model over HTTP
  • pushing time series is supported via an intermediary gateway
  • targets are discovered via service discovery or static configuration
  • multiple modes of graphing and dashboarding support

To know more about Prometheus, see the official Prometheus documentation.

Grafana:

The Grafana project was started by Torkel Ödegaard in 2014 and has over the last couple of years become one of the most popular open-source projects on GitHub. It allows us to query, visualize, and alert on metrics and logs no matter where they are stored.

Grafana’s main features:

  • Grafana has a pluggable data-source model and comes bundled with rich support for many of the most popular time-series databases like Graphite, Prometheus, Elasticsearch, OpenTSDB, and InfluxDB.
  • It also has built-in support for cloud monitoring vendors like Google Stackdriver, Amazon CloudWatch, and Microsoft Azure, and SQL databases like MySQL and Postgres. Grafana is the only tool that can combine data from so many places into a single dashboard.

To know more about Grafana, see the official Grafana documentation.

We will pick up the rest of the required concepts as we proceed.

Let's start by creating the Dockerfile for Prometheus:

[Image: Prometheus Dockerfile]
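Since the Dockerfile itself is only shown as a screenshot, here is a minimal sketch of what such an image might contain. The base image, Prometheus version, and paths are assumptions, not the author's exact file:

```dockerfile
# Sketch of a Prometheus image (base image and version are assumptions)
FROM centos:latest

# Download and unpack a Prometheus release
RUN yum install -y wget tar && \
    wget -q https://github.com/prometheus/prometheus/releases/download/v2.19.0/prometheus-2.19.0.linux-amd64.tar.gz && \
    tar -xzf prometheus-2.19.0.linux-amd64.tar.gz && \
    mv prometheus-2.19.0.linux-amd64 /prometheus

WORKDIR /prometheus
EXPOSE 9090

# Because the binary is the entrypoint, the config file must be
# supplied as a container argument (see the Deployment's args later)
ENTRYPOINT ["./prometheus"]
```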

You can pull the Docker image I created:

docker pull shirsha30/prometheus:v1

Or create your own Docker image using the commands given below:

docker build -t imagename:v1 .
docker images | grep imagename
docker login -u username -p password
docker tag imagename:v1 username/imagename:v1
docker push username/imagename:v1

Now let's create a Service for it. To expose the pod to the outside world we use the NodePort type and specify the port we want to expose. Do not forget to mention the selector, as the same labels will be matched when we create the Deployment too.

[Image: svcprom.yml]
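The service manifest is only shown as a screenshot; a sketch of what svcprom.yml might look like follows. The resource name, labels, and nodePort value are assumptions:

```yaml
# svcprom.yml -- NodePort service for Prometheus (names/ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  selector:
    app: prometheus        # must match the pod labels in the Deployment
  ports:
    - port: 9090           # Prometheus's default port
      targetPort: 9090
      nodePort: 30000      # any free port in the 30000-32767 range
```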

Now we will create a PVC (PersistentVolumeClaim). A PVC is a request for storage by a user and is similar to a Pod: Pods consume node resources, while PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany; see AccessModes).

[Image: prompvc.yml]
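A sketch of prompvc.yml, based on the 20Gi / ReadWriteOnce requirements stated below; the claim name is an assumption:

```yaml
# prompvc.yml -- storage claim for Prometheus data (name is an assumption)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prom-pvc
spec:
  accessModes:
    - ReadWriteOnce        # access mode used in this article
  resources:
    requests:
      storage: 20Gi        # size requested in this article
```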

We have requested a size of 20Gi and the ReadWriteOnce access mode as per our requirement.

Now let's move on to creating the ConfigMap. A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.

We have to pass the configuration file to Prometheus: in the data section we put the contents of the Prometheus configuration file, and under targets we specify the IPs of the systems whose metrics we want to monitor.

[Image: configmapprom.yml]
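A sketch of configmapprom.yml. The ConfigMap name prom-script matches the `kubectl replace` output shown later in this article; the job name and target IPs are placeholders you would replace with your own:

```yaml
# configmapprom.yml -- Prometheus config wrapped in a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prom-script        # name as seen in the kubectl output later
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
    scrape_configs:
      - job_name: 'target-nodes'               # placeholder job name
        static_configs:
          - targets: ['192.168.0.10:9100']     # placeholder IPs to monitor
```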

Next we have to create the Deployment file for the Prometheus pods. Since the image I created starts the Prometheus binary through an entrypoint, it is mandatory to pass "--config.file=path_of_config_file" in args.

Keep in mind to add both the PVC and the ConfigMap as volume mounts.

[Image: deploymentprom.yml]
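A sketch of deploymentprom.yml tying the pieces together: the image from earlier, the mandatory --config.file argument, and both the PVC and ConfigMap volumes. Mount paths and resource names other than the image are assumptions:

```yaml
# deploymentprom.yml -- Deployment mounting both the PVC and the ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus            # same selector as the service
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: shirsha30/prometheus:v1
          args:
            - "--config.file=/etc/prometheus/prometheus.yml"  # mandatory
          volumeMounts:
            - name: prom-storage
              mountPath: /prometheus/data    # TSDB path is an assumption
            - name: prom-config
              mountPath: /etc/prometheus
      volumes:
        - name: prom-storage
          persistentVolumeClaim:
            claimName: prom-pvc
        - name: prom-config
          configMap:
            name: prom-script
```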

Let us move on to creating a kustomization file.

Kustomize is a standalone tool to customize Kubernetes objects through a kustomization file. It deploys everything for us in one go, without having to create the Deployments, Services, PVCs, etc. separately.

[Image: kustomization.yml]
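A sketch of the kustomization file, listing the manifests by the file names used in this article (the resources are applied together in one go):

```yaml
# kustomization.yml -- ties all the Prometheus manifests together
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - svcprom.yml
  - prompvc.yml
  - configmapprom.yml
  - deploymentprom.yml
```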
kubectl create -k .

The . is used because the kustomization file is in the current directory. If not, you can provide the kustomization_directory instead.


Initially it will take some time for the pods to get ready, as they need to pull the image from Docker Hub.

Since the service is exposed, we can access Prometheus at minikube_ip:nodePort.

We can check the targets as shown below and verify it from our configmapprom.yml file.


Now, to check whether our data is persistent:

We update our ConfigMap to add new targets, and then delete the old pods.

>kubectl replace -f .\configmapprom.yml
configmap/prom-script replaced
>kubectl delete pods --all
pod "prometheus-69b6b75b7f-svrwj" deleted
>kubectl get all

We see that the new targets come up but the old ones also remain; this shows persistence. Our requirement is therefore fulfilled.

Moving on to Grafana:

Let's start by creating the Docker image for Grafana:

In the Dockerfile I have mentioned the path /var/lib/grafana explicitly so that the data gets stored there and I can mount the PVC on this folder. Even if this is not clear yet, stay with it and read till the end; the concept will fall into place.

[Image: Grafana Dockerfile]
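Again the Dockerfile is only a screenshot, so here is a sketch of what it might contain. The base image, Grafana version, and start command are assumptions; the important part is that /var/lib/grafana is declared explicitly:

```dockerfile
# Sketch of a Grafana image (base image and version are assumptions)
FROM centos:latest

# Install Grafana from the official RPM
RUN yum install -y https://dl.grafana.com/oss/release/grafana-7.0.0-1.x86_64.rpm

# Dashboards, users, and settings live here -- the PVC will be mounted on it
VOLUME /var/lib/grafana
EXPOSE 3000

CMD ["grafana-server", "--homepath=/usr/share/grafana"]
```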

You can use the same Docker image created by me, or create your own Docker image as shown in the beginning.

docker pull shirsha30/grafana:v1

Now let's create the service file with the NodePort type, specifying the port we want to expose to the outside world. Be careful with the selector, as it will be used again when creating the Deployment.

[Image: svcgrafana.yml]
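A sketch of svcgrafana.yml, mirroring the Prometheus service; names and the nodePort value are assumptions:

```yaml
# svcgrafana.yml -- NodePort service for Grafana (names/ports are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana           # must match the Deployment's pod labels
  ports:
    - port: 3000           # Grafana's default port
      targetPort: 3000
      nodePort: 30001      # any free port in the 30000-32767 range
```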

Then we create the PVC for persistent storage, so that the dashboards created in Grafana survive even if the pods are deleted for some reason.

[Image: pvcgrafana.yml]
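A sketch of pvcgrafana.yml; the claim name and size are assumptions, since the screenshot isn't reproduced here:

```yaml
# pvcgrafana.yml -- storage claim for Grafana data (name/size are assumptions)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```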

Now we create the Deployment file, which will manage the pods; its ReplicaSet maintains the desired number of pods.

The volume is mounted on the folder in which Grafana dashboards are saved, i.e. /var/lib/grafana, which is the path mentioned in the Dockerfile we created earlier. We could also do the same using a ConfigMap, as we did in the case of Prometheus.

I hope it is now clear why we set the /var/lib/grafana path explicitly in the Dockerfile.

[Image: deploygrafana.yml]
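A sketch of deploygrafana.yml, mounting the PVC on /var/lib/grafana; resource names other than the image and mount path are assumptions:

```yaml
# deploygrafana.yml -- Deployment mounting the PVC on Grafana's data path
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana               # same selector as the service
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana
          image: shirsha30/grafana:v1
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana   # path fixed in the Dockerfile
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-pvc
```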

Finally, we create the kustomization file for Grafana, in which we list the files in the order we want them applied.

[Image: kustomization.yml]
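A sketch of the Grafana kustomization file, using the file names from this article:

```yaml
# kustomization.yml -- ties all the Grafana manifests together
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - svcgrafana.yml
  - pvcgrafana.yml
  - deploygrafana.yml
```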

Run the kustomization file with:

kubectl create -k .

You can go to minikube_ip:nodePort and create a dashboard of your choice. For your reference, I have created one.

[Image: The login page]
[Image: Creating a dashboard]

Now we see that even after a pod is deleted, a new one is launched automatically; this is because we used a Deployment to launch the pods. Let's check whether our dashboard survived: we refresh the page and see that the dashboard we created earlier is still there.

[Image: Showing persistence of the dashboard]

We have finally achieved our goal of making the data of Prometheus and Grafana persistent by using Kubernetes.

If you have any doubts, you can comment in the post or connect with me on LinkedIn.

The GitHub repo with all the code:

