Setting up Loki & Grafana in Kubernetes

Loki & Grafana are quickly becoming some of the best open source logging/dashboarding solutions out there. With the help of Helm and Kubernetes you can get this stack up and running in just a few minutes.

Prerequisites

  • A functional Kubernetes cluster or Minikube instance. If you want to run this locally and don’t have Minikube set up yet, check out this guide to get Minikube installed quickly.
  • Helm installed. If you need a walkthrough, check out this article on installing Helm.

Installing/Setting up Loki

1. The first step is to add the Grafana repository to Helm, since Grafana is the current maintainer of the Loki Helm chart.

$ helm repo add grafana https://grafana.github.io/helm-charts && helm repo update
"grafana" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
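
If you want to double-check that the charts are now available, a quick search against the new repo should list loki, promtail, and grafana among others (exact versions will vary):

$ helm search repo grafana/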

2. Create a namespace for Loki to run in.

$ kubectl create ns loki
namespace/loki created

3. Create a loki-values.yaml file to set the options we want for the Loki Helm chart. There are many options you can set in the official Loki Helm chart values file. For simplicity we’re just going to make sure Loki gets a 2Gi persistent volume to survive restarts, and only retains logs for one month (720 hours). If you’re running Loki in production you will want to increase the amount of storage and tune the retention period to your needs.

persistence:
  enabled: true
  size: 2Gi
  
config:
  table_manager:
    retention_deletes_enabled: true
    retention_period: 720h

4. Helm install Loki into the loki namespace, using your loki-values.yaml file to override the chart’s default values.

$ helm upgrade --install loki --namespace=loki -f loki-values.yaml grafana/loki
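
Before testing the endpoint, you can make sure the Loki pod has come up and is ready (exact pod names will differ in your cluster):

$ kubectl get pods --namespace loki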

Loki should now be installed and running. To verify, let’s set up a temporary port-forwarding rule and try to hit Loki on port 3100. If you see the `{"values":["__name__"]}` block as a response, Loki is working as expected.

$ kubectl --namespace loki port-forward service/loki 3100 &
Forwarding from 127.0.0.1:3100 -> 3100
Forwarding from [::1]:3100 -> 3100
$ curl http://127.0.0.1:3100/api/prom/label
Handling connection for 3100
{"values":["__name__"]}

5. Helm install Promtail. Promtail is the agent that ships container logs to Loki. The one value we have to override at install time is Loki’s service name, to tell Promtail where to ship the logs. You might want to spin up another values file in the future, but for now we’ll just use Helm’s `--set` flag to override that setting.

$ helm upgrade --install --namespace=loki promtail grafana/promtail --set "loki.serviceName=loki"
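
Promtail runs as a DaemonSet, so one pod should land on every node. A quick way to confirm it deployed is to check that the desired and ready counts match:

$ kubectl get daemonset --namespace loki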

Setting up Grafana

1. Since we already added the Grafana repository in the Loki section, we can skip right to creating our grafana-values.yaml file. You’re welcome to explore the other options that exist in the official Grafana Helm chart values file, but just like with Loki we’ll just add a couple of lines to make sure we have persistent storage.

persistence:
  enabled: true
  size: 1Gi

2. Time to Helm install Grafana referencing our values file to ensure persistent storage is enabled.

$ helm upgrade --install grafana --namespace=loki -f grafana-values.yaml grafana/grafana

3. With Grafana installed we can run the following command to get the admin password for the Grafana instance.

$ kubectl get secret --namespace loki grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
4XQg333btghDj8zi7qNssb4fJenwa0zhykMNW

4. Forward port 3000 locally to the Grafana instance.

$ export POD_NAME=$(kubectl get pods --namespace loki -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}"); kubectl --namespace loki port-forward $POD_NAME 3000 &
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

5. Log in to Grafana by visiting http://localhost:3000/ in your browser. On the login page the username will be ‘admin’ and the password will be the output you got from step 3. Note that these can be set to easier-to-remember values by configuring the admin username and password in your grafana-values.yaml file, as sketched below.
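
For example, a grafana-values.yaml along these lines pins the credentials up front. A minimal sketch: the adminUser and adminPassword keys come from the Grafana chart’s values, and the password below is a placeholder, so pick your own.

persistence:
  enabled: true
  size: 1Gi

adminUser: admin
adminPassword: pick-something-memorable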

Connecting Loki and Grafana

Both services are running, but they don’t yet know how to talk to each other. Let’s set Loki up as a data source in Grafana to fix that.

1. In the Grafana UI on the left hand side you should see the icon of a ‘cog’. Hover over this icon and click ‘Data Sources’.

2. Click the large ‘Add Data Source’ button on this new page.

3. Find and click ‘Loki’ in the list of data sources.

4. Add the Loki URL `http://loki.loki.svc.cluster.local:3100` and click ‘Save and Test’ at the bottom of the page.

5. Click the ‘Explore’ button in the left hand column (it’s the icon that looks like a compass).

6. Throw a Loki query into the explore field, e.g. `{namespace="loki"}`, and logs should start appearing.
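
If you’d rather not click through the UI, the Grafana Helm chart can also provision the data source at install time. A minimal sketch using the chart’s `datasources` value (append this to grafana-values.yaml and re-run the helm upgrade from the Grafana section):

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Loki
        type: loki
        url: http://loki.loki.svc.cluster.local:3100
        access: proxy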

Conclusion

Loki provides an easy path to get distributed logs into a central location. To get the most out of it, the next steps I’d recommend are learning the Loki query language (LogQL) and building some dashboards! Also, if you are using Kubernetes consistently you might find these articles useful as well.
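
To give you a head start with LogQL, a few illustrative queries (the `namespace` label is just the one we used above; use whatever labels Promtail attaches in your cluster):

  • `{namespace="loki"} |= "error"` shows only lines containing “error”
  • `{namespace="loki"} != "debug"` drops lines containing “debug”
  • `rate({namespace="loki"}[5m])` graphs the per-stream rate of log lines over five minutes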
