Setting up Loki & Grafana in Kubernetes


Loki & Grafana are quickly becoming some of the best open source logging/dashboarding solutions out there. With the help of Helm and Kubernetes you can get this stack up and running in just a few minutes.

Prerequisites

  • A functional Kubernetes cluster or Minikube instance. If you want to set this up locally and don't have Minikube set up yet, check out this guide to get Minikube installed quickly.
  • Helm installed. Again, if you need an article to help you, check out this article on installing Helm.

Installing/Setting up Loki

  1. The first step is to add the Grafana repository to Helm. This is necessary since Grafana is the current maintainer for the Loki Helm chart.
$ helm repo add grafana https://grafana.github.io/helm-charts && helm repo update
"grafana" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "grafana" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈

2. Create a namespace for Loki to run in.

$ kubectl create ns loki
namespace/loki created

3. Create a loki-values.yaml file to set the options we want for the Loki Helm chart. There are several options you can set in the official Loki Helm Chart values file. For simplicity we're just going to make sure that Loki gets a 2GB persistent volume to survive restarts, and only retains logs for 1 month (720 hours). If you're truly running Loki in production you will want to increase the amount of storage and tune the retention period to your needs.

persistence:
  enabled: true
  size: 2Gi
config:
  table_manager:
    retention_deletes_enabled: true
    retention_period: 720h

4. Helm install Loki in the Loki namespace using your loki-values.yaml file to override the chart’s default values.

$ helm upgrade --install loki --namespace=loki -f loki-values.yaml grafana/loki

Loki should now be installed and running. To verify it is, let's set up a temporary port-forwarding rule and try to hit Loki on port 3100. If you see a `{"values":["__name__"]}` block as a response, Loki is working as expected.

$ kubectl --namespace loki port-forward service/loki 3100 &
Forwarding from 127.0.0.1:3100 -> 3100
Forwarding from [::1]:3100 -> 3100
$ curl http://localhost:3100/api/prom/label
Handling connection for 3100
{"values":["__name__"]}

5. Helm install Promtail. Promtail is the agent that ships each container's logs to Loki. The one value we have to override while installing is Loki's service name, to tell Promtail where to ship the logs. You might want to spin up another values file in the future, but for now we'll just use Helm's `--set` flag to override that setting.

$ helm upgrade --install --namespace=loki promtail grafana/promtail --set "loki.serviceName=loki"
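If you do move that override into a values file later, a minimal promtail-values.yaml sketch might look like the following; `loki.serviceName` is the same chart value we set on the command line above, so the two approaches are equivalent.

loki:
  serviceName: loki

You would then install with `-f promtail-values.yaml` instead of the `--set` flag.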

Setting up Grafana

  1. Since we already set the Grafana repository up in the Loki section we can skip right to the creation of our grafana-values.yaml file. You're welcome to explore the other Helm Chart options that exist in the official Grafana Helm values file, but just like with Loki we'll just add some rules to make sure we have persistent storage.

persistence:
  enabled: true
  size: 1Gi

2. Time to Helm install Grafana referencing our values file to ensure persistent storage is enabled.

$ helm upgrade --install grafana --namespace=loki -f grafana-values.yaml grafana/grafana

3. With Grafana installed we can run the following command to get the admin password for the Grafana instance.

$ kubectl get secret --namespace loki grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

4. Forward port 3000 locally to the Grafana instance.

$ export POD_NAME=$(kubectl get pods --namespace loki -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=grafana" -o jsonpath="{.items[0].metadata.name}"); kubectl --namespace loki port-forward $POD_NAME 3000 &
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

5. Log in to Grafana by visiting 'http://localhost:3000/' in your browser. On the login page the username will be 'admin' and the password will be the output you got from step 3. Note that these can be configured to easier-to-remember values by setting the admin username and password in your grafana-values.yaml file.
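For example, a minimal sketch of those two settings in grafana-values.yaml; `adminUser` and `adminPassword` are standard Grafana chart values, and the password shown here is a placeholder you should replace with your own.

adminUser: admin
adminPassword: <choose-a-strong-password>

Re-running the `helm upgrade --install` command from step 2 will apply the change.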

Connecting Loki and Grafana

Both services are running individually, but don’t know how to talk to each other yet. Let’s set Loki up as a data source to correct this.

1. In the Grafana UI, on the left-hand side you should see a 'cog' icon. Hover over this icon and click 'Data Sources'.

2. Click the large 'Add data source' button on this new page.

3. Find and click 'Loki' in the list of data sources.

4. Add the Loki URL `http://loki.loki.svc.cluster.local:3100` and click 'Save and Test' at the bottom of the page.

5. Click the 'Explore' button in the left-hand column; it's the icon that looks like a compass.

6. Throw a Loki query such as `{namespace="loki"}` into the Explore field and logs should start appearing.
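From there you can combine label matchers with line filters to narrow things down. A couple of example LogQL queries; the `namespace` and `app` label names assume Promtail's default Kubernetes scrape config, so adjust them if you've customized it.

{namespace="loki"}               # all logs from the loki namespace
{namespace="loki"} |= "error"    # only lines containing "error"
{app="grafana"} != "debug"       # grafana logs, excluding debug lines

The `|=` and `!=` operators filter log lines for substrings after the label matchers have selected the streams.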


Loki provides an easy path to get distributed logs into a central location. To get the most out of it, the next steps I'd recommend are learning the Loki query language (LogQL) and starting to build some dashboards!
