EFK — Log Monitoring Solution for Kubernetes Cluster Deployments

Cloud Guy
Mar 29, 2021

The EFK solution: ElasticSearch, Fluentd & Kibana. Together this trio forms an awesome and powerful stack. Each of these tools is great on its own, and when they come together their powers multiply. Just follow this guide to experience it.

There are multiple ways to deploy the EFK solution as a log monitoring framework for Kubernetes deployments. In this guide we follow a combination of both VM & K8s: ElasticSearch & Kibana will be deployed on a VM, while Fluentd will be deployed (as a DaemonSet) into the Kubernetes cluster.

Fluentd will be responsible for collecting logs from the applications (container logs) deployed into the Kubernetes cluster and will push them automatically to the ElasticSearch cluster (VM). Finally, Kibana (VM) will be configured to visualize the logs.

Deploy ElasticSearch & Kibana into VM

For this we just need a cloud VM (public or private cloud), a server, a desktop, or even a Docker container. The guide has been designed around CentOS (7 or 8, either is okay) as the base OS. The only other requirement is an active internet connection to download the required packages.

Prerequisites

Update the base OS to get the required base packages. These steps can be skipped if the OS is already up to date.

yum -y update
yum -y upgrade
yum -y install epel-release
reboot ## can be skipped for now if the SELinux change below is still to be done; the reboot after that step will cover both

Disable SELINUX

Update the /etc/selinux/config file and set SELINUX=disabled.
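For example, the change can be made non-interactively with sed (a minimal sketch; the goal is simply SELINUX=disabled in that file):

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config ## persist the change
setenforce 0 ## optional: relax the running system to permissive until the reboot below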

reboot

Disable IPV6

Edit/create the file “/etc/sysctl.d/70-ipv6.conf” with the below content:

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

Run the below command to apply the above config:

sysctl --load /etc/sysctl.d/70-ipv6.conf
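To confirm the settings were applied, read the values back (both should report 1):

sysctl net.ipv6.conf.all.disable_ipv6
sysctl net.ipv6.conf.default.disable_ipv6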

Set Timezone

This step is also optional; however, it is better to set the local timezone so the log timestamps are easier to read.

timedatectl set-timezone Asia/Kolkata ## use your local zone; Asia/Kolkata is the valid tz name for IST (there is no Asia/Delhi zone)
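The active zone can be verified with timedatectl:

timedatectl | grep "Time zone" ## should show the zone just set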

Install & Start Chronyd

yum -y install chrony
systemctl enable --now chronyd
systemctl restart chronyd
systemctl status chronyd
chronyc sources ## verify chrony is syncing from its time sources

Install OpenJDK

yum install java-1.8.0-openjdk
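A quick check that the JDK is installed and on the PATH:

java -version ## should report an openjdk 1.8.0 build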

ElasticSearch

Add required repository (Elasticsearch)

cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
yum clean all
yum makecache
yum repolist

Install Elasticsearch

yum -y install elasticsearch

Configure Elasticsearch

To configure ElasticSearch, the required config file needs to be updated. The default config file (/etc/elasticsearch/elasticsearch.yml) is heavily commented, so some careful editing will be required. Below are the basic configs that need to be enabled for this activity.

It is also better to keep a copy of the original config file.

cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.original

Edit file /etc/elasticsearch/elasticsearch.yml & add the below contents at the top of the file.

node.name: efk-node
discovery.zen.minimum_master_nodes: 1
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1"]
cluster.initial_master_nodes: ["efk-node"]

Here node.name should ideally be the local FQDN of the host where the installation is being done; whatever value is used must also appear in cluster.initial_master_nodes.
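If you are unsure of the host's FQDN, check it (and optionally set it) before editing the config; efk-node is just the example name used in this guide:

hostname -f ## prints the FQDN to use as node.name
hostnamectl set-hostname efk-node ## optional: rename the host to match the example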

Enable & Start Elasticsearch Service

systemctl daemon-reload
systemctl enable --now elasticsearch
systemctl restart elasticsearch
systemctl status elasticsearch

Verify Elasticsearch service & status

[root@efk-node ~]# netstat -tulpn | grep 9200
curl http://<NODEIP or HOSTNAME>:9200/_cluster/health?pretty
curl http://<NODEIP or HOSTNAME>:9200/_cat/nodes?v

Example output:

[Screenshot: ElasticSearch cluster health]
[Screenshot: node health of the ES cluster nodes]

We can now also see a few default (system) indices in the ES cluster, since no application data has been pushed into it yet.

[root@efk-node ~]# curl http://10.91.11.155:9200/_cat/indices?
green open .apm-custom-link gfWJq7axQ_SjZ2A473C2fQ 1 0 0 0 208b 208b
green open .kibana_task_manager_1 oRoC9ENOQ0yHTDTDlYTYMQ 1 0 5 17919 1.6mb 1.6mb
green open .apm-agent-configuration e6TLnDANSlmxaTqtDTwwkg 1 0 0 0 208b 208b
green open .kibana-event-log-7.10.2-000001 PfY7uiTpQgi82Y4O7NX_rA 1 0 2 0 11kb 11kb
green open .kibana_1 Kr3yasAITbOe6F4lJ0SYgA 1 0 13 6 2.1mb 2.1mb

Kibana

Install kibana

yum -y install kibana-7.9.2

Kibana was re-platformed after the 7.9 release, and as of this writing multiple plugins are not yet fully functional on the newer Kibana releases. So it is recommended to use version 7.9.2 if the Kibana version is not hard bound.
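If the VM will be yum-updated later, one way to keep Kibana pinned at 7.9.2 is the versionlock plugin (assuming the plugin package is available in your configured repos):

yum -y install yum-plugin-versionlock
yum versionlock add kibana ## locks the currently installed 7.9.2 build
yum versionlock list ## verify the lock is in place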

Configure Kibana

To configure Kibana, the required config file needs to be updated. The default config file (/etc/kibana/kibana.yml) is heavily commented, so some careful editing will be required. Below are the basic configs that need to be enabled for this activity.

It is also better to keep a copy of the original config file.

cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.original

Edit file /etc/kibana/kibana.yml & add the below contents at the top of the file.

server.host: "efk-node"
elasticsearch.hosts: ["http://localhost:9200"]

Enable & Start Kibana service

systemctl enable --now kibana
systemctl restart kibana
systemctl status kibana

Verify Kibana service & status

[root@efk-node ~]# curl http://10.91.11.155:5601/app/home#/
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/><meta name="viewport" content="width=device-width"/><title>Elastic</title><style>
@font-face {
font-family: 'Inter UI';
font-style: normal;
font-weight: 100;
src: url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff2') format('woff2'), url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff') format('woff');
}

Once both the ElasticSearch & Kibana services are found to be up & running fine (based on the curl results above), we can consider this part (the ElasticSearch & Kibana setup) done.

Deploy FluentD into Kubernetes

We will now deploy the Fluentd service into the Kubernetes cluster. The assumption is that some application is already deployed into the Kubernetes cluster. Once deployed, Fluentd will start pushing the application logs into the previously set up ElasticSearch-Kibana cluster.

A detailed guide on how to deploy Fluentd as a service into the Kubernetes cluster has been covered here:

One just needs to follow the README of the GitHub repository linked above.

Once Fluentd is deployed as a DaemonSet into the Kubernetes cluster, it will start pushing logs into the ElasticSearch cluster (the VM running ElasticSearch), and the same can be visualized using Kibana.
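A quick way to confirm the pipeline end to end; this sketch assumes the DaemonSet is named fluentd, runs in the kube-system namespace and writes logstash-* style indices (common defaults of the public Fluentd DaemonSet manifests), so adjust the names to match your deployment:

kubectl -n kube-system get daemonset fluentd ## one pod should be scheduled per node
kubectl -n kube-system get pods | grep fluentd ## all fluentd pods should be Running
curl "http://<NODEIP or HOSTNAME>:9200/_cat/indices?v" | grep logstash ## new log indices should appear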

Kibana then has to be configured with the proper index pattern for the log indices in order to visualize the logs; one approach is sketched below.
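This can be done from the Kibana UI (Stack Management > Index Patterns), or, as a sketch, via Kibana's saved objects API; the logstash-* pattern below assumes the default index naming of the Fluentd Elasticsearch output:

curl -X POST "http://<NODEIP or HOSTNAME>:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "logstash-*", "timeFieldName": "@timestamp"}}'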

[Screenshot: Kibana log visualization]

In the above example we have just used the Kubernetes control-plane logs for the Kibana visualization.
