EFK — Log Monitoring Solution for Kubernetes Cluster Deployments

EFK stands for Elasticsearch, Fluentd & Kibana. Each of these tools is powerful on its own, and together they form a complete and flexible log monitoring stack. Follow this guide to set it up end to end.

There are multiple ways to deploy the EFK stack as a log monitoring framework for Kubernetes deployments. This guide follows a hybrid combination of a VM and Kubernetes: Elasticsearch & Kibana are deployed on a VM, while Fluentd is deployed (as a DaemonSet) into the Kubernetes cluster.

Fluentd is responsible for collecting the container logs of the applications deployed in the Kubernetes cluster and pushing them automatically to the Elasticsearch cluster (on the VM); finally, Kibana (on the same VM) is configured to visualize the logs.
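As a sketch of how that pipeline is wired, a Fluentd output section pointing at the Elasticsearch VM typically looks like the following. This assumes the fluent-plugin-elasticsearch output plugin; the host value is a placeholder for your ES VM's address:

```
<match **>
  @type elasticsearch          # requires the fluent-plugin-elasticsearch plugin
  host <ES-VM-IP>              # placeholder: IP/FQDN of the VM running Elasticsearch
  port 9200
  logstash_format true         # writes daily logstash-YYYY.MM.DD indices, easy to match in Kibana
</match>
```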

Deploy ElasticSearch & Kibana into VM


yum -y update
yum -y upgrade
yum -y install epel-release
reboot ## optional; needed only if the kernel or SELinux updates require a restart



Disable IPv6

Create the file /etc/sysctl.d/70-ipv6.conf with the below contents:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

Run the below command to apply the above config:

sysctl --load /etc/sysctl.d/70-ipv6.conf

Set Timezone

timedatectl set-timezone Asia/Kolkata

Install & Start Chronyd

yum -y install chrony
systemctl enable --now chronyd
systemctl restart chronyd
systemctl status chronyd
chronyc sources

Install OpenJDK

yum -y install java-1.8.0-openjdk


Add required repository (Elasticsearch)

cat <<EOF | sudo tee /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

yum clean all
yum makecache
yum repolist

Install Elasticsearch

yum -y install elasticsearch

Configure Elasticsearch

It is also better to keep a copy of the original config file.

cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml.original

Edit file /etc/elasticsearch/elasticsearch.yml & add the below contents at the top of the file.

node.name: efk-node
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
http.port: 9200
discovery.seed_hosts: [""]
cluster.initial_master_nodes: ["efk-node"]

Here node.name should be the local FQDN of the host where the installation is being done. network.host: 0.0.0.0 binds Elasticsearch on all interfaces so that it is reachable from outside the VM (needed both for the curl checks below and for Fluentd running in the Kubernetes cluster).
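To find the FQDN to use for node.name, you can check it directly on the host:

```shell
# Print the host's FQDN; fall back to the short hostname if no FQDN is configured
hostname -f 2>/dev/null || hostname
```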

Enable & Start Elasticsearch Service

systemctl daemon-reload
systemctl enable --now elasticsearch
systemctl restart elasticsearch
systemctl status elasticsearch

Verify Elasticsearch service & status

[root@efk-node ~]# netstat -tulpn | grep 9200
curl http://<NODEIP or HOSTNAME>:9200/_cluster/health?pretty
curl http://<NODEIP or HOSTNAME>:9200/_cat/nodes?v

Example Output:

(Screenshots: Elasticsearch cluster health, and node health of the ES cluster nodes.)

We can also find a few default system indices in the ES cluster, even though no data has been pushed into it yet.

[root@efk-node ~]# curl http://<NODEIP or HOSTNAME>:9200/_cat/indices
green open .apm-custom-link gfWJq7axQ_SjZ2A473C2fQ 1 0 0 0 208b 208b
green open .kibana_task_manager_1 oRoC9ENOQ0yHTDTDlYTYMQ 1 0 5 17919 1.6mb 1.6mb
green open .apm-agent-configuration e6TLnDANSlmxaTqtDTwwkg 1 0 0 0 208b 208b
green open .kibana-event-log-7.10.2-000001 PfY7uiTpQgi82Y4O7NX_rA 1 0 2 0 11kb 11kb
green open .kibana_1 Kr3yasAITbOe6F4lJ0SYgA 1 0 13 6 2.1mb 2.1mb
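The dot-prefixed names above are Elasticsearch/Kibana system indices. To separate them from application indices later, a small awk filter works; the sketch below runs against sample lines mirroring the output above (column 3 of _cat/indices is the index name):

```shell
# Print only dot-prefixed (system) index names from a _cat/indices listing;
# the here-doc stands in for real `curl .../_cat/indices` output.
cat <<'EOF' | awk '$3 ~ /^\./ {print $3}'
green open .kibana_1 Kr3yasAITbOe6F4lJ0SYgA 1 0 13 6 2.1mb 2.1mb
green open myapp-logs-2021.01.01 abcd1234 1 0 100 0 1mb 1mb
EOF
# → .kibana_1
```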


Install kibana

yum -y install kibana-7.9.2

Kibana was re-platformed after the 7.9 release, and as of this writing several plugins are not yet fully functional on the newer Kibana releases. So version 7.9.2 is recommended unless you are bound to a specific Kibana version.

Configure Kibana

It is also better to keep a copy of the original config file.

cp /etc/kibana/kibana.yml /etc/kibana/kibana.yml.original

Edit file /etc/kibana/kibana.yml & add the below contents at the top of the file.

server.host: "efk-node"
elasticsearch.hosts: ["http://localhost:9200"]

Enable & Start Kibana service

systemctl enable --now kibana
systemctl restart kibana
systemctl status kibana

Verify Kibana service & status

[root@efk-node ~]# curl http://<NODEIP or HOSTNAME>:5601
<!DOCTYPE html><html lang="en"><head><meta charSet="utf-8"/><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"/><meta name="viewport" content="width=device-width"/><title>Elastic</title><style>
@font-face {
font-family: 'Inter UI';
font-style: normal;
font-weight: 100;
src: url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff2') format('woff2'), url('/ui/fonts/inter_ui/Inter-UI-Thin-BETA.woff') format('woff');

Once both the Elasticsearch & Kibana services are found up & running fine (based on the curl results above), this part (the Elasticsearch & Kibana setup) can be considered done.

Deploy FluentD into Kubernetes

A detailed guide on how to deploy Fluentd as a DaemonSet into the Kubernetes cluster has been covered here:

One just needs to follow the README guide of the GitHub repository linked above.

Once Fluentd is deployed as a DaemonSet into the Kubernetes cluster, it starts pushing logs to the Elasticsearch cluster (the VM running Elasticsearch), and they can then be visualized using Kibana.
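For reference, the commonly used fluent/fluentd-kubernetes-daemonset images are pointed at an external Elasticsearch through environment variables in the DaemonSet's container spec. A minimal sketch, assuming one of those images (the host value is a placeholder for your ES VM):

```yaml
# Container env for the fluentd DaemonSet (fluent/fluentd-kubernetes-daemonset images)
env:
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "<ES-VM-IP>"   # placeholder: IP/FQDN of the VM running Elasticsearch
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"
  - name: FLUENT_ELASTICSEARCH_SCHEME
    value: "http"
```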

One has to configure Kibana with the proper log index patterns to visualize the logs.

(Screenshot: Kibana log visualization)

In the above example we have used just the Kubernetes control-plane logs for the Kibana visualization.
