ELK Stack Docker

ELK is an acronym for Elasticsearch, Logstash, and Kibana, three tools that together form a complete analytics stack: Elasticsearch (a log search tool), Logstash (a data-routing and processing tool), and Kibana (a data visualization tool). Elasticsearch is a NoSQL database built on the Lucene search engine. Logstash is a transport pipeline used to populate Elasticsearch with data, and Kibana is a dashboard that works on top of Elasticsearch, providing data analysis through visualizations and dashboards. This blog walks you through running the ELK stack in Docker, covering Docker Compose, the Elasticsearch container, the Logstash container, the Kibana container, the NGINX container, and the Filebeat configuration. Before getting into those topics, let us look at Elasticsearch, Logstash, and Kibana in more detail.

What is Elasticsearch?

Elasticsearch lets you store, search, and analyze large volumes of data. It is commonly used as the underlying engine for applications with search requirements. Elasticsearch also acts as a NoSQL database and is based on the Lucene search engine. It provides easy management, simple deployment, and high reliability, and offers sophisticated queries for performing detailed analysis of the stored data.

What is Logstash?

Logstash acts as a data collection pipeline tool. It collects data inputs and feeds them into Elasticsearch. It gathers different kinds of data from various sources and makes them available for later use. Logstash can unify data from disparate sources and normalize it before sending it to your chosen destinations. Following are the three elements of a Logstash pipeline:

Input: Ingests log events from a source and passes them into the pipeline for processing.

Filter: A set of conditions and transformations applied to matching events, such as parsing them into structured fields.

Output: Sends the processed log or event to a destination such as Elasticsearch.

What is Kibana?

Kibana is the data visualization tool that completes the ELK Stack. It is used to visualize Elasticsearch documents and helps developers analyze them. Kibana dashboards provide responsive geospatial visualizations, graphs, and diagrams for exploring complex queries. Kibana is used to view, search, and interact with data stored in Elasticsearch indices. With Kibana, you can perform advanced data analysis and visualize the data in a variety of charts, maps, and tables.

Installation of Docker

If you are a new user, you may run into obstacles while installing and configuring the Elastic Stack. Automating the installation procedure with Docker greatly reduces the time and complexity involved. So let's go through the procedure, from installing Docker all the way to visualizing Apache logs in a Kibana dashboard.

Install Docker and Docker Compose

Apply the following set of commands to install Docker. Ignore this step if you have already set up the environment for Docker. 


$ yum install wget
$ wget -qO- https://get.docker.com/ | sh
$ systemctl enable docker.service
$ systemctl start docker.service
$ systemctl status docker.service
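
The script above installs Docker Engine but not docker-compose, which the rest of this guide relies on. One common way to get it is to download the binary from the Docker Compose GitHub releases page (the version number below is just an example from the same era as this stack; adjust as needed):

$ sudo curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ docker-compose --version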

Elasticsearch Container

Begin containerizing the elastic stack with elasticsearch. Create a root folder where every component of the elastic stack will live together.

$ mkdir ~/docker-elk


Navigate to the root folder of the elastic stack and create folders for elasticsearch and its associated configuration and storage.


$ cd ~/docker-elk
$ mkdir -p elasticsearch/{config,storage}
$ chown -R 1000:1000 elasticsearch/storage/


The config and storage folders will be used to define “docker volumes” in the docker-compose file at a later stage. A docker volume keeps a folder on the host machine attached to a folder in the container so that the two always remain in sync. The chown to 1000:1000 matches the UID and GID of the elasticsearch user inside the official image, so the container can write to the mounted storage folder.


Create a Dockerfile for elasticsearch; a Dockerfile holds all the commands needed to assemble an image.


$ cd ~/docker-elk/elasticsearch
$ vi Dockerfile

ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}


Define the version of the Elastic Stack that you want to containerize in the environment file.


$ vi ~/docker-elk/.env

ELK_VERSION=7.5.1


Now proceed by creating an elasticsearch configuration file in the config folder. Keeping the ES configuration file on the host machine makes it easy to tweak settings and mount the file into the container using a docker volume.


$ vi ~/docker-elk/elasticsearch/config/elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1  # deprecated and ignored in 7.x; discovery.type below is what matters
discovery.type: single-node
logger.level: DEBUG


All the files and folders needed to define the elasticsearch container through docker-compose are now ready. Proceed by creating a docker-compose file in the root of the project, which is ~/docker-elk.


$ vi ~/docker-elk/docker-compose.yml

version: '3'

services:

  elasticsearch:
    container_name: elasticsearch
    build:
      context: elasticsearch
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./elasticsearch/storage:/usr/share/elasticsearch/data:rw
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      - ELASTIC_PASSWORD="changeme"
      - ES_JAVA_OPTS=-Xmx256m -Xms256m
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - http.cors.allow-origin=*
    ulimits:
      memlock:
        soft: -1
        hard: -1
    networks:
      - elk

networks:
  elk:
    driver: bridge


Build the container using the following commands:


$ cd ~/docker-elk
$ docker-compose build elasticsearch


Start the container in detached mode by using the following command:

$ docker-compose up -d elasticsearch
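
If the elasticsearch container exits shortly after starting, a common cause on Linux hosts is the vm.max_map_count kernel setting, which Elastic recommends raising to at least 262144 when running Elasticsearch in Docker:

$ sysctl -w vm.max_map_count=262144
$ echo 'vm.max_map_count=262144' >> /etc/sysctl.conf   # persist the setting across reboots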


List the container using the following command:

$ docker ps -a


Ping the ES container from your host machine with the following command:

$ curl http://127.0.0.1:9200/

{
  "name" : "a85bd40e10de",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "cnGc-4uLSIWS-bFwr8ywug",
  "version" : {
    "number" : "7.5.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "3ae9ac9a93c95bd0cdc054951cf95d88e1e18d96",
    "build_date" : "2019-12-16T22:57:37.835892Z",
    "build_snapshot" : false,
    "lucene_version" : "8.3.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
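
You can also query the cluster health API to confirm the node is in a usable state (a single-node cluster will typically report a yellow or green status):

$ curl http://127.0.0.1:9200/_cluster/health?pretty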


To dig further for useful information, you can open a shell inside the elasticsearch container with the following command:


$ docker exec -it elasticsearch /bin/bash
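
From inside the container you can, for example, confirm that the mounted configuration and storage folders are in place; the paths below follow the official image layout used in the docker-compose volumes above:

$ cat /usr/share/elasticsearch/config/elasticsearch.yml
$ ls /usr/share/elasticsearch/data
$ exit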

Logstash Container

As before, create the following folders for logstash and its configuration settings in the root of the project.

  • config: This folder will hold logstash's system-wide configuration settings.
  • pipeline: This folder will hold a logstash pipeline configuration for each log source that you want to process.
  • logfile: This folder will hold log files to process, such as network logs, Apache logs, etc.

$ cd ~/docker-elk

$ mkdir -p logstash/{config,pipeline,logfile}


Create a Dockerfile for Logstash by applying the following instructions.


$ vi ~/docker-elk/logstash/Dockerfile

ARG ELK_VERSION
FROM docker.elastic.co/logstash/logstash-oss:${ELK_VERSION}
RUN logstash-plugin install logstash-input-beats
USER root
RUN mkdir -p /home/logstash/logfile
RUN chown -R logstash:logstash /home/logstash/logfile/


Let us now create a Logstash configuration file by implementing the following instructions.


$ vi ~/docker-elk/logstash/config/logstash.yml

http.host: "0.0.0.0"
path.config: /usr/share/logstash/pipeline


The Apache access and error logs are used to test the Logstash pipeline configuration. Filebeat will be used to stream Apache log events to logstash in real time.


Create a pipeline configuration for logstash that accepts Apache log events on port 5000 and, after applying the appropriate filters, pushes them to the elasticsearch container.


$ vi ~/docker-elk/logstash/pipeline/01.apache.conf

input {
  beats {
    port => 5000
    type => apache
  }
}

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}

output {
  if [type] == "apache" {
    elasticsearch {
      hosts => ["http://elasticsearch:9200"]
      index => "apache-combined-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
}
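
For illustration, the COMBINEDAPACHELOG grok pattern splits a typical access-log line like the made-up example below into structured fields such as clientip, verb, request, response, bytes, referrer, and agent, which then become searchable fields in the Elasticsearch index:

127.0.0.1 - - [16/Dec/2019:10:15:32 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0"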


All the necessary configuration settings for logstash are now in place, so we can define a service for it in docker-compose. Edit the docker-compose file and add the following content to it.


$ vi ~/docker-elk/docker-compose.yml

...
...
  logstash:
    container_name: logstash
    build:
      context: logstash
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ./logstash/pipeline:/usr/share/logstash/pipeline
    ports:
      - "5000:5000"
    networks:
      - elk
    depends_on:
      - elasticsearch
...
...


Build a Logstash container using the following command.


$ docker-compose build logstash


Run the Logstash container in the foreground (not detached) to view the logstash startup logs in the terminal, using the following command.


$ docker-compose up logstash


If everything works correctly, press CTRL+C and run the logstash container again, this time in detached mode. Use the following command.


$ docker-compose up -d logstash


List the container by applying the following command.

$ docker ps -a
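
Optionally, inspect the logstash logs; a successful startup should include a message indicating that the beats input is listening on port 5000:

$ docker logs logstash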

Kibana Container

To start containerizing Kibana, create a folder named config for it that will hold the Kibana configuration. Implement the following commands:


$ cd ~/docker-elk

$ mkdir -p kibana/config


Create a Dockerfile to assemble a Kibana image for the version defined in the .env file. Implement the following instructions.


$ vi ~/docker-elk/kibana/Dockerfile

ARG ELK_VERSION
FROM docker.elastic.co/kibana/kibana:${ELK_VERSION}


Edit and create a Kibana Configuration file by implementing the following instructions.


$ vi ~/docker-elk/kibana/config/kibana.yml

server.name: kibana
server.host: "0"
server.basePath: "/kibana"
elasticsearch.hosts: http://elasticsearch:9200
apm_oss.enabled: true
xpack.apm.enabled: true
xpack.apm.ui.enabled: true
logging.dest: stdout


Finally, append the Kibana service to the docker-compose file by implementing the following instructions.


$ vi docker-compose.yml

  kibana:
    container_name: kibana
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - ./kibana/config/:/usr/share/kibana/config
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_PASSWORD="changeme"
    networks:
      - elk
    depends_on:
      - elasticsearch


Build and run kibana with the following commands:

$ docker-compose build kibana

$ docker-compose up -d kibana


List the Kibana container by applying the following command:

$ docker ps -a
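
Kibana can take a minute or two to start. Once it is up, its status API should respond on the published port:

$ curl -s http://127.0.0.1:5601/api/status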

NGINX Container

The primary reason to add an NGINX container is to provide password-protected access to the Kibana interface through a reverse proxy. Create a folder for the NGINX container in the root of the docker project and subsequently create three subfolders named public, data, and etc.


$ cd ~/docker-elk

$ mkdir -p nginx/{public,data,etc}


Create a simple index file for NGINX. Implement the following instructions.

$ vi nginx/public/index.html

It Works


Create an NGINX configuration file by implementing the following instructions.


$ vi ~/docker-elk/nginx/etc/nginx.conf

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name IP_OR_DOMAIN;

        location / {
            root /usr/share/nginx/html;
            index index.html;
        }

        location /elastic/ {
            proxy_pass http://elasticsearch:9200/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd.user;
        }

        location /kibana/ {
            proxy_pass http://kibana:5601/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            rewrite ^/kibana/(.*)$ /$1 break;
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd.user;
        }
    }
}


For password-protected access to kibana, you need to install httpd-tools and create the user/password using htpasswd. Use the following commands:


$ yum install httpd-tools
$ cd ~/docker-elk/nginx/etc
$ htpasswd -c .htpasswd.user admin
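
The -c flag creates the password file; to add more users later, run htpasswd against the existing file without -c (the username below is just an example):

$ htpasswd .htpasswd.user anotheruser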


Define docker service for NGINX with the following instructions.


$ vi ~/docker-elk/docker-compose.yml

  nginx:
    image: nginx:alpine
    container_name: nginx
    volumes:
      - ./nginx/etc/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/public:/usr/share/nginx/html:ro
      - ./nginx/etc/.htpasswd.user:/etc/nginx/.htpasswd.user:ro
    links:
      - elasticsearch
      - kibana
    depends_on:
      - elasticsearch
      - kibana
    ports:
      - "80:80"
    networks:
      - elk


Run the NGINX container with the following command:

$ docker-compose up -d nginx


List the NGINX container by using the following command:


$ docker ps -a


Once the NGINX container is up and running, access the kibana interface at http://SERVER_IP/kibana/ (note the trailing slash, which the proxy location requires).
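
You can verify the password protection from the command line as well; the request below should succeed only with the credentials you created via htpasswd (replace admin and SERVER_IP with your own values):

$ curl -u admin http://SERVER_IP/kibana/api/status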


Check the logs of any container using the following command:

$ docker logs container_name_or_id

e.g. docker logs elasticsearch


List all the running containers using the following command:

$ docker ps -a


Test the elasticsearch container:

Open your favorite web browser and point it to http://SERVER_IP/elastic/


Filebeat Configuration (Client-Side):

The ELK stack is now up and running; begin testing the setup by streaming Apache logs from a remote system using filebeat. You could also stream other log events, such as NGINX or Cisco syslog, but make sure you have the correct logstash pipeline configuration in place.


Install filebeat by implementing the following instructions. (The example uses Filebeat 6.5.0; Elastic generally recommends matching the Beats version to your stack version, so you may prefer a 7.5.1 package.)

$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.5.0-amd64.deb
$ dpkg -i filebeat-6.5.0-amd64.deb


Configure filebeat with the following instructions:

$ vi /etc/filebeat/filebeat.yml

...
...
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/apache2/*.log
...
...
output.logstash:
  # The Logstash hosts
  hosts: ["123.45.67.89:5000"]
...
...
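
Before starting filebeat, you can sanity-check the configuration and the connection to logstash with filebeat's built-in test subcommands:

$ filebeat test config
$ filebeat test output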


Run filebeat by using the following command:

$ filebeat -e -d "*"


Create the index pattern in the Kibana dashboard by navigating to Management -> Index Patterns -> Create index pattern, and type apache-combined-* in the text box to finish the process.
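
If no matching index shows up, you can confirm from the command line whether logstash has created the daily index, using the password-protected /elastic/ proxy path set up earlier:

$ curl -u admin 'http://SERVER_IP/elastic/_cat/indices?v'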


Conclusion:
By implementing the instructions discussed so far, you have successfully dockerized the ELK stack. You can proceed further by creating a few more logstash pipelines to stream log events from various sources.
