ELK Stack with nginx on CentOS 7

Preface

I wrote this up months ago but never published. In the near future I will be setting up ELK with Kibana 4 and, depending on the differences, may publish an update here.

Logging & Data Oh My!

If you’ve ever had to deal with many servers/services you’ve probably run into the problem of logs: each service wants to dump out its logs somewhere (e.g. under /var/log/) in its own format, in a file per node. Finding errors or patterns means SSHing into each box and grepping through files. Yuk!

There are quite a few solutions to this problem, of course. A short while back I decided to try out the ELK stack and have been very happy with the results. This is a hopefully mostly complete log of what I did to get it going under CentOS 7 with nginx for the frontend. Note that this guide documents an experimental/dev setup in which Elasticsearch, Logstash, and Kibana all run on the same machine. In the real production world you’ll likely not want this, and may want something like Redis in the mix as well.

What’s this ELK Thing?

ELK stands for Elasticsearch, Logstash, and Kibana. I won’t go into much depth here, but in basic terms you get the following:

  • Elasticsearch: Indexed log data that is searchable, queryable, and filterable
  • Logstash: Centralized logging, parsing, and transformation
  • Kibana: Data visualization and charting

Prepare a System

I won’t go into detail on a CentOS 7 install here, so with K.I.S.S. in mind:

  1. Install and update CentOS 7
  2. Ensure you set a hostname (a quick example follows this list). This will be used in SSL certs later!
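On CentOS 7 the hostname can be set with hostnamectl. elk.example.com below is just a placeholder; use your own FQDN:

# replace elk.example.com with your actual FQDN
sudo hostnamectl set-hostname elk.example.com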

Package & Service Installation

Java

Other versions may work – YMMV:

sudo yum install -y java-1.7.0-openjdk  
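You can verify what landed with:

java -version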

Elasticsearch

Install the public GPG key:

sudo rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch  

Create a new repo @ /etc/yum.repos.d/elasticsearch.repo with the following contents:

[elasticsearch-1.1]
name=Elasticsearch repository for 1.1.x packages  
baseurl=http://packages.elasticsearch.org/elasticsearch/1.1/centos  
gpgcheck=1  
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch  
enabled=1  

Install it:

sudo yum install -y elasticsearch-1.1.1

Change some configuration preferences by editing /etc/elasticsearch/elasticsearch.yml (the end result is shown after this list):

  1. Find ‘script.disable_dynamic’ and set it to true
  2. Find ‘discovery.zen.ping.multicast.enabled’ and set it to false
  3. Find ‘network.host’ and set it to 127.0.0.1 (We don’t want to expose Elasticsearch data outside of this box; “localhost” should also work here)
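Once edited, the relevant lines in elasticsearch.yml should read:

script.disable_dynamic: true
discovery.zen.ping.multicast.enabled: false
network.host: 127.0.0.1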

Start Elasticsearch & enable it on future boots:

sudo systemctl start elasticsearch.service  
sudo systemctl enable elasticsearch.service  
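A quick sanity check should return a small blob of JSON including the version number:

curl http://127.0.0.1:9200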

By default, CentOS 7’s SELinux policy will block nginx from proxying to Elasticsearch on port 9200. Let’s change that. Once nginx (installed below) has attempted to reach Elasticsearch at least once, the audit log will contain denial entries for audit2allow to process.

sudo yum install policycoreutils-python  
sudo cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M nginx  
sudo semodule -i nginx.pp  
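If you’d rather not maintain a custom module, there is also a stock SELinux boolean that lets httpd-class processes (which includes nginx) make outbound network connections. It’s broader in scope than the audit2allow approach, so pick your poison:

# -P makes the change persistent across reboots
sudo setsebool -P httpd_can_network_connect 1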

Kibana

Download and extract Kibana (note: I’m using v3.x here, though 4.x is probably the better way to go at this point):

cd ~  
mkdir build  
cd build  
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.1.tar.gz  
tar xvf kibana-3.0.1.tar.gz  

Modify kibana-3.0.1/config.js so that Kibana talks to Elasticsearch through nginx on port 80, by finding the ‘elasticsearch’ line and changing it as follows:

elasticsearch: "http://"+window.location.hostname+":80",

Install Kibana such that nginx will find it:

sudo mkdir -p /usr/share/nginx/kibana3  
sudo cp -R kibana-3.0.1/* /usr/share/nginx/kibana3/  

nginx

Time for our nginx installation. First, add the EPEL repo and install:

sudo yum install epel-release  
sudo yum install nginx  

Assuming you have the firewall running (e.g. CentOS 7 defaults), this is a good place to open up some ports we’ll need:

sudo firewall-cmd --permanent --zone=public --add-service=http  
sudo firewall-cmd --permanent --zone=public --add-service=https  
sudo firewall-cmd --permanent --zone=public --add-port=5000/tcp  
sudo firewall-cmd --reload  

The first couple entries for HTTP/HTTPS are for our nginx web frontend while port 5000 is to accept incoming log entries.
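To confirm the rules took effect after the reload:

sudo firewall-cmd --zone=public --list-all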

Create or modify /etc/nginx/conf.d/default.conf. Below is what my configuration looks like:

#
# Nginx proxy for Elasticsearch + Kibana
#
# In this setup, we are password protecting the saving of dashboards. You may
# wish to extend the password protection to all paths.
#
# Even though these paths are being called as the result of an ajax request, the
# browser will prompt for a username/password on the first request
#
# If you use this, you'll want to point config.js at http://FQDN:80/ instead of
# http://FQDN:9200
#
server {  
  listen                *:80;

  server_name           the_host_name;
  access_log            /var/log/nginx/kibana.myhost.org.access.log;

  location / {
    root  /usr/share/nginx/kibana3;
    index  index.html  index.htm;
  }

  location ~ ^/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/_nodes$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_search$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_mapping {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }

  # Password protected end points
  location ~ ^/kibana-int/dashboard/.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
    }
  }
  location ~ ^/kibana-int/temp.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
    }
  }
}

Create a .htpasswd file for Kibana @ /etc/nginx/conf.d/kibana.htpasswd:

  1. Install htpasswd if needed: sudo yum install httpd-tools
  2. Create the .htpasswd file: sudo htpasswd -c /etc/nginx/conf.d/kibana.htpasswd TheUserName, replacing TheUserName with a user name of your liking
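Before (re)starting nginx it’s worth validating the configuration:

sudo nginx -t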

Start nginx now as well as configure it to start on boot:

sudo systemctl restart nginx.service  
sudo systemctl enable nginx.service  
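With nginx up, a quick sanity check from the box itself (expect an HTTP 200 for the Kibana index page):

curl -I http://localhost/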

Logstash

Time to move on to installing Logstash!

Create a Logstash repo @ /etc/yum.repos.d/logstash.repo with the following contents:

[logstash-1.4]
name=logstash repository for 1.4.x packages  
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos  
gpgcheck=1  
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch  
enabled=1  

Install:

sudo yum -y install logstash-1.4.2

We need to create an SSL cert/key pair in order for logstash-forwarder to connect to our server. I won’t just send logs to anyone! There are a couple of options here:

  1. Using an FQDN. This requires your hostname and the CN in the cert to match. Fairly simple stuff.
  2. Using an IP address. This option requires subjectAltName to be set, which requires creating an openssl configuration file with a v3_ca section. See the logstash-forwarder documentation for additional details.

With that said, I’m going with #1:

cd /etc/pki/tls  
sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.pem -subj /CN=the_host_name  

Two notes on the above:

  1. the_host_name should be replaced with the proper hostname you’ve been using thus far, i.e. your FQDN.
  2. logstash-forwarder.pem is what we will copy to all boxes/nodes that will forward logs to our new service here.
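With the cert/key pair in place, here’s a minimal sketch of a server-side Logstash configuration, e.g. @ /etc/logstash/conf.d/01-lumberjack.conf. The filename and the type value are my own choices; the port matches the firewalld rule from earlier and the cert/key paths match the openssl command above:

input {
  lumberjack {
    # accept logstash-forwarder connections on the port we opened in firewalld
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.pem"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

output {
  elasticsearch {
    # the local Elasticsearch instance we bound to 127.0.0.1 above
    host => "localhost"
  }
}

Restart Logstash so it picks up the config (the 1.4.x RPM registers a SysV-style init script):

sudo service logstash restart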

There is a lot more to go here! For example, you’ll want to configure logstash-forwarder on various nodes to send logs to your new ELK stack!
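As a taste of that, here’s roughly what a node-side /etc/logstash-forwarder.conf might look like. This is a sketch: elk.example.com stands in for your server’s FQDN (the CN in our cert), the watched paths and the syslog type are just examples, and the .pem referenced is the one we generated above, copied over to the node:

{
  "network": {
    "servers": [ "elk.example.com:5000" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.pem",
    "timeout": 15
  },
  "files": [
    {
      "paths": [ "/var/log/messages", "/var/log/secure" ],
      "fields": { "type": "syslog" }
    }
  ]
}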