ELK: Filebeat – AWS ElastiCache Redis – Logstash

In this brief guide, we’ll cover how you can use an AWS ElastiCache Redis node as a buffer between Filebeat and Logstash. This is a great way to absorb large spikes in log volume before the entries hit Logstash.

AWS ElastiCache allows you to deploy a managed Redis node or, in newer versions, a cluster of nodes. We won’t get into the detail of clustering Redis here, but it’s something you should take a look at if you’re running this solution in a production environment.

Required Resources

For this demo, you’ll need working installations of Filebeat and Logstash, as well as a Redis cache deployed in AWS ElastiCache. We won’t cover the creation / installation of these resources here.

The resources need to be able to communicate as follows (both Filebeat and Logstash initiate connections to Redis on port 6379):

filebeat --tcp:6379--> aws elasticache <--tcp:6379-- logstash
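Before wiring up the configuration, it’s worth confirming that both hosts can actually reach the Redis endpoint on TCP 6379 (a security group blocking this is the usual culprit). A quick check, assuming redis-cli is installed on each host, and using the same masked endpoint as the configuration below:

```shell
# Run this from both the Filebeat host and the Logstash host.
# Substitute your own ElastiCache endpoint (masked here).
redis-cli -h *****.0001.euw1.cache.amazonaws.com -p 6379 ping
# A reachable, healthy node replies: PONG
```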

Once you have this set up, take a look at the configuration that you’ll need below:

Filebeat

We can use Filebeat’s ‘output.redis’ output to ship log entries into our AWS ElastiCache Redis node. This is what the configuration looks like for a simple Filebeat setup that collects syslog and sends it to ElastiCache:

---

filebeat.inputs:
- type: log
  paths:
    - /var/log/syslog
  fields:
    type: syslog  # Useful for grok parsing
  fields_under_root: true

output.redis:
  hosts: ["*****.0001.euw1.cache.amazonaws.com:6379"]
  loadbalance: true  # Enable this if using a redis cluster
  key: filebeat  # Default is "filebeat" if not specified
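Once Filebeat is restarted with this configuration, you can confirm entries are actually landing in the cache by checking the length of the Redis list. The key name matches the ‘key: filebeat’ setting above:

```shell
# Filebeat pushes entries onto a Redis list; the count grows while
# Logstash is not yet consuming, and drains once it is.
redis-cli -h *****.0001.euw1.cache.amazonaws.com -p 6379 llen filebeat
```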

Logstash

We now need to configure Logstash to pull our log entries from the ElastiCache Redis cache. We can do that using the ‘redis’ input plugin in our Logstash configuration. A simple example of this is as follows:

input {
  redis {
    host => "*****.0001.euw1.cache.amazonaws.com"
    port => 6379
    data_type => "list"
    key => "filebeat"
  }
}
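To sketch where this fits in a fuller pipeline: the ‘type: syslog’ field we set in Filebeat can be used to drive grok parsing, with the results forwarded on to Elasticsearch. The grok pattern and the Elasticsearch host below are illustrative assumptions, not part of the original setup:

```conf
input {
  redis {
    host => "*****.0001.euw1.cache.amazonaws.com"
    port => 6379
    data_type => "list"
    key => "filebeat"
  }
}

filter {
  # "type" was set in the Filebeat configuration (fields_under_root: true)
  if [type] == "syslog" {
    grok {
      # SYSLOGLINE is one of the standard patterns shipped with Logstash
      match => { "message" => "%{SYSLOGLINE}" }
    }
  }
}

output {
  # Hypothetical Elasticsearch endpoint - adjust for your environment
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```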

You can validate your new configuration by running Logstash in its config-test mode, which checks the syntax and exits. Run the following command:

bin/logstash -f /path/to/above/configuration_file.conf --config.test_and_exit


When the Filebeat and Logstash services are re-launched, you should see log entries flowing from Filebeat through AWS ElastiCache and into Logstash, ready for processing and forwarding on to whatever outputs you are using, typically Elasticsearch.
