We can all agree that logging is key in every setup: having useful logs from the components in your environment is your best tool for diagnosing issues and keeping track of the health of your applications. Docker-based deployments are, of course, no exception to this rule. In this post we will go through the options we evaluated for configuring logging in Rails-based Docker stacks, culminating in an ELK stack.
You are doing it wrong
First of all, a piece of advice: regardless of the kind of solution you plan to implement, if you have done nothing to tweak your application's logging, you are doing something wrong.
Rails apps will log to a local log/ folder within the container. This means that logs will not be rotated and will not be kept if the container is recreated. Access to logs is also cumbersome, as it requires getting into the running container.
Mounted volume
The easiest way to work around this is to mount a host volume in your /app/log container folder, in order to persist log files and access them directly from the host. Rotation can be handled externally through logrotate configurations, using a copytruncate strategy to avoid having to signal the running Rails process.
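As an illustration, here is a minimal sketch of both pieces; the service name, host path and rotation policy are assumptions for the example, not part of the original setup:

# docker-compose entry for the app (hypothetical service name and host path)
web:
  build: .
  volumes:
    - "/var/log/myapp:/app/log"

# /etc/logrotate.d/myapp on the host (example rotation policy)
/var/log/myapp/*.log {
  daily
  rotate 7
  compress
  missingok
  copytruncate
}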
Docker logging drivers
But we can do better. Docker provides logging drivers for your containers right out of the box; though the default is a JSON file, it has support for syslog, journald, gelf, fluentd and awslogs.
As a side note, keep in mind that unless explicitly configured, the default json-file logging driver will not rotate your log files, which will grow indefinitely.
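If you do stick with json-file, rotation can be enabled through the driver's max-size and max-file options. A sketch in the same docker-compose v1 syntax used later in this post (the limits themselves are arbitrary):

log_driver: json-file
log_opt:
  max-size: "10m"    # rotate once a log file reaches 10 MB
  max-file: "5"      # keep at most 5 rotated files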
Docker will manage whatever your applications log to STDOUT; this means that we need to route the Rails logs to standard output rather than to a local file.
Tuning your Rails app
The most straightforward option is to simply set your Rails logger to write to STDOUT, so all log entries are redirected to the Docker logging driver.
config.logger = Logger.new(STDOUT)
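One caveat worth noting: when STDOUT is not attached to a TTY, Ruby may buffer writes, so entries can show up with a delay. Forcing synchronous writes is a common workaround (our suggestion, not part of the original configuration; file location assumed):

# config/environments/production.rb
STDOUT.sync = true                    # flush every entry immediately instead of buffering
config.logger = Logger.new(STDOUT)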
However, if you want to post-process your log entries to search through them or extract metrics, as we will be doing later, you will find the typical Rails logger format a bit cumbersome to parse, especially considering that each request spans several lines.
Luckily, there is already a gem out there named lograge, which takes care of this by reformatting your logs into a single line per request and redirecting them to STDOUT as well.
So, instead of this:
Started GET "/" for 127.0.0.1 at 2012-03-10 14:28:14 +0100
Processing by HomeController#index as HTML
Rendered text template within layouts/application (0.0ms)
Rendered layouts/_assets.html.erb (2.0ms)
Rendered layouts/_top.html.erb (2.6ms)
Rendered layouts/_about.html.erb (0.3ms)
Rendered layouts/_google_analytics.html.erb (0.4ms)
Completed 200 OK in 79ms (Views: 78.8ms | ActiveRecord: 0.0ms)
You get this:
method=GET path=/jobs/833552.json format=json controller=jobs action=show status=200 duration=58.33 view=40.43 db=15.26
Just add the lograge gem to your app and set:
config.lograge.enabled = true
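For completeness, the whole change might look like this, assuming the setting goes into an environment file (any of config/environments/*.rb would work):

# Gemfile
gem "lograge"

# config/environments/production.rb
config.lograge.enabled = true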
Going through logstash
Since we are now successfully running our log entries through Docker, we have to decide where to send them. Here we will set up a simple ELK stack: we will send the log entries to Logstash to be processed, which will in turn store them in Elasticsearch, and we will use Kibana for visualisations.
The first step is to have our Rails app output log entries in a logstash-compatible format. Since we are using lograge, we can easily add support for logstash by adding the logstash-event gem and configuring:
config.lograge.formatter = Lograge::Formatters::Logstash.new
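Putting the Rails side together, the full logging configuration could look like this (file locations assumed, as before):

# Gemfile
gem "lograge"
gem "logstash-event"

# config/environments/production.rb
config.logger = Logger.new(STDOUT)
config.lograge.enabled = true
config.lograge.formatter = Lograge::Formatters::Logstash.new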
Then we need to actually set up the ELK stack. We'll use the following docker-compose configuration:
elasticsearch:
  image: elasticsearch:2.0
  command: elasticsearch -Des.network.host=0.0.0.0
logstash:
  image: logstash:2.0
  command: logstash -f /etc/logstash/conf.d/logstash.conf
  ports:
    - "12201:12201/udp"
  volumes:
    - "./logstash:/etc/logstash/conf.d"
  links:
    - elasticsearch
kibana:
  image: kibana:4.2
  links:
    - elasticsearch
  ports:
    - "5601:5601"
Note that we are mounting a logstash configuration file, which instructs Logstash to receive entries via gelf (Graylog Extended Log Format, one of the Docker-supported drivers), parse our JSON-formatted log entries to extract the metadata they contain, and store the result in the Elasticsearch host:
input {
  gelf {}
}
filter {
  json {
    source => "short_message"
    remove_field => "short_message"
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
  }
}
The last step is to instruct our Docker containers to actually use this logging driver. Simply use the following config in the docker-compose entry for your apps:
log_driver: gelf
log_opt:
  gelf-address: udp://IP_TO_LOGSTASH_HOST:12201
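In context, an app's service entry might then look like this (service name and port mapping are hypothetical; the gelf address placeholder is kept from above):

web:
  build: .
  ports:
    - "3000:3000"
  log_driver: gelf
  log_opt:
    gelf-address: udp://IP_TO_LOGSTASH_HOST:12201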
You can also specify these options as defaults in the DOCKER_OPTS for your Docker daemon, so they apply to all your containers.
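For instance, on distributions where the daemon reads /etc/default/docker (the exact location varies by system), the defaults could look like this:

DOCKER_OPTS="--log-driver=gelf --log-opt gelf-address=udp://IP_TO_LOGSTASH_HOST:12201"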
Show me the code
We have uploaded a fork of a sample Rails app with the lograge configuration and a simple Dockerfile, as well as the configuration for the ELK stack.
Together with Juan, with whom we set up this configuration, we gave a presentation at the 7th Buenos Aires Docker Meetup. You can find the slides on Slideshare or watch the recorded presentation in Spanish on YouTube.