Or how to publish to Splunk from any Docker environment

WSO2 products follow a standard structure when it comes to configuration, data, artifacts, and logging. Configuration files are found in the <CARBON_HOME>/repository/conf folder, data in <CARBON_HOME>/repository/data, and artifacts in <CARBON_HOME>/repository/deployment (or in the <CARBON_HOME>/repository/tenants folder if you're into multi-tenancy). All log files are written to the <CARBON_HOME>/repository/logs folder.
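As a quick sketch of that layout (based only on the paths listed above):

```
<CARBON_HOME>/repository/
├── conf/        # configuration files
├── data/        # data
├── deployment/  # artifacts
├── tenants/     # tenant artifacts (multi-tenancy)
└── logs/        # log files
```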

Log Aggregation

All log events are written as entries to files through Log4J. Because of this, when it's time to attach WSO2 logging to a log aggregator, it's a matter of deploying a tailing file reader agent and pointing it at the <CARBON_HOME>/repository/logs folder. For example, with ELK this could be something like Filebeat.

  filebeat.prospectors:
  - input_type: log
    paths:
      - /mnt/wso2am-2.6.0/repository/logs/*.log

  output.logstash:
    hosts: ["logstash.private.deployment.local:5044"]

When it comes to Splunk, the traditional approach for a scenario like this is to use the Universal Forwarder, an agent process that reads the contents of specified log files and pushes events to a specified Splunk receiver port.

However, this approach is not suitable for dynamic deployments like Docker, K8s, or ECS, where instances come up and go down without prior planning or notice. The gap is that part of the Universal Forwarder configuration has to be done on the Splunk side of the deployment, which cannot account for new instances spawned by Container Orchestration systems for auto healing and scaling. Furthermore, running a somewhat heavy process like the Universal Forwarder alongside a JVM inside the same Container does not exactly conform to Containerization best practices.

Fortunately, the proper way to register log events from a Docker based deployment to Splunk is a much simpler approach.

Docker Logging

Docker's logging design allows multiple logging drivers to be configured at the Docker daemon level. Several logging drivers are available OOTB, in addition to the ability to plug in custom logging drivers. Splunk is one of the logging drivers supported OOTB; with it, log events are pushed to an HTTP Event Collector (HEC) endpoint at Splunk.

Therefore, any Docker based deployment only has to enable splunk as the logging driver name and specify the required parameters in order to start publishing the events produced by the Docker Containers.
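As a minimal sketch for a single Container (the HEC URL, token, and image tag below are placeholder assumptions, not values from a real deployment):

```shell
# Hypothetical example: run a WSO2 API Manager Container with the
# Splunk logging driver enabled; splunk-url and splunk-token are
# placeholders for the real HEC endpoint and access token.
docker run -d \
  --log-driver=splunk \
  --log-opt splunk-url=https://splunk.example.com:8088 \
  --log-opt splunk-token=00000000-0000-0000-0000-000000000000 \
  --log-opt tag="{{.Name}}" \
  wso2am:2.6.0
```

The same driver and options can also be set daemon-wide through `log-driver` and `log-opts` in /etc/docker/daemon.json, so that every Container on the host publishes to Splunk by default.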

When it comes to AWS ECS, enabling this is a bit tricky.

Managing the ECS Cluster Instances can be done in two ways: EC2, where you manage your own EC2 instances and register them with a specific ECS cluster as worker nodes, and FARGATE, where that workload is handled by AWS Fargate.

The compromise with AWS Fargate is that you lose the detailed level of control you have over the Cluster Instances. At the moment, if you go for a Fargate managed ECS Cluster, you will only have awslogs (which publishes logs to AWS CloudWatch) and none as the logging options.

To enable OOTB logging drivers in the Docker daemon through the ECS Agent running on the Cluster Instances, you have to go with the EC2 option for managing the Cluster.

Now, the ECS Agent has to be notified so that the Docker daemon is configured to support the required list of Docker logging drivers. To do this, add the following entry to the ECS Agent configuration file and restart the ECS Agent.
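On Amazon Linux based Cluster Instances, the ECS Agent configuration file is /etc/ecs/ecs.config, and the entry would be along the following lines (the exact list of drivers shown is an example, not a requirement):

```
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","splunk"]
```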


As you probably have guessed by now, any number of supported Docker logging drivers can be specified here.

When creating a Task Definition for the WSO2 deployment, in the Container definition, go down to the Storage and Logging section. Select the drop-down list for Log Configuration to view the list of OOTB supported logging drivers.

Note that while this is a list of available logging drivers, its entries do not change based on the values added for the above mentioned ECS_AVAILABLE_LOGGING_DRIVERS agent configuration. It is only a static list of options. The presence of an option here does not necessarily mean that the Docker daemon on the Cluster Instances supports it without the necessary configuration.

Select Splunk as the Log Driver and add the following Log Options.

  • splunk-url — the Splunk HTTP Event Collector endpoint, e.g. https://input-<splunk-cloud-url>:8089
  • splunk-token — the access token generated on the Splunk end so that Docker can access Splunk without having to provide credentials
  • splunk-insecureskipverify — set to true if TLS verification is to be skipped for testing purposes
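Outside the console, these same options map to the logConfiguration block of the Container definition in the Task Definition JSON. A sketch, with placeholder values:

```json
"logConfiguration": {
  "logDriver": "splunk",
  "options": {
    "splunk-url": "https://splunk.example.com:8088",
    "splunk-token": "00000000-0000-0000-0000-000000000000",
    "splunk-insecureskipverify": "true"
  }
}
```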

This will enable the logs created at the Docker daemon as part of the stdout of the running Containers to be pushed to the specified HEC endpoint at Splunk. Additional enrichment, such as tags, can also be added as Log Options. For example, setting tag to {{.Name}} will add the name of the Container to the log event so it can be filtered on later. The full list of template options that can be used in tags can be found in the Docker documentation.


Once this approach has been tested and adopted as suitable, it may be better to create the EC2 Cluster Instances from a custom AMI that has all the ECS Agent configurations in place. That will help in maintaining the proper logging options in an autoscaling deployment.
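Alternatively, the same ECS Agent configuration could be injected through EC2 user data when the Cluster Instances are launched. A sketch, where the cluster name and driver list are placeholder assumptions:

```shell
#!/bin/bash
# Hypothetical EC2 user data for an ECS Cluster Instance:
# registers the instance with a cluster and enables the splunk
# logging driver in the ECS Agent configuration before the
# Agent starts.
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=wso2-ecs-cluster
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","splunk"]
EOF
```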

In addition to avoiding the complexity of configuring paths and registering instances that comes with the Universal Forwarder, this approach also conforms to the natural way of log aggregation in Docker based deployments. That helps maintain a uniform log analysis strategy across deployments, since log aggregation is now part of the Containerization stack rather than the application.

It may be that with future developments, AWS Fargate will also enable splunk as an option for the Log Option entry. However, for now, this is only possible with EC2 Clusters.

Written on November 28, 2018 by Chamila de Alwis.

Originally published on Medium