[Java] Push based JMX reporting to logstash/elastic with jmx-trans

In this post I will show how to report JMX metrics to Logstash via TCP in a push-based way, without changing the Java code of an existing application.

Sometimes we need to report metrics data differently from the common approach of pulling data with the Elasticsearch JMX plugin. Maybe we run in a cloud environment where the application instances can't be discovered easily. There are different ways of doing this, and most of them would require changing the existing Java code (e.g. with the Dropwizard Metrics library). A different approach is offered by the jmxtrans-agent library: it uses a Java agent to instrument the JVM, plus a simple configuration file describing the metrics we are looking for.
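Since the agent is attached at JVM startup, no application code changes are needed; a launch command might look like this (the jar path, config file name and application jar are placeholders):

```shell
# Attach jmxtrans-agent and point it at its XML configuration file
java -javaagent:/path/to/jmxtrans-agent.jar=jmxtrans-agent.xml -jar my-service.jar
```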



Jmxtrans Configuration

The first thing to do is to select the endpoint type we want to use. There are several supported protocols and frameworks. One option would be to write to Elasticsearch directly (which is at the moment only available as a fork and does not support authentication); another is to use a simple plaintext TCP endpoint (GraphitePlainTextTcpOutputWriter). Since Logstash can also define TCP inputs, this is the easiest solution. We can additionally fill placeholders like ${environment} from properties, so we can separate production, development and other settings.
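To make the Logstash grok pattern below easier to follow, here is a small sketch of the line format the Graphite plaintext writer produces: one line per metric, consisting of the name prefix, the metric name, the value and a Unix timestamp in seconds, all space-separated (the class, method and metric names here are made up for illustration, not part of jmxtrans-agent):

```java
// Sketch of the Graphite plaintext protocol emitted by
// GraphitePlainTextTcpOutputWriter:
// "<namePrefix><metricName> <value> <timestampInSeconds>\n"
public class GraphiteLineSketch {

    static String graphiteLine(String prefix, String metric, Number value, long epochSeconds) {
        // The writer simply concatenates the fields with spaces
        return prefix + metric + " " + value + " " + epochSeconds + "\n";
    }

    public static void main(String[] args) {
        // With namePrefix "my-service production " a heap-usage sample becomes
        // five space-separated tokens: service, environment, metric, value, timestamp
        String line = graphiteLine("my-service production ", "jvm.heap.used", 52428800, 1506124800L);
        System.out.print(line);
    }
}
```

These five tokens are exactly what the grok pattern in the Logstash filter later splits into service, environment, metric, metric_value and log_timestamp.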

(..)
<outputWriter class="org.jmxtrans.agent.GraphitePlainTextTcpOutputWriter">
    <host>targethost</host>
    <port>1337</port>
    <namePrefix>my-service ${environment} </namePrefix>
</outputWriter>
<collectIntervalInSeconds>10</collectIntervalInSeconds>
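The (..) stands for the rest of the configuration file; the metrics themselves are selected with query elements. A minimal complete file might look like the following sketch, assuming we want to report used heap memory (the objectName, attribute and resultAlias values are examples; ${environment} can be supplied e.g. as a system property such as -Denvironment=production):

```xml
<jmxtrans-agent>
    <queries>
        <!-- example: report the "used" part of HeapMemoryUsage under an alias -->
        <query objectName="java.lang:type=Memory"
               attribute="HeapMemoryUsage" key="used"
               resultAlias="jvm.heapMemoryUsage.used"/>
    </queries>
    <outputWriter class="org.jmxtrans.agent.GraphitePlainTextTcpOutputWriter">
        <host>targethost</host>
        <port>1337</port>
        <namePrefix>my-service ${environment} </namePrefix>
    </outputWriter>
    <collectIntervalInSeconds>10</collectIntervalInSeconds>
</jmxtrans-agent>
```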

Logstash Configuration

To consume the data, we need a Logstash instance. In this simple example we don't use separate shippers and indexers, just one instance. It listens on port 1337, parses the space-separated data and pushes it to Elasticsearch.

input {
    tcp {
        port => 1337
        type => "my-type"
    }

}

filter {
    if [type] == "my-type" {
        grok {
            match => {
                "message" => "%{NOTSPACE:service} %{NOTSPACE:environment} %{NOTSPACE:metric} %{NUMBER:metric_value} %{NUMBER:log_timestamp}"
            }
        }
    }
}

output {
    elasticsearch {
        hosts => ["my-host"]
        path => "/elastic"
        index => "index%{+YYYY.MM}"
        timeout => 120
    }
}

That's it! We can now monitor the data in Kibana or Grafana.

September 23, 2017