[ELK Series] Parsing the TEA logs to visualize them right in ELK & Kibana

Today I come back with another step in the connection between our TIBCO BW 6.x environment and the ELK stack, to extract all the power of the log information we have inside our TIBCO BW components. And we started with the TEA component. In the previous post we got a successful connection to the ELK stack with the information inside the TEA, but we used the default message format, so we could not use the custom parts that the TIBCO log files have inside.

So, today we need to close this gap and get all that data available for our use. If you remember, the ELK stack contains three pieces. The first one is Logstash, the one that receives the raw log traces, applies any kind of transformation and submits them to the Elasticsearch engine. Elasticsearch indexes and stores the information and allows Kibana to query this data and show it to us in a great graphical way.

Ok, with these concepts in mind, it is clear that if we need to do any kind of transformation, we have to do it in the Logstash component. So we have to change our Logstash configuration… but how? Ok, that’s easy.

Logstash manages all its behavior with one file, the logstash.conf file. In this file you have something similar to this:

input { stdin { } }
filter { }
output {
  elasticsearch { }
}

Ok, the file is split into three different parts:

  • Input section: In this section we declare where we are getting the data from.
  • Filter section: In this section we declare any transformation or mapping we need to apply to our data.
  • Output section: In this section we declare the output target of our data.

Ok, so we have to add a filter to do this transformation, but how? Once again, easy peasy :). Logstash has a component to do this kind of stuff. It is called Grok, and Grok is very easy to use.

We only have to define the grok filter and the expected pattern. In our case, the pattern for the TEA logs is something like this:

%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:logLevel}  %{WORD:component}%{SPACE}\[%{DATA:subcomponent}\]%{GREEDYDATA:message}
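
To see what this pattern does, let’s take an illustrative trace (the exact content depends on your TEA logback configuration, so treat this line as an assumption, not a real TEA log):

2016-05-10 09:15:32,123 INFO  tea [lifecycle] TEA server started

Grok would split it into timestamp = 2016-05-10 09:15:32,123, logLevel = INFO, component = tea, subcomponent = lifecycle, and the rest of the line as message.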

And we need to put it together inside the grok filter:

grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:logLevel}  %{WORD:component}%{SPACE}\[%{DATA:subcomponent}\]%{GREEDYDATA:message}" }
}
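
As an optional extra (it is not used in the rest of the post), we could also add a date filter right after the grok one so that Elasticsearch uses the parsed timestamp as the event time instead of the moment Logstash received the trace. This is just a sketch, and the pattern must match the exact timestamp format of your TEA logs:

date {
  # Assumed timestamp format (e.g. 2016-05-10 09:15:32,123); adjust it to your logs
  match => [ "timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
}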

So, our logstash.conf file will look like this:

input {
  tcp {
    port => 5000
  }
}
## Add your filters here
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:logLevel}  %{WORD:component}%{SPACE}\[%{DATA:subcomponent}\]%{GREEDYDATA:message}" }
  }
}
output {
  elasticsearch { }
  stdout {}
}
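
With the file saved, we only need to (re)start Logstash pointing to it. Assuming a standard installation (the exact path depends on how you installed Logstash), from the Logstash home directory it would be something like:

bin/logstash -f logstash.conf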

And now, if we come back to our Kibana index page, we are going to see that the data has new fields to use in its graphs and filters:

[Screenshot: Kibana index page showing the new fields parsed from the TEA logs]

So, now we can start filtering with these fields. For example, we can filter the logs that belong to the ‘lifecycle’ component, so in the query input box we write something like this: component:lifecycle

[Screenshot: Kibana showing the logs filtered by component:lifecycle]
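
And the same Lucene-style syntax works for the rest of the parsed fields. For example (the values here are just illustrative), we could search for error traces or combine several fields:

logLevel:ERROR
component:lifecycle AND logLevel:ERROR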
