The idea for this post comes from a comment on my last post, which you can read here. In that post we talked about the different logging capabilities available in the new TIBCO TEA Administrator tool. But one reader asked why we were covering only the logging capabilities the product provides out of the box, when you could integrate your traces with a great component stack and squeeze all the information out of those logs. That stack was the Elasticsearch (ELK) stack, and I thought: “That is a great idea”. So I’m going to create a post series explaining how to integrate your logs with this stack.
First of all, I need to explain what the ELK stack is and which components make it up. The ELK stack is a set of components built by the Elasticsearch company, as you can see here. They are free software components, and you can take a look at them and collaborate through their GitHub repositories.
The ELK stack is made up of three main components: Logstash, Elasticsearch, and Kibana.
These three components, working together, give you the opportunity to extract the information you need through dashboards and querying capabilities. But we are going to go step by step. First of all, let’s draw the architecture we are going to build over the next few weeks:
Today we are not going to talk about the TIBCO stack, because we have already described it a few times and you also have the Dockerfiles to create and start it. So we are going to focus on the ELK stack, starting with its different components:
- Logstash: Logstash is the first component of the stack you are going to meet, because it is the one that links to your TIBCO stack. Logstash is an “ingestion system”: its goal is to let different sources feed their data into the Elasticsearch engine, similar to products like Apache Flume or Oracle GoldenGate. You define pipelines that gather all the information (in this case, the log traces) and insert it into the Elasticsearch engine.
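To give you a feeling for what such a pipeline looks like, here is a minimal Logstash configuration sketch. The log path, the grok pattern, and the index name are illustrative assumptions, not values from a real BW installation; we will build the real one later in the series.

```
# Hypothetical Logstash pipeline: path, pattern, and index name
# are placeholders you would adapt to your own BW logs.
input {
  file {
    path => "/opt/tibco/bw/logs/*.log"    # adjust to your BW log location
    start_position => "beginning"
  }
}

filter {
  grok {
    # Example pattern for a line like:
    # 2016-05-10 12:00:01 INFO [ProcessName] message text
    match => { "message" => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:level} \[%{DATA:process}\] %{GREEDYDATA:msg}" }
  }
  date {
    match => [ "logdate", "yyyy-MM-dd HH:mm:ss" ]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "bw-logs-%{+YYYY.MM.dd}"
  }
}
```

The three sections mirror what we just described: an input (where the data comes from), a filter (how raw lines become structured fields), and an output (where the indexed data goes).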
- Elasticsearch: It is the core of the stack, a search engine built on top of Apache Lucene that lets you run fast queries against the data you indexed through Logstash. It performs very well in high-volume scenarios, so you don’t have to worry about indexing a lot of traces: it was built to hold a lot of data and still deliver good response times to your queries.
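As a taste of those querying capabilities, here is a sketch of an Elasticsearch query DSL body you could POST to the `_search` endpoint of the index. It assumes the hypothetical `bw-logs-*` index and the `level` and `@timestamp` fields from the pipeline sketch above:

```
{
  "query": {
    "bool": {
      "must": [
        { "match": { "level": "ERROR" } },
        { "range": { "@timestamp": { "gte": "now-1h" } } }
      ]
    }
  }
}
```

This asks for every trace logged as an error in the last hour, something that would otherwise mean grepping across every log file on every machine.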
- Kibana: It is the visualization layer on top of the Elasticsearch engine. It is a web application that connects to your Elasticsearch engine and presents all the information stored there in a beautiful and useful way. You can create Elasticsearch queries, build graphs from them, and assemble dashboards that show all the relevant information in a few seconds, instead of going through all the log files with regular expressions in your text editor (vi, emacs, nano, SublimeText…). You can take a look at how Kibana looks:
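In Kibana you don’t even need the full query DSL: the search bar accepts Lucene query syntax. For example, with the hypothetical fields from our pipeline sketch, filtering a dashboard down to the errors of one process is a one-liner:

```
level:ERROR AND process:"CheckOrder"
```

The field names here (`level`, `process`) are assumptions that depend on how your Logstash filter parses the traces.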
So, the idea of this post series is to create a dashboard similar to the one in the picture above, but built from the information inside the log files of your BW processes. How cool is that? I’m sure all of you would rather have this kind of tool than your text editor, and we are here to make that jump possible: from your editor to this new dashboard.