In the previous post we talked about the architecture we are going to use to squeeze all the information out of our log files with the ELK stack. We also explained the main components involved and their roles in this architecture. Now we are going to start building it, beginning by launching our ELK stack.
As you could guess from the title, or even from the last post of the series, we are going to use Docker to do this. But this time we are not going to create any Dockerfile; we are going to use a community image instead. As we said in the last post, ELK is an open-source stack, so there are several images that can do the work for us.
In my case, I’m going to use one of them (there is no official image published, but I think it’s going to do the job without problems). The image details are in this GitHub repository. This image was created by Anthony Lapenna (deviantony), and I encourage you to take a look at the other repositories on his GitHub profile and at his blog.
Ok, now, after the ads break, we can come back to our task: launching the ELK stack. This stack is defined with docker-compose, so you only have to run this command to get everything working:
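A minimal sketch of those steps, assuming the standard layout of the deviantony/docker-elk repository (clone location and the `-d` detached flag are my choices, not mandated by the image):

```shell
# Grab the community docker-elk repository (assumption: default branch layout).
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk

# Bring up Elasticsearch, Logstash and Kibana in the background.
docker-compose up -d
```

Running `docker-compose up` without `-d` keeps the containers in the foreground so you can watch the startup logs directly.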
And if you have enough disk space and a normal system configuration, you should see traces like these while the stack is launching:
After that we have to check that everything is up and running without problems, so we have to verify that the different UIs have started successfully.
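Before opening the UIs, a quick sanity check from the terminal can confirm the containers themselves are running (assuming you are still in the directory that holds the `docker-compose.yml`):

```shell
# List the services defined in the compose file together with their state;
# each one should show "Up" in the State column.
docker-compose ps
```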
To check the status of the Elasticsearch node we have to go to <DOCKER_CONTAINER_URL>:9200, where we should get this response:
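The same check can be done from the command line. This is only a sketch: the `check_es_response` helper is a name I made up, and `localhost` stands in for your Docker host (substitute your own `<DOCKER_CONTAINER_URL>`). It relies on the fact that Elasticsearch's root endpoint answers with a JSON body containing `cluster_name` and `version` fields:

```shell
# check_es_response: hypothetical helper -- succeeds when the body looks like
# a healthy Elasticsearch root response (has cluster_name and version fields).
check_es_response() {
  echo "$1" | grep -q '"cluster_name"' && echo "$1" | grep -q '"version"'
}

# Query the node; "|| true" keeps the script going if the stack isn't up yet.
body=$(curl -s "http://localhost:9200" || true)
if check_es_response "$body"; then
  echo "Elasticsearch is up"
fi
```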
To check the status of the Kibana node we have to go to <DOCKER_CONTAINER_URL>:5601 and check that we get this response:
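Kibana can be probed the same way. Again a hedged sketch: `check_http_ok` is a helper name of my own, `localhost` stands in for your Docker host, and I simply treat an HTTP 200 (after following any redirect to the Kibana app) as "up":

```shell
# check_http_ok: hypothetical helper -- true when an HTTP status code is 200.
check_http_ok() {
  [ "$1" = "200" ]
}

# Ask Kibana for just the final response code, following redirects;
# "|| true" keeps the script going if the stack isn't up yet.
code=$(curl -s -L -o /dev/null -w '%{http_code}' "http://localhost:5601/" || true)
if check_http_ok "$code"; then
  echo "Kibana is up"
fi
```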