The Elastic Stack
The Elastic Stack consists of Elasticsearch, Kibana, Beats, and Logstash; it is also known as the ELK Stack. These tools are widely used to search and analyze data, including at many large companies.
Suppose you have a blog site with complex post types such as article, page, catalog, event, news, and gallery, and you want to let users search across all of them. In a relational database, that search would join many tables and perform poorly. It is a task better suited to Elasticsearch, which is built for exactly this kind of full-text search.
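A search like this can be expressed with Elasticsearch's query DSL. Below is a minimal sketch in Python of the query body such a search endpoint might build; the index mapping (a `type` field, plus `title` and `body` fields) is an assumption for illustration, not a fixed schema.

```python
# Sketch of an Elasticsearch query body for searching across all post types.
# Field names ("title", "body", "type") are hypothetical; adjust them to
# match your own index mapping.

def build_search_body(term, post_types=None):
    """Build a query matching `term` across post fields, optionally
    filtered to specific post types."""
    query = {
        "bool": {
            "must": [
                # Boost matches in the title over matches in the body.
                {"multi_match": {"query": term, "fields": ["title^2", "body"]}}
            ]
        }
    }
    if post_types:
        # Restrict results to the requested post types (event, news, ...).
        query["bool"]["filter"] = [{"terms": {"type": post_types}}]
    return {"query": query, "size": 20}


body = build_search_body("conference", post_types=["event", "news"])
```

You would send this body to the `_search` endpoint of the index that holds your posts, for example via the official Python client's `search` method.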
What if your blog site is already running without Elasticsearch and holds a lot of data? How do you add it to the current architecture?
- Create a script or CLI command to import the existing data from the database into Elasticsearch.
- Point the application's search queries to Elasticsearch instead of the database.
- Keep Elasticsearch in sync by updating its documents whenever the data in the database changes.
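The last step above can be sketched as a small sync function: whenever a post is created or updated in the database, the same record is upserted into the search index so it never drifts. The index name `posts` and the post fields are assumptions; `es` can be any object exposing an `index` method shaped like the official Python client's.

```python
# Sketch of mirroring a database write into Elasticsearch. The index name
# "posts" and the document fields are assumptions for illustration.

def sync_post(es, post):
    """Upsert one post into the search index, keyed by its database id,
    so repeated updates overwrite the same document."""
    es.index(index="posts", id=post["id"], document={
        "type": post["type"],
        "title": post["title"],
        "body": post["body"],
    })

# Stand-in client so the sketch runs without a live cluster; in production
# you would pass an elasticsearch.Elasticsearch instance instead.
class FakeES:
    def __init__(self):
        self.docs = {}
    def index(self, index, id, document):
        self.docs[(index, id)] = document

es = FakeES()
sync_post(es, {"id": 1, "type": "event", "title": "Meetup", "body": "Join us"})
```

In a real application you would call `sync_post` from the code path that saves a post (or from a message-queue consumer), and also handle deletions by removing the corresponding document.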
Kibana is a real-time data visualization and management tool for Elasticsearch. Instead of building a custom dashboard, you can use Kibana as a user interface for analyzing the data stored in Elasticsearch. This is one of the advantages of Elasticsearch over many other NoSQL databases.
Beats is a platform of lightweight data shippers. They are lightweight because they use little disk space and few system resources and have no runtime dependencies. They send data from machines or servers to Logstash or Elasticsearch. The shippers are known as the Beats family: Filebeat, Metricbeat, Packetbeat, Winlogbeat, Auditbeat, Heartbeat, and Functionbeat. The most commonly used are Filebeat and Metricbeat.
Filebeat collects logs from many virtual machines or servers and centralizes them. Metricbeat collects system information such as CPU, memory, disk usage, and network traffic. Both come with modules that support Apache, NGINX, MySQL, and many more. Instead of logging in to every single server to check logs or system metrics, you can view them all in Kibana.
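As a concrete illustration, a minimal `filebeat.yml` sketch might ship NGINX access logs straight to Elasticsearch. The log path and host are assumptions; adjust them for your servers.

```
# Minimal filebeat.yml sketch (paths and hosts are assumptions).
filebeat.inputs:
  - type: log
    paths:
      - /var/log/nginx/access.log

output.elasticsearch:
  hosts: ["localhost:9200"]
```

Enabling the bundled `nginx` module instead of a raw log input would give you parsed fields and ready-made Kibana dashboards.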
Logstash is a server-side data processing pipeline. It collects data from Beats, or events from Kafka, RabbitMQ, and other sources, transforms them according to the format you define, and then sends them to Elasticsearch, an email address, an HTTP endpoint, or many other destinations.
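A minimal Logstash pipeline sketch shows the three stages: receive events from Beats, parse the log line with a grok filter, and forward the result to Elasticsearch. The port, grok pattern, and host are assumptions.

```
# Minimal Logstash pipeline sketch (port, pattern, and hosts are assumptions).
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    # Parse a standard combined-format access log line into named fields.
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```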
The Elastic Stack is not only about search optimization. You can collect any data your application produces and analyze it in Kibana, which can be valuable for your business.