Fluentd collects log events from each node in the cluster and stores them in a centralized location so that administrators can search the logs when troubleshooting cluster issues. The mechanism that fluentd uses to parse log events and send them to Elasticsearch depends on how the events are formatted in each log file.
Fluentd uses one of the following mechanisms, depending on the log format:

- tail plugin: Fluentd uses the tail plugin to read logs and determine the end of a log event. Each log event is sent to Elasticsearch when the next log event is written to the log file. This mechanism is often used when each log event starts with a timestamp and then includes a stack trace.
- grok plugin: Fluentd uses the grok plugin to parse log events using complex expressions. This mechanism is often used to parse log events that have non-uniform formatting.

Each log event sent to Elasticsearch includes the following tags:

| Tag | Description |
|---|---|
| level | The message level of the log entry. For example, info, warning, or error. |
| class | The Java or C++ process name associated with the log entry. |
| message | The log message. |
| event_time | The time, with millisecond precision, when the log entry was written to the log file. |
| service_name | The name of the service that generated the log entry. |
| @timestamp | The time, with second precision, when fluentd read the message. |
| fqdn | The fully qualified domain name of the node on which the log entry was written. |
| clusterid | The ID of the cluster on which the log entry was written. |
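To illustrate how the tags above combine into a single document, the following sketch builds one log event as it might be indexed in Elasticsearch. Every field value here is invented for illustration and is not taken from an actual cluster.

```python
import json

# Hypothetical example of one indexed log event; all values are made up.
event = {
    "level": "error",
    "class": "com.example.Indexer",           # assumed process name
    "message": "Failed to flush segment",
    "event_time": "2023-04-01 12:00:00.123",  # millisecond precision (write time)
    "service_name": "indexer",
    "@timestamp": "2023-04-01T12:00:01",      # second precision (fluentd read time)
    "fqdn": "node01.example.com",
    "clusterid": "cluster-a",
}

print(json.dumps(event, indent=2))
```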
For more information about Elasticsearch, see the Elasticsearch website.
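The tail and grok mechanisms described earlier might be configured along the lines of the following sketch. The paths, tags, and match patterns are assumptions for illustration, not values taken from this document, and the grok parser requires the fluent-plugin-grok-parser plugin to be installed.

```
# Hypothetical example: tail a log whose events begin with a timestamp
# and may continue across lines (for example, a stack trace).
<source>
  @type tail
  path /var/log/myservice/app.log          # assumed log path
  pos_file /var/log/fluentd/app.log.pos    # assumed position file
  tag myservice.app
  <parse>
    @type multiline
    # A new event starts when a line begins with an ISO-style date.
    format_firstline /^\d{4}-\d{2}-\d{2}/
    format1 /^(?<event_time>[^ ]+ [^ ]+) (?<level>\w+) (?<class>\S+) (?<message>.*)$/
  </parse>
</source>

# Hypothetical example: parse non-uniformly formatted events with a
# grok expression (fluent-plugin-grok-parser).
<source>
  @type tail
  path /var/log/myservice/mixed.log
  pos_file /var/log/fluentd/mixed.log.pos
  tag myservice.mixed
  <parse>
    @type grok
    grok_pattern %{TIMESTAMP_ISO8601:event_time} %{LOGLEVEL:level} %{GREEDYDATA:message}
  </parse>
</source>
```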