commit 8ecef6ea2a by zeus, 3 years ago, on branch master

    man poc

1 file changed (36 lines):
poc-datacollector/modules/ROOT/pages/doc-dummy_service.adoc
@@ -37,6 +37,7 @@
</parse>
</source>
# define the source which will provide log events
<source> // <1>
@type tail // <2>
@@ -60,13 +61,14 @@
</parse>
</source>
## match tag=log.* and write to mongo
<match log.*> // <9>
@type copy
copy_mode deep // <10>
<store ignore_error> // <11>
-@type mongo
+@type mongo // <12>
-connection_string mongodb://mongo.poc-datacollector_datacollector-net:27017/fluentdb // <12>
+connection_string mongodb://mongo.poc-datacollector_datacollector-net:27017/fluentdb // <13>
#database fluentdb
collection test
@@ -74,8 +76,8 @@
#port 27017
num_retries 60
-capped // <13>
-capped_size 100m // <14>
+capped // <14>
+capped_size 100m
<inject>
# key name of timestamp
time_key time
@@ -105,20 +107,20 @@
</match>
----
<1> *<source>* directives determine the input sources. The source submits events to the Fluentd routing engine. An event consists of three entities: *tag*, *time* and *record*.
-<2> *localhost* xref:doc-dummy_service.adoc#tailmongo[see image]
-<3> *localhost* inside of a container will resolve to the network stack of this container
-<4> *localhost* inside of a container will resolve to the network stack of this container
-<5> *localhost* inside of a container will resolve to the network stack of this container
-<6> *localhost* inside of a container will resolve to the network stack of this container
-<7> *localhost* inside of a container will resolve to the network stack of this container
-<8> *localhost* inside of a container will resolve to the network stack of this container
+<2> The *tail Input plugin* allows Fluentd to read events from the tail of text files. Its behavior is similar to the tail -F command. xref:doc-dummy_service.adoc#tailmongo[see image: type tail]
+<3> The *path(s) to read*. Multiple paths can be specified, separated by commas ','. The '*' wildcard can be used to add/remove watched files dynamically; Fluentd refreshes the list of watched files at the interval of refresh_interval.
+<4> *pos_file* records the last read position. A single pos_file handles the positions of all watched files, so there is no need for multiple pos_file parameters per source. Don't share a *pos_file* between *tail* configurations: it causes unexpected behavior, e.g. corrupted pos_file content.
+<5> The *tag* of the event.
+<6> Enables the additional inotify-based watcher. *Either enable_watch_timer or enable_stat_watcher must be true.*
+<7> Enables the additional watch timer. *Either enable_watch_timer or enable_stat_watcher must be true.*
+<8> The *none parser plugin* parses the line as-is into a single field. This format defers the parsing/structuring of the data.
<9> *<match>* directives determine the output destinations. The match directive looks for events with matching tags and processes them. The most common use of the match directive is to *output events to other systems*.
-<10> *localhost* inside of a container will resolve to the network stack of this container
-<11> *localhost* inside of a container will resolve to the network stack of this container
-<12> *localhost* inside of a container will resolve to the network stack of this container
-<13> *localhost* inside of a container will resolve to the network stack of this container
-<14> *localhost* inside of a container will resolve to the network stack of this container
-<15> *localhost* inside of a container will resolve to the network stack of this container
+<10> Chooses how events are passed to the <store> plugins. *deep* passes deep-copied events to each store plugin. This mode is useful when you modify nested fields after out_copy, e.g. Docker Swarm/Kubernetes related fields.
+<11> Specifies the *storage destinations*. The format is the same as the <match> directive. At least one <store> section is required.
+<12> The *mongo Output plugin* writes records into MongoDB, the document-oriented database system.
+<13> The *MongoDB connection string* (URI).
+<14> This option enables the *capped collection*. This is always recommended. https://docs.mongodb.com/manual/core/capped-collections/[Capped collections^] are fixed-size collections that support high-throughput operations that insert and retrieve documents based on insertion order. Capped collections work in a way similar to circular buffers: once a collection fills its allocated space, it makes room for new documents by overwriting the oldest documents in the collection.
+<15> *Flushing* parameters:
[#tailmongo]
.type tail
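For orientation, the fragments above fit together as a single tail-to-MongoDB pipeline. The listing below is a minimal, consolidated sketch assembled from the snippets in this diff; the log path, pos_file path and tag value are placeholders (they do not appear in the hunks above) and should be adapted to the actual service.

[source]
----
# define the source which will provide log events
<source>
  # read events from the tail of a text file, like `tail -F`
  @type tail
  # placeholder: file(s) to watch
  path /var/log/dummy_service/*.log
  # placeholder: where Fluentd records the last read position
  pos_file /fluentd/log/dummy_service.pos
  # placeholder tag: must match the <match log.*> pattern below
  tag log.dummy
  <parse>
    # keep each line as-is in a single field
    @type none
  </parse>
</source>

## match tag=log.* and write to mongo
<match log.*>
  @type copy
  # pass deep-copied events to each <store>
  copy_mode deep
  <store ignore_error>
    @type mongo
    connection_string mongodb://mongo.poc-datacollector_datacollector-net:27017/fluentdb
    collection test
    num_retries 60
    # fixed-size collection that overwrites the oldest documents when full
    capped
    capped_size 100m
    <inject>
      # key name of timestamp
      time_key time
    </inject>
  </store>
</match>
----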
