Please bear with the noobness of this thread. My objective here is to send CSV from Filebeat to Logstash > Elasticsearch > Kibana.

Here is my filebeat.yml (excerpt):

```
- input_type: log
```

Here is my Logstash input config (excerpt):

```
#tcp domono stream via 5044
```

Here is the Filebeat log I am getting:

```
INFO Setup Beat: filebeat Version: 5.3.0
INFO Activated logstash as output plugin.
INFO Registry file set to: /var/lib/filebeat/registry
INFO Loading registrar data from /var/lib/filebeat/registry
INFO Prospector with previous states loaded: 0
INFO Starting spooler: spool_size: 2048 idle_timeout: 5s
INFO Loading and starting Prospectors completed.
INFO Harvester started for file: /var/log/tiveyes/visitors.csv
ERR Failed to publish events caused by: read tcp :45240->:5044: i/o timeout
INFO Error publishing events (retrying): read tcp :45240->:5044: i/o timeout
ERR Failed to publish events caused by: read tcp :45242->:5044: i/o timeout
INFO Error publishing events (retrying): read tcp :45242->:5044: i/o timeout
ERR Failed to publish events caused by: read tcp :45244->:5044: i/o timeout
INFO Non-zero metrics in the last 30s: _files=1 =1 =1 _count.PublishEvents=3 _errors=2 _bytes=1022 _but_not_acked_events=32 _events=16
INFO Error publishing events (retrying): read tcp :45244->:5044: i/o timeout
```

One reply suggested the following steps for forwarding logs into Amazon OpenSearch Service:

1. Install Filebeat on your source Amazon Elastic Compute Cloud (Amazon EC2) instance.
2. Make sure that you correctly install and configure your YAML config file.
3. Set up your security ports, such as port 443, to forward logs to OpenSearch Service.
4. Update your Filebeat, Logstash, and OpenSearch Service configurations.

Another reply noted an advantage of using Filebeat even on the Logstash machine: if your Logstash instance is down, you won't lose any logs, because Filebeat will resend the events, whereas with the file input you can lose events in some cases.

Follow-up from the original poster: I've tried adding `bulk_max_size`; I'm still getting those errors.
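Since both config excerpts in the thread are truncated, here is a minimal filebeat.yml sketch in Filebeat 5.x syntax that matches the log output above. The CSV path comes from the harvester log line; the Logstash host and the `bulk_max_size` value are assumptions for illustration, not taken from the thread.

```yaml
# Filebeat 5.x prospector: tail the CSV file the harvester log mentions
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/tiveyes/visitors.csv

# Ship events to Logstash on port 5044.
# The host below is a placeholder; bulk_max_size is the knob the
# follow-up post refers to (smaller batches can ease ack timeouts).
output.logstash:
  hosts: ["logstash.example.internal:5044"]
  bulk_max_size: 1024
```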
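On the Logstash side, here is a sketch of a matching pipeline, assuming a `beats` input on 5044 rather than a plain `tcp` input. The CSV column names and the OpenSearch Service endpoint are hypothetical placeholders.

```
input {
  # Filebeat speaks the beats protocol on 5044, not plain tcp
  beats {
    port => 5044
  }
}

filter {
  # Split each CSV line into fields; these column names are made up,
  # replace them with the real header of visitors.csv
  csv {
    separator => ","
    columns => ["timestamp", "visitor", "page"]
  }
}

output {
  # OpenSearch Service listens on HTTPS port 443; the domain endpoint
  # below is a placeholder
  elasticsearch {
    hosts => ["https://search-mydomain.us-east-1.es.amazonaws.com:443"]
    index => "visitors-%{+YYYY.MM.dd}"
  }
}
```

One detail worth checking: the `#tcp domono stream via 5044` comment in the original config hints that a plain `tcp` input may have been used. That alone can produce exactly these `i/o timeout` retries, because Filebeat waits for beats-protocol acknowledgements that a `tcp` input never sends; switching the input to `beats` is usually the first thing to try.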