I'm running InfluxDB version 1.4.2 on Ubuntu 18.04. Everything was going well until I started injecting a heavy data stream via NiFi, and I could see odd things happening in the InfluxDB REPL. After that I couldn't log into the REPL anymore. I found some reports online about port 8086 and "connection refused", but they didn't help me. Can you help me solve this problem? Here is some more information: ![20|690x217](upload://zSY1gETV4hZvbRLTZWXq4Ckr4FF.png)
Based on your screenshot, it looks like InfluxDB failed under the heavy load and was killed by an OOM; it then restarted and is now processing and reopening your data. Until that finishes, you won't be able to connect to the influx CLI.
I inadvertently reproduced my problem with InfluxDB 1.7.5 on a VM (Ubuntu 18.04), using Apache NiFi to inject the data into InfluxDB. This is the message I got from the VM: ![177|690x57](upload://iccjC6NcVHQLF4Lo7fj7jF5voht.png)
Sorry for the delay. That first message with the OOM is the issue as far as I can see: InfluxDB can't cope with the amount of data you're trying to write and runs out of memory. When this happens the service restarts, but you will need to wait for it to go through the whole start-up process, which can take a while. Once InfluxDB has caught up, you should be able to connect to the CLI as normal. However, if it carries on trying to insert the same data once back online, it will only OOM again.
I think the NiFi errors are related to the OOM. If the InfluxDB service is still starting up, you won't be able to send data to it.
Two things to try:

1) Switch the index version to TSI1 — this should alleviate some of the memory issues (if you're using an SSD).
2) Break the data you want to import down into smaller batches. There is an InfluxDB benchmarking tool that could help, [InfluxDB INCH](https://github.com/influxdata/inch) — it might be worth testing inserts with it to get a good idea of the maximum write rate your InfluxDB instance can handle.
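For reference, switching to TSI1 on InfluxDB 1.x is a setting in the `[data]` section of the config file (the path below is the usual Debian/Ubuntu location; check where yours actually lives):

```toml
# /etc/influxdb/influxdb.conf
[data]
  # Use the disk-based TSI index instead of the default in-memory index
  index-version = "tsi1"
```

As far as I know, existing shards keep their old in-memory index after this change; you'd need to stop the service and run `influx_inspect buildtsi` against your data and WAL directories to convert them, so try it on a copy of the data first.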
The TICK stack works in production, but you need to scale it to your needs. You'll need to size your hardware to suit the amount of data you're sending, and think carefully about what you keep as a tag versus a field.
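To illustrate the tag-vs-field choice and the batching idea from the previous post, here is a minimal Python sketch that formats points as InfluxDB line protocol and splits them into small batches. The measurement name, tag/field names, and batch size are made-up examples, not anything from this thread; in a real setup each batch payload would be POSTed to the `/write` endpoint.

```python
def to_line(measurement, tags, fields, ts_ns):
    """Format one point as InfluxDB line protocol:
    measurement,tag=... field=... timestamp"""
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

def batched(lines, size):
    """Yield successive chunks of at most `size` lines."""
    for i in range(0, len(lines), size):
        yield lines[i:i + size]

# Hypothetical data: low-cardinality metadata goes in tags,
# the raw readings go in fields.
points = [
    to_line("sensors",
            tags={"host": f"vm{n % 3}"},
            fields={"value": n * 0.5},
            ts_ns=1_000_000_000 * n)
    for n in range(10)
]

for batch in batched(points, 4):
    payload = "\n".join(batch)  # this string is what you'd POST to /write
    print(f"batch of {len(batch)} points")
```

Keeping high-cardinality values (unique IDs, raw readings) out of tags matters here, because every distinct tag combination creates a new series and drives up index memory use.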
I think for the recommended sizing at large volumes (> 750,000 writes per second) you're looking at 8 cores and 32 GB of RAM. Beyond that, you're probably looking at an enterprise solution with multiple nodes.