Hi Andrew,
The Kafka broker is hosted on a single node, and this particular topic has
just 1 partition. The ConsumeKafka processor is scheduled to run only on
the primary node with 1 concurrent task. Everything works well with about
50 consumers consuming from 50 topics of the same nature, but when we scale
up to over 100-200 consumers, all these errors appear. Because of the back
pressure, a lot of consumers have to wait, so to get around that I set an
additional property of timeout.ms = 70000 on each processor, but that did
not help.

Quite strangely, the consumer also sometimes starts re-consuming messages
that it has already consumed in the past, so I think there is also
something wrong with the commit configuration. I suspect there is some
additional property I need to set on the broker side to make this
scalable, but I'm unable to find it. Kindly let me know if you have faced
a similar situation.
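
In case it helps, these are the standard Kafka consumer properties I have been looking at as likely culprits for the rebalances and re-delivery (the property names are from the stock Kafka consumer config; the values below are only illustrative, not what we run in production):

```properties
# How long the group coordinator waits without a heartbeat before it
# considers the consumer dead and triggers a rebalance.
session.timeout.ms=10000

# Maximum allowed time between poll() calls; if downstream processing
# (or NiFi back pressure) delays the next poll past this, the consumer
# is evicted and its uncommitted messages get redelivered.
max.poll.interval.ms=300000

# Fewer records per poll() means each batch finishes (and its offsets
# get committed) sooner, reducing duplicates after a rebalance.
max.poll.records=100
```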

On Thu, 14 Jun 2018, 11:10 p.m. Andrew Psaltis, <[EMAIL PROTECTED]>
wrote: