>> single requests.
It is definitely feasible. My concern with the approach is about the way our
current API works. Let me try to illustrate it with an example.
When the admin client sends a CreateTopicsRequest to the controller, the
request goes to the purgatory and waits until all the topics are created or
the timeout specified in the request is reached. If the timeout is reached, a
RequestTimeoutException is returned to the client. This is used to fail the
future of the caller. In conjunction, the admin client fails any pending
request with a TimeoutException after the request timeout is reached (30s by
default). In the former case, the caller will likely retry. In the latter
case, the admin client will automatically retry. In both cases, the broker
will respond with a TopicExistsException because the original operation is
still pending. Having a huge backlog of pending operations will amplify this
weird behavior: clients will tend to get TopicExistsException errors when
they create topics for the first time, which is really confusing.
I think that our current API is not well suited for this. An asynchronous
API, with one call to create/delete and another one to query the status of
the operation, would be better suited. We can definitely evolve our API
towards this but we must figure out a compatibility story for existing
clients.
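For illustration only, here is a minimal sketch of what such an asynchronous API could look like (all names are hypothetical, not an existing Kafka interface): one call submits the operation and returns an id immediately, another polls its status, so nothing sits in the purgatory waiting for completion.

```python
import itertools

# Hypothetical sketch: submitting a creation returns an operation id
# right away; the client polls the status instead of blocking until
# the controller has finished or the request times out.
class TopicOperationRegistry:
    def __init__(self):
        self._ids = itertools.count(1)
        self._status = {}  # operation id -> "PENDING" | "COMPLETED"

    def submit_create(self, topic):
        op_id = next(self._ids)
        self._status[op_id] = "PENDING"  # queued for the controller
        return op_id

    def complete(self, op_id):
        # Would be invoked once the controller has actually created the topic.
        self._status[op_id] = "COMPLETED"

    def status(self, op_id):
        return self._status.get(op_id, "UNKNOWN")

registry = TopicOperationRegistry()
op = registry.submit_create("my-topic")
print(registry.status(op))   # PENDING
registry.complete(op)
print(registry.status(op))   # COMPLETED
```

A retried submit under this model would return the same pending operation's status instead of surfacing a TopicExistsException to a client that never saw its topic created.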
Another aspect is the fairness among the clients. Imagine a case where one
client continuously creates and deletes topics in a tight loop. This would
flood the queue and delay the creations and the deletions of the other
clients. Throttling at admission time mitigates this directly. Throttling at
execution time would need to take this into account to ensure fairness among
the clients. It is a little harder to do this in the controller, as the
controller is completely agnostic of the principals and the client ids.
These reasons made me lean towards the current proposal. Does that make
sense?
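To make the fairness point concrete, here is a sketch (hypothetical code, not Kafka's implementation) of admission-time throttling with one token bucket per principal, so a client looping on create/delete exhausts only its own budget and cannot starve the others:

```python
class TokenBucket:
    """Simple token bucket: refills at `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = 0.0

    def try_admit(self, cost, now):
        # Refill based on elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

buckets = {}  # principal -> its own TokenBucket

def admit(principal, partitions, now):
    bucket = buckets.setdefault(principal, TokenBucket(rate=5, capacity=500))
    return bucket.try_admit(partitions, now)

# The noisy client drains only its own bucket ...
for _ in range(10):
    admit("noisy", 100, now=0.0)
# ... while a well-behaved client is still admitted.
print(admit("noisy", 100, now=0.0))   # False: its bucket is empty
print(admit("quiet", 100, now=0.0))   # True: separate bucket
```

Because admission is checked per principal before anything reaches the queue, the controller itself never needs to know who submitted which operation.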
On Wed, May 13, 2020 at 10:05 AM David Jacot <[EMAIL PROTECTED]> wrote:
> Hi Jun,
> Coming back to your question regarding the differences between the token
> bucket algorithm and our current quota mechanism. I did some tests and
> they confirmed my first intuition that our current mechanism does not work
> well with a bursty workload. Let me try to illustrate the difference with an
> example. One important aspect to keep in mind is that we don't want to
> reject requests when the quota is exhausted.
> Let's say that we want to guarantee an average rate R=5 partitions/sec while
> allowing a burst of B=500 partitions.
> With our current mechanism, this translates to the following parameters:
> - Quota = 5
> - Samples = B / R + 1 = 101 (to allow the burst)
> - Time Window = 1s (the default)
> Now, let's say that a client wants to create 7 topics with 80 partitions
> each at
> time T. It brings the rate to 5.6 (7 * 80 / 100), which is above the
> quota, so
> any new request is rejected until the rate gets back below R. In theory, the
> client must wait 12 secs ((5.6 - 5) / 5 * 100) to get it back to R. In
> practice, due
> to the sparse samples (one sample worth 560), the rate won't decrease until
> that sample is dropped, which only happens after 101 secs. It gets worse if
> the burst is increased.
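The sampled-rate behavior described above can be simulated with a simplified model of a windowed rate metric (an illustration, not Kafka's actual Rate implementation): the big sample keeps the observed rate at 5.6 for the full 101 seconds it is retained.

```python
# Simplified model of a windowed rate: per-second samples are retained
# for `NUM_SAMPLES` seconds; the observed rate is the sum of the
# retained samples divided by the window span.
NUM_SAMPLES = 101              # B / R + 1 with B=500, R=5
WINDOW_SPAN = NUM_SAMPLES - 1  # 100 seconds

samples = {0: 7 * 80}  # second -> partitions created (560 at t=0)

def observed_rate(now):
    retained = sum(v for t, v in samples.items() if now - t < NUM_SAMPLES)
    return retained / WINDOW_SPAN

print(observed_rate(0))     # 5.6 - above the quota of 5
print(observed_rate(100))   # 5.6 - the big sample is still retained
print(observed_rate(101))   # 0.0 - only drops once the sample ages out
```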
> With the token bucket algorithm, this translates to the following parameters:
> - Rate = 5
> - Tokens = 500
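Given these parameters, the effect of the same 560-partition burst can be checked with a quick sketch:

```python
RATE = 5        # tokens replenished per second
CAPACITY = 500  # burst allowance

tokens = CAPACITY
tokens -= 7 * 80          # the burst of 560 partitions
print(tokens)             # -60: the bucket goes negative instead of rejecting

# The client is throttled until the balance is back to zero:
wait = -tokens / RATE
print(wait)               # 12.0 seconds
```

Unlike the sampled-window mechanism, the wait here is exactly the theoretical 12 seconds, not 101.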
> The same request decreases the number of available tokens to -60 which is