After some further research, I think I've answered at least part of this myself.

KIP-74 [1] states the following about ordering of partitions in the
fetch request:
"The solution is to reorder partitions in fetch request in round-robin
fashion to continue fetching from first empty partition received or to
perform random shuffle of partitions before each request."

This explains the delay in my listing until data is read from the
topics with only one partition. Initially both of these topics are
fetched last, and they then move forward in every subsequent fetch
request until at some point they are among the first 50 (assuming the
default settings for max.partition.fetch.bytes and fetch.max.bytes,
and that all partitions contain enough data to satisfy
max.partition.fetch.bytes) and receive data. In my test scenario this
takes 24 fetch requests. To illustrate this better, we can set
fetch.max.bytes = max.partition.fetch.bytes, which causes every fetch
request to return data from only one partition.
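As a side note on the "first 50": with the default values
(fetch.max.bytes = 52428800, max.partition.fetch.bytes = 1048576) a
full response covers 52428800 / 1048576 = 50 partitions. The
reordering itself can be sketched roughly like this (an illustrative
Python sketch of the KIP-74 behavior, not the actual Kafka client
code):

```python
# Illustrative sketch of the KIP-74 reordering, NOT the actual Kafka
# client implementation: partitions that returned data in the last
# response are moved to the end of the fetch order, so previously
# starved partitions drift toward the front on each request.

def reorder(partitions, fetched_last_round):
    """Return a new fetch order with recently fetched partitions at the end."""
    fetched = set(fetched_last_round)
    starved = [p for p in partitions if p not in fetched]
    served = [p for p in partitions if p in fetched]
    return starved + served

# With fetch.max.bytes == max.partition.fetch.bytes each response
# contains data from a single partition, so the list rotates by one
# slot per request, matching the log output below:
order = ["aaa-33", "aaa-34", "zzz-90", "mmm-0", "000-0"]
order = reorder(order, ["aaa-33"])
# -> ["aaa-34", "zzz-90", "mmm-0", "000-0", "aaa-33"]
```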

The consumer logs the order at debug level, so we can check progress
in the output:

2019-01-11 14:10:52 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[aaa-33, aaa-34, aaa-35, ... , zzz-90, zzz-24, mmm-0, 000-0] to broker
localhost:9092 (id: 0 rack: null)
2019-01-11 14:10:52 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[aaa-34, aaa-35, aaa-36, ... , zzz-24, mmm-0, 000-0, aaa-33] to broker
localhost:9092 (id: 0 rack: null)
2019-01-11 14:10:53 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[aaa-35, aaa-36, aaa-37, ... , mmm-0, 000-0, aaa-33, aaa-34] to broker
localhost:9092 (id: 0 rack: null)
...
2019-01-11 14:12:58 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[zzz-90, zzz-24, mmm-0, 000-0, ... , zzz-88, zzz-22, zzz-89, zzz-23]
to broker localhost:9092 (id: 0 rack: null)
2019-01-11 14:12:58 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[zzz-24, mmm-0, 000-0, aaa-33, ... , zzz-22, zzz-89, zzz-23, zzz-90]
to broker localhost:9092 (id: 0 rack: null)
2019-01-11 14:12:58 DEBUG Fetcher:195 - [Consumer clientId=consumer-1,
groupId=cg1547212251445] Sending READ_UNCOMMITTED fetch for partitions
[mmm-0, 000-0, aaa-33, aaa-34, ... , zzz-89, zzz-23, zzz-90, zzz-24]
to broker localhost:9092 (id: 0 rack: null)
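For anyone wanting to reproduce this listing: the order is logged by
the consumer's Fetcher class at DEBUG level, which can be enabled with
a log4j setting along these lines (logger name taken from the Kafka
clients package, so adjust if your version differs):

log4j.logger.org.apache.kafka.clients.consumer.internals.Fetcher=DEBUG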

So I'll withdraw my suggestions around code improvements, as this is
already handled well. The question of best practice for handling
something like this remains, though. If anybody has any suggestions
I'd love to hear them!

Best regards,
Sönke

[1] https://cwiki.apache.org/confluence/display/KAFKA/KIP-74%3A+Add+Fetch+Response+Size+Limit+in+Bytes


Sönke Liebau
Partner
Tel. +49 179 7940878
OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany