Subject: [DISCUSS] KIP-317 - Add end-to-end data encryption functionality to Apache Kafka


Tom, good point. I've done exactly that -- hashing record keys -- but it's
unclear to me what should happen when the hashing key must be rotated. In my
case the (external) solution involved rainbow tables, versioned keys, and
custom materializers that were aware of the older keys for each record.
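
Concretely, the versioned hashing looked something like the sketch below.
This is just an illustration -- the HMAC-SHA256 choice and the class/field
names are mine, not anything from the KIP:

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Hashes a record key under a versioned secret, so downstream
    // materializers can tell which secret produced a given hashed key.
    public class VersionedKeyHasher {
        private final byte[] secret;  // the current hashing secret
        private final int version;    // version number of that secret

        public VersionedKeyHasher(byte[] secret, int version) {
            this.secret = secret;
            this.version = version;
        }

        // Produces e.g. "v2:<base64 of HMAC-SHA256(recordKey)>".
        public String hash(String recordKey) {
            try {
                Mac mac = Mac.getInstance("HmacSHA256");
                mac.init(new SecretKeySpec(secret, "HmacSHA256"));
                byte[] digest = mac.doFinal(recordKey.getBytes(StandardCharsets.UTF_8));
                return "v" + version + ":" + Base64.getEncoder().encodeToString(digest);
            } catch (Exception e) {
                throw new IllegalStateException("key hashing failed", e);
            }
        }
    }

Prefixing the version makes rotation detectable, but it doesn't by itself
tell a materializer how to find the record under its old hash -- hence the
rest of the machinery below.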

In particular, I had a pipeline that would re-key records and re-ingest
them, while opportunistically overwriting records materialized with the old
key.
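
Conceptually the re-keying pass was something like this sketch; the topic
name, group id, and the lookup helpers are placeholders standing in for the
external pieces mentioned above:

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class RekeyPipeline {
        public static void main(String[] args) {
            Properties consumerProps = new Properties();
            consumerProps.put("bootstrap.servers", "localhost:9092");
            consumerProps.put("group.id", "rekey-pipeline");
            consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

            Properties producerProps = new Properties();
            producerProps.put("bootstrap.servers", "localhost:9092");
            producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");

            VersionedKeyHasher newHasher = new VersionedKeyHasher(loadSecret(2), 2);

            try (KafkaConsumer<String, byte[]> consumer = new KafkaConsumer<>(consumerProps);
                 KafkaProducer<String, byte[]> producer = new KafkaProducer<>(producerProps)) {
                consumer.subscribe(Collections.singletonList("events"));
                while (true) {
                    for (ConsumerRecord<String, byte[]> rec : consumer.poll(Duration.ofSeconds(1))) {
                        // Recover the plaintext key (this is where the lookup tables
                        // came in), re-hash it under the new secret, and re-ingest so
                        // the new copy overwrites what was materialized under the old key.
                        String plaintextKey = lookupPlaintextKey(rec.key());
                        producer.send(new ProducerRecord<>(
                            "events", newHasher.hash(plaintextKey), rec.value()));
                    }
                }
            }
        }

        // Placeholders for the external key-management pieces mentioned above.
        private static byte[] loadSecret(int version) {
            throw new UnsupportedOperationException("not shown");
        }
        private static String lookupPlaintextKey(String hashedKey) {
            throw new UnsupportedOperationException("not shown");
        }
    }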

For a native solution, I think we'd need to carry around the old versions of
each record key, perhaps as metadata. Then brokers and materializers could
compact records based on _any_ overlapping key, maybe? Not sure.
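
To make that concrete, maybe something like the following, where the header
name is purely made up (nothing in Kafka defines it today) and a
header-aware compactor would have to learn to match on it:

    import org.apache.kafka.clients.producer.ProducerRecord;
    import java.nio.charset.StandardCharsets;

    class OldKeyMetadata {
        // Header name is made up here; nothing in Kafka defines it today.
        static final String PREVIOUS_KEY_HEADER = "previous.key.hashes";

        // Attach the record's previous key hash as a header, so a header-aware
        // compactor or materializer could treat this record as superseding any
        // earlier record whose key matches either the new or the old hash.
        static ProducerRecord<String, byte[]> withPreviousKey(
                String topic, String newKeyHash, String oldKeyHash, byte[] value) {
            ProducerRecord<String, byte[]> record =
                new ProducerRecord<>(topic, newKeyHash, value);
            record.headers().add(PREVIOUS_KEY_HEADER,
                oldKeyHash.getBytes(StandardCharsets.UTF_8));
            return record;
        }
    }

That would keep the broker-side change fairly small (compaction just has to
consider one extra header), at the cost of every record carrying its old
hashes around.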

Ryanne

On Thu, May 7, 2020, 12:05 PM Tom Bentley <[EMAIL PROTECTED]> wrote: