Parameter Configuration

When you create a Kafka instance, the system parameter template is applied by default. You can also customize the configuration in the parameter template.

Notes
  1. After upgrading the Kafka version, some parameters may be removed, deprecated, or changed. Refer to the corresponding change notes on the official Kafka website.
  2. To ensure the stable operation of the Kafka environment, certain parameters and parameter prefixes cannot be configured (for example: advertised., authorizer., broker., controller, cruise.control.metrics.reporter.bootstrap., cruise.control.metrics.topic, host.name, inter.broker.listener.name, listener., listeners., log.dir, password., port, process.roles, sasl., security., servers, node.id, ssl., super.user, zookeeper.clientCnxnSocket, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl).


Procedure

CLI
# Update the configuration of the instance
kubectl -n <namespace> patch rdskafka <name> --type=merge --patch='{"spec": {"config": {"auto.create.topics.enable":"true"}}}'
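
To confirm the change took effect, you can read back the merged configuration; a minimal sketch, assuming the same rdskafka resource kind and <namespace>/<name> placeholders used above:

# Read back the instance configuration to verify the patch was applied
kubectl -n <namespace> get rdskafka <name> -o jsonpath='{.spec.config}'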

Supported Parameters

Parameter | Type | Description | Default | Valid values
auto.create.topics.enable | boolean | Enable auto creation of topics on the server | true | -
auto.leader.rebalance.enable | boolean | Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds. If the leader imbalance exceeds leader.imbalance.per.broker.percentage, a leader rebalance to the preferred leader for partitions is triggered. | true | -
background.threads | int | The number of threads to use for various background processing tasks | 10 | [1,...]
compression.type | string | Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed', which is equivalent to no compression, and 'producer', which means retain the original compression codec set by the producer. | producer | -
delete.topic.enable | boolean | Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off. | true | -
leader.imbalance.check.interval.seconds | long | The frequency with which the partition rebalance check is triggered by the controller | 300 | -
leader.imbalance.per.broker.percentage | int | The ratio of leader imbalance allowed per broker. The controller triggers a leader rebalance if the imbalance goes above this value per broker. The value is specified in percentage. | 10 | -
log.flush.interval.messages | long | The number of messages accumulated on a log partition before messages are flushed to disk | 9223372036854775807 | [1,...]
log.flush.offset.checkpoint.interval.ms | int | The frequency with which we update the persistent record of the last flush, which acts as the log recovery point | 60000 | [0,...]
log.flush.scheduler.interval.ms | long | The frequency in ms that the log flusher checks whether any log needs to be flushed to disk | 9223372036854775807 | -
log.flush.start.offset.checkpoint.interval.ms | int | The frequency with which we update the persistent record of the log start offset | 60000 | [0,...]
log.retention.bytes | long | The maximum size of the log before deleting it | -1 | -
log.retention.hours | int | The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property | 168 | -
log.roll.hours | int | The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property | 168 | [1,...]
log.roll.jitter.hours | int | The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property | 0 | [0,...]
log.segment.bytes | int | The maximum size of a single log file | 1073741824 | [14,...]
log.segment.delete.delay.ms | long | The amount of time to wait before deleting a file from the filesystem | 60000 | [0,...]
message.max.bytes | int | The largest record batch size allowed by Kafka (after compression, if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches, and this limit only applies to a single record in that case. This can be set per topic with the topic-level max.message.bytes config. | 1048588 | [0,...]
min.insync.replicas | int | When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer raises an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This ensures that the producer raises an exception if a majority of replicas do not receive a write (see the example after this table). | 1 | [1,...]
num.io.threads | int | The number of threads that the server uses for processing requests, which may include disk I/O | 8 | [1,...]
num.network.threads | int | The number of threads that the server uses for receiving requests from the network and sending responses to the network | 3 | [1,...]
num.recovery.threads.per.data.dir | int | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown | 1 | [1,...]
num.replica.fetchers | int | Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. | 1 | -
offset.metadata.max.bytes | int | The maximum size for a metadata entry associated with an offset commit | 4096 | -
offsets.commit.required.acks | short | The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. | -1 | -
offsets.commit.timeout.ms | int | Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. | 5000 | [1,...]
offsets.load.buffer.size | int | Batch size for reading from the offsets segments when loading offsets into the cache (soft limit; overridden if records are too large) | 5242880 | [1,...]
offsets.retention.check.interval.ms | long | Frequency at which to check for stale offsets | 600000 | [1,...]
offsets.retention.minutes | int | After a consumer group loses all its consumers (i.e. becomes empty), its offsets are kept for this retention period before being discarded. For standalone consumers (using manual assignment), offsets expire after the time of the last commit plus this retention period. | 10080 | [1,...]
offsets.topic.compression.codec | int | Compression codec for the offsets topic; compression may be used to achieve "atomic" commits | 0 | -
offsets.topic.num.partitions | int | The number of partitions for the offset commit topic (should not change after deployment) | 50 | [1,...]
offsets.topic.replication.factor | short | The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. | 3 | [1,...]
offsets.topic.segment.bytes | int | The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads | 104857600 | [1,...]
queued.max.requests | int | The number of queued requests allowed for the data plane before blocking the network threads | 500 | [1,...]
quota.consumer.default | long | DEPRECATED: Used only when dynamic default quotas are not configured for <user, client-id>, <user> or <client-id> in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second. | 9223372036854775807 | [1,...]
quota.producer.default | long | DEPRECATED: Used only when dynamic default quotas are not configured for <user, client-id>, <user> or <client-id> in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second. | 9223372036854775807 | [1,...]
replica.fetch.min.bytes | int | Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms. | 1 | -
replica.fetch.wait.max.ms | int | The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics. | 500 | -
replica.high.watermark.checkpoint.interval.ms | long | The frequency with which the high watermark is saved out to disk | 5000 | -
replica.lag.time.max.ms | long | If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR | 30000 | -
replica.socket.receive.buffer.bytes | int | The socket receive buffer for network requests | 65536 | -
replica.socket.timeout.ms | int | The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. | 30000 | -
request.timeout.ms | int | Controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary, or fail the request if retries are exhausted. | 30000 | -
socket.receive.buffer.bytes | int | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. | 102400 | -
socket.request.max.bytes | int | The maximum number of bytes in a socket request | 104857600 | [1,...]
socket.send.buffer.bytes | int | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. | 102400 | -
transaction.max.timeout.ms | int | The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, the broker returns an error in InitProducerIdRequest. This prevents a client from requesting too large a timeout, which can stall consumers reading from topics included in the transaction. | 900000 | [1,...]
transaction.state.log.load.buffer.size | int | Batch size for reading from the transaction log segments when loading producer IDs and transactions into the cache (soft limit; overridden if records are too large) | 5242880 | [1,...]
transaction.state.log.min.isr | int | Overridden min.insync.replicas config for the transaction topic | 2 | [1,...]
transaction.state.log.num.partitions | int | The number of partitions for the transaction topic (should not change after deployment) | 50 | [1,...]
transaction.state.log.replication.factor | short | The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. | 3 | [1,...]
transaction.state.log.segment.bytes | int | The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads | 104857600 | [1,...]
transactional.id.expiration.ms | int | The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional ID. This setting also influences producer ID expiration: producer IDs are expired once this time has elapsed after the last write with the given producer ID. Note that producer IDs may expire sooner if the last write from the producer ID is deleted due to the topic's retention settings. | 604800000 | [1,...]
unclean.leader.election.enable | boolean | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss | false | -
zookeeper.max.in.flight.requests | int | The maximum number of unacknowledged requests the client will send to Zookeeper before blocking | 10 | [1,...]
zookeeper.session.timeout.ms | int | Zookeeper session timeout | 18000 | -
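
As an illustration of the min.insync.replicas durability scenario described in the table, the following sketch applies the relevant broker settings through the same patch mechanism shown in the Procedure section. It assumes the <namespace>/<name> placeholders from above; note that a topic's replication factor is set at topic creation, not through this config.

# Example: enforce stronger durability for producers using acks=all
# (min.insync.replicas=2 with replication factor 3; also keep unclean
# leader election disabled to avoid electing out-of-sync replicas)
kubectl -n <namespace> patch rdskafka <name> --type=merge \
  --patch='{"spec": {"config": {"min.insync.replicas": "2", "unclean.leader.election.enable": "false"}}}'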

For more supported parameters, see the official Kafka documentation.