When a Kafka instance is created, the system parameter template is used by default. You can also customize the configuration within a parameter template, except for the following parameters and parameter prefixes (entries ending with a period denote prefixes):
- advertised.
- authorizer.
- broker.
- controller
- cruise.control.metrics.reporter.bootstrap.servers
- cruise.control.metrics.topic
- host.name
- inter.broker.listener.name
- listener.
- listeners.
- log.dir
- node.id
- password.
- port
- process.roles
- sasl.
- security.
- ssl.
- super.user
- zookeeper.clientCnxnSocket
- zookeeper.connect
- zookeeper.set.acl
- zookeeper.ssl
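The parameters in the table below are standard Kafka broker configurations. As a rough, read-only illustration of what they look like at the Kafka client level, the following minimal sketch uses the Java AdminClient to fetch the effective configuration of one broker and print a few of the entries from the table. The bootstrap address broker-1:9092 and broker id 0 are placeholder assumptions, and a managed instance may only expose these settings through its console or API rather than through direct AdminClient access.

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowBrokerConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder address; replace with the instance's actual connection address.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Broker configs are addressed by broker id; "0" is assumed here.
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                                 .all().get().get(broker);

            // Print a few of the parameters listed in the table below.
            for (String name : new String[] {
                    "auto.create.topics.enable", "log.retention.hours", "message.max.bytes"}) {
                System.out.println(name + " = " + config.get(name).value());
            }
        }
    }
}
```

The same describeConfigs call also reports, for each entry, whether the value comes from a static broker configuration, a dynamic override, or the Kafka default.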
Parameter | Type | Description | Default Value | Valid Values |
---|---|---|---|---|
auto.create.topics.enable | boolean | Enables automatic topic creation on the server | true | |
auto.leader.rebalance.enable | boolean | Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds . If the leader imbalance exceeds leader.imbalance.per.broker.percentage , leader rebalance to the preferred leader for partitions is triggered. | true | |
background.threads | int | The number of threads to use for various background processing tasks | 10 | [1,...] |
compression.type | string | Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. | producer | |
delete.topic.enable | boolean | Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off | true | |
leader.imbalance.check.interval.seconds | long | The frequency with which the partition rebalance check is triggered by the controller | 300 | |
leader.imbalance.per.broker.percentage | int | The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. | 10 | |
log.flush.interval.messages | long | The number of messages accumulated on a log partition before messages are flushed to disk | 9223372036854775807 | [1,...] |
log.flush.offset.checkpoint.interval.ms | int | The frequency with which we update the persistent record of the last flush which acts as the log recovery point | 60000 | [0,...] |
log.flush.scheduler.interval.ms | long | The frequency in ms that the log flusher checks whether any log needs to be flushed to disk | 9223372036854775807 | |
log.flush.start.offset.checkpoint.interval.ms | int | The frequency with which we update the persistent record of log start offset | 60000 | [0,...] |
log.retention.bytes | long | The maximum size of the log before deleting it | -1 | |
log.retention.hours | int | The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property | 168 | |
log.roll.hours | int | The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property | 168 | [1,...] |
log.roll.jitter.hours | int | The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property | 0 | [0,...] |
log.segment.bytes | int | The maximum size of a single log file | 1073741824 | [14,...] |
log.segment.delete.delay.ms | long | The amount of time to wait before deleting a file from the filesystem | 60000 | [0,...] |
message.max.bytes | int | The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic-level max.message.bytes config. | 1048588 | [0,...]
min.insync.replicas | int | When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer raises an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This ensures that the producer raises an exception if a majority of replicas do not receive a write. A sketch of this scenario appears after this table. | 1 | [1,...]
num.io.threads | int | The number of threads that the server uses for processing requests, which may include disk I/O | 8 | [1,...] |
num.network.threads | int | The number of threads that the server uses for receiving requests from the network and sending responses to the network | 3 | [1,...] |
num.recovery.threads.per.data.dir | int | The number of threads per data directory to be used for log recovery at startup and flushing at shutdown | 1 | [1,...] |
num.replica.fetchers | int | Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. | 1 | |
offset.metadata.max.bytes | int | The maximum size for a metadata entry associated with an offset commit | 4096 | |
offsets.commit.required.acks | short | The required acks before the commit can be accepted. In general, the default (-1) should not be overridden | -1 | |
offsets.commit.timeout.ms | int | Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. | 5000 | [1,...] |
offsets.load.buffer.size | int | Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large). | 5242880 | [1,...] |
offsets.retention.check.interval.ms | long | Frequency at which to check for stale offsets | 600000 | [1,...] |
offsets.retention.minutes | int | After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period. | 10080 | [1,...] |
offsets.topic.compression.codec | int | Compression codec for the offsets topic - compression may be used to achieve "atomic" commits | 0 | |
offsets.topic.num.partitions | int | The number of partitions for the offset commit topic (should not change after deployment) | 50 | [1,...] |
offsets.topic.replication.factor | short | The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. | 3 | [1,...] |
offsets.topic.segment.bytes | int | The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads | 104857600 | [1,...] |
queued.max.requests | int | The number of queued requests allowed for the data plane before blocking the network threads | 500 | [1,...]
quota.consumer.default | long | DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second | 9223372036854775807 | [1,...]
quota.producer.default | long | DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second | 9223372036854775807 | [1,...]
replica.fetch.min.bytes | int | Minimum bytes expected for each fetch response. If not enough bytes, wait up to replicaMaxWaitTimeMs | 1 | |
replica.fetch.wait.max.ms | int | The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics | 500 |
replica.high.watermark.checkpoint.interval.ms | long | The frequency with which the high watermark is saved out to disk | 5000 | |
replica.lag.time.max.ms | long | If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR | 30000 |
replica.socket.receive.buffer.bytes | int | The socket receive buffer for network requests | 65536 | |
replica.socket.timeout.ms | int | The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms | 30000 | |
request.timeout.ms | int | The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. | 30000 | |
socket.receive.buffer.bytes | int | The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. | 102400 | |
socket.request.max.bytes | int | The maximum number of bytes in a socket request | 104857600 | [1,...] |
socket.send.buffer.bytes | int | The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. | 102400 | |
transaction.max.timeout.ms | int | The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction. | 900000 | [1,...]
transaction.state.log.load.buffer.size | int | Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large). | 5242880 | [1,...] |
transaction.state.log.min.isr | int | Overridden min.insync.replicas config for the transaction topic. | 2 | [1,...] |
transaction.state.log.num.partitions | int | The number of partitions for the transaction topic (should not change after deployment). | 50 | [1,...] |
transaction.state.log.replication.factor | short | The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. | 3 | [1,...] |
transaction.state.log.segment.bytes | int | The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads | 104857600 | [1,...] |
transactional.id.expiration.ms | int | The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings. | 604800000 | [1,...] |
unclean.leader.election.enable | boolean | Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss | false | |
zookeeper.max.in.flight.requests | int | The maximum number of unacknowledged requests the client will send to Zookeeper before blocking. | 10 | [1,...] |
zookeeper.session.timeout.ms | int | Zookeeper session timeout | 18000 | |
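The min.insync.replicas row above describes a concrete durability recipe: a topic with a replication factor of 3, min.insync.replicas set to 2, and a producer configured with acks=all. The minimal sketch below shows that combination with the Kafka Java clients; the bootstrap address and the topic name "orders" are placeholder assumptions, and on a managed instance the topic-level setting could equally be applied through the service console.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableTopicExample {
    public static void main(String[] args) throws Exception {
        String bootstrap = "broker-1:9092";   // placeholder address
        String topicName = "orders";          // hypothetical topic

        // Create a topic with replication factor 3 and min.insync.replicas=2,
        // the scenario described for min.insync.replicas in the table above.
        Properties adminProps = new Properties();
        adminProps.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        try (AdminClient admin = AdminClient.create(adminProps)) {
            NewTopic topic = new NewTopic(topicName, 3, (short) 3)
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
        }

        // Produce with acks=all so a write is only acknowledged once at least
        // two in-sync replicas (the configured minimum) have it.
        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrap);
        producerProps.put(ProducerConfig.ACKS_CONFIG, "all");
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            // send() fails with NotEnoughReplicas if fewer than 2 replicas are in sync.
            producer.send(new ProducerRecord<>(topicName, "key", "value")).get();
        }
    }
}
```

With this combination, a write is only acknowledged after at least two in-sync replicas have it, and the send fails fast rather than silently losing durability when too many replicas are offline.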
For more parameters, see the official Kafka documentation (https://kafka.apache.org/documentation/).