Create Instance

You can create a Kafka instance to build a high-throughput, low-latency real-time data pipeline that supports business scenarios such as streaming data processing and service decoupling.


Create Kafka Instance

Procedure


Create a Kafka instance via CLI:

cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 3
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    default.replication.factor: "3"
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with available storage class
    class: local-path
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  zookeeper:
    # Currently must match the Kafka broker replica count
    replicas: 3
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    # Keep the storage configuration the same as the Kafka broker
    storage:
      size: 1Gi
      # Replace with available storage class
      class: local-path
      deleteClaim: false
EOF

After creating the instance, you can check the status of the instance using the following command:

$ kubectl get rdskafka -n <namespace> -o=custom-columns=NAME:.metadata.name,VERSION:.spec.version,STATUS:.status.phase,MESSAGE:.status.reason,CreationTimestamp:.metadata.creationTimestamp
NAME         VERSION   STATUS   MESSAGE                                   CreationTimestamp
my-cluster   3.8       Active   <none>                                    2025-03-06T08:46:57Z
test38       3.8       Failed   Pod is unschedulable or is not starting   2025-03-06T08:46:36Z
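Instead of polling, you can block until the instance reports `Active`. This is a minimal sketch, assuming your `kubectl` supports `--for=jsonpath` (v1.23+) and that the instance exposes `.status.phase` as shown above:

```shell
# Wait up to 5 minutes for the RdsKafka instance to reach the Active phase
kubectl -n default wait rdskafka/my-cluster \
  --for=jsonpath='{.status.phase}'=Active \
  --timeout=300s
```

If the instance lands in `Failed` instead, the `MESSAGE` column from the previous command usually points at the cause (for example, an unschedulable Pod).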

The meanings of the output table fields are as follows:

NAME: The instance name.
VERSION: The Kafka version. Currently only these 4 versions are supported: 2.5.0, 2.7.0, 2.8.2, 3.8.
STATUS: The current status of the instance, which may have the following values:
  • Creating: The instance is being created
  • Updating: The instance is being updated
  • Failed: The instance has encountered an unrecoverable error
  • Paused: The instance has been manually paused
  • Restarting: The instance is restarting
  • Active: The instance is ready for use
MESSAGE: The reason for the instance's current status.
CreationTimestamp: The timestamp when the instance was created.
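Once the instance is `Active`, you can run a quick smoke test from inside the cluster. This is a minimal sketch, assuming the plain listener is exposed through a bootstrap Service named `my-cluster-kafka-bootstrap` on port 9092 (check `kubectl get svc` in the instance's namespace for the actual name) and using the public `apache/kafka` image as a throwaway client:

```shell
# List topics through the plain listener using a temporary client pod
kubectl -n default run kafka-client --rm -it --restart=Never \
  --image=apache/kafka:3.8.0 -- \
  /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 --list
```

If the command returns without errors, the brokers are reachable; an empty list is expected on a fresh instance because `auto.create.topics.enable` is set to `"false"` in the manifest above.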

Create Single-Node Kafka Instance

Important Notice

It is recommended to set the Broker node count to 3 when creating an instance. If you create an instance with fewer than 3 Broker nodes, you must lower the replication-related parameters (`default.replication.factor`, `offsets.topic.replication.factor`, `transaction.state.log.replication.factor`, and `transaction.state.log.min.isr`) so that none of them exceeds the Broker count.

Procedure


Create a single-node Kafka instance via CLI:

cat << EOF | kubectl -n default create -f -
apiVersion: middleware.alauda.io/v1
kind: RdsKafka
metadata:
  name: my-cluster
spec:
  entityOperator:
    topicOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    userOperator:
      resources:
        limits:
          cpu: 1
          memory: 2Gi
        requests:
          cpu: 1
          memory: 2Gi
    tlsSidecar:
      resources:
        limits:
          cpu: 200m
          memory: 128Mi
        requests:
          cpu: 200m
          memory: 128Mi
  version: 3.8
  replicas: 1
  config:
    auto.create.topics.enable: "false"
    auto.leader.rebalance.enable: "true"
    background.threads: "10"
    compression.type: producer
    delete.topic.enable: "true"
    log.retention.hours: "168"
    log.roll.hours: "168"
    log.segment.bytes: "1073741824"
    message.max.bytes: "1048588"
    min.insync.replicas: "1"
    num.io.threads: "8"
    num.network.threads: "3"
    num.recovery.threads.per.data.dir: "1"
    num.replica.fetchers: "1"
    unclean.leader.election.enable: "false"
    ## For a single Broker, replication factors and min ISR must not exceed the Broker count
    default.replication.factor: "1"
    offsets.topic.replication.factor: "1"
    transaction.state.log.replication.factor: "1"
    transaction.state.log.min.isr: "1"
  resources:
    limits:
      cpu: 2
      memory: 4Gi
    requests:
      cpu: 2
      memory: 4Gi
  storage:
    size: 1Gi
    # Replace with available storage class
    class: local-path
    deleteClaim: false
  kafka:
    listeners:
      plain: {}
      external:
        type: nodeport
        tls: false
  zookeeper:
    # Currently must match the Kafka broker replica count
    replicas: 1
    resources:
      limits:
        cpu: 1
        memory: 2Gi
      requests:
        cpu: 1
        memory: 2Gi
    # Keep the storage configuration the same as the Kafka broker
    storage:
      size: 1Gi
      # Replace with available storage class
      class: local-path
      deleteClaim: false
EOF
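The single-node instance can be verified the same way as the three-node instance. A minimal round-trip sketch, again assuming a plaintext bootstrap Service named `my-cluster-kafka-bootstrap` on port 9092 and the `apache/kafka` client image (both are assumptions; adjust to your environment):

```shell
# Create a test topic with replication factor 1 (the single-Broker maximum)
kubectl -n default run kafka-client --rm -it --restart=Never \
  --image=apache/kafka:3.8.0 -- \
  /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --create --topic smoke-test --partitions 1 --replication-factor 1
```

Note that `--replication-factor` must be 1 here: with a single Broker, any higher value will be rejected, which is also why the replication-related parameters in the manifest above are set to `"1"`.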