Replication factor: 1 larger than available brokers: 0 – Create Kafka Topic
This tutorial shows how to resolve the error "Replication factor: 1 larger than available brokers: 0" when creating a Kafka topic.
Error – Replication factor: 1 larger than available brokers: 0
I was trying to create a Kafka topic after running the Kafka server in daemon mode. First, I started the server in daemon mode with the following command.
$ /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
Then I tried to create a Kafka topic, which failed with the error "Replication factor: 1 larger than available brokers: 0" as shown below.
$ /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Error while executing topic command : Replication factor: 1 larger than available brokers: 0.
[2021-01-18 12:29:44,869] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: Replication factor: 1 larger than available brokers: 0.
 (kafka.admin.TopicCommand$)
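The error means the topic command found no live brokers registered in ZooKeeper, so even a replication factor of 1 cannot be satisfied. One quick way to confirm this is to list the registered broker ids (a sketch, assuming the /usr/local/kafka layout and ZooKeeper on localhost:2181 used throughout this tutorial):

```shell
# List broker ids registered in ZooKeeper; an empty list [] means no
# broker is up, so creating a topic with --replication-factor 1 fails.
ZK_SHELL=/usr/local/kafka/bin/zookeeper-shell.sh
if [ -x "$ZK_SHELL" ]; then
  "$ZK_SHELL" localhost:2181 ls /brokers/ids
else
  echo "zookeeper-shell.sh not found at $ZK_SHELL"
fi
```

A healthy single-broker setup prints something like `[0]` under /brokers/ids; an empty list means the broker never registered.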
The following section shows how I troubleshot and fixed this error.
Troubleshooting – Replication factor larger than available brokers
I found that I had installed and started the Kafka server without installing ZooKeeper first, which caused the error above.
So I installed ZooKeeper and tried to create the topic again, but it failed with the same error.
To investigate further, I ran the Kafka server in the foreground, i.e. without the -daemon option:

$ /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties

This time the logs revealed that the server had failed to start with an AccessDeniedException and was shutting itself down.
[2021-01-18 12:46:33,692] ERROR Disk error while locking directory /tmp/kafka-logs (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: /tmp/kafka-logs/.lock
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
	at java.base/sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:182)
	at java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)
	at java.base/java.nio.channels.FileChannel.open(FileChannel.java:345)
	at kafka.utils.FileLock.<init>(FileLock.scala:31)
	at kafka.log.LogManager.$anonfun$lockLogDirs$1(LogManager.scala:235)
	at scala.collection.StrictOptimizedIterableOps.flatMap(StrictOptimizedIterableOps.scala:117)
	at scala.collection.StrictOptimizedIterableOps.flatMap$(StrictOptimizedIterableOps.scala:104)
	at scala.collection.mutable.ArraySeq.flatMap(ArraySeq.scala:38)
	at kafka.log.LogManager.lockLogDirs(LogManager.scala:233)
	at kafka.log.LogManager.<init>(LogManager.scala:105)
	at kafka.log.LogManager$.apply(LogManager.scala:1212)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:290)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)
[2021-01-18 12:46:33,700] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.AccessDeniedException: /tmp/kafka-logs/recovery-point-offset-checkpoint
	at java.base/sun.nio.fs.UnixException.translateToIOException(UnixException.java:90)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:111)
	at java.base/sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:116)
	at java.base/sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:219)
	at java.base/java.nio.file.Files.newByteChannel(Files.java:370)
	at java.base/java.nio.file.Files.createFile(Files.java:647)
	at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:87)
	at kafka.server.checkpoints.OffsetCheckpointFile.<init>(OffsetCheckpointFile.scala:65)
	at kafka.log.LogManager.$anonfun$recoveryPointCheckpoints$1(LogManager.scala:107)
	at scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:99)
	at scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:86)
	at scala.collection.mutable.ArraySeq.map(ArraySeq.scala:38)
	at kafka.log.LogManager.<init>(LogManager.scala:106)
	at kafka.log.LogManager$.apply(LogManager.scala:1212)
	at kafka.server.KafkaServer.startup(KafkaServer.scala:290)
	at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
	at kafka.Kafka$.main(Kafka.scala:82)
	at kafka.Kafka.main(Kafka.scala)
[2021-01-18 12:46:33,702] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
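Both traces point at the same root cause: /tmp/kafka-logs is not writable by the user running the broker. A minimal pre-flight check can catch this before starting the server (`check_log_dir` is a hypothetical helper name, not a Kafka tool; the default path comes from `log.dirs` in server.properties):

```shell
# Pre-flight check: can the current user write to Kafka's log directory?
# Default log.dirs is /tmp/kafka-logs; adjust to match server.properties.
check_log_dir() {
  dir="$1"
  if [ -e "$dir" ] && [ ! -w "$dir" ]; then
    echo "not writable: $dir"
    return 1
  fi
  echo "writable (or absent, Kafka will create it): $dir"
}

check_log_dir /tmp/kafka-logs || true
```

If the directory exists but is not writable, an alternative to running the broker as root is to hand the directory back to your user, e.g. `sudo chown -R "$USER" /tmp/kafka-logs`.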
To fix this, I ran the same command as the sudo user, as shown below.
$ sudo /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties
Finally, the logs showed that Kafka Server started without any issues.
[2021-01-18 12:46:56,917] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:56,920] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:56,922] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:56,925] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:56,939] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2021-01-18 12:46:56,940] INFO [broker-0-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
[2021-01-18 12:46:57,017] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2021-01-18 12:46:57,052] INFO Stat of the created znode at /brokers/ids/0 is: 41,41,1610974017037,1610974017037,1,0,0,72057814455681026,254,0,41 (kafka.zk.KafkaZkClient)
[2021-01-18 12:46:57,054] INFO Registered broker 0 at path /brokers/ids/0 with addresses: PLAINTEXT://cs-496675541762-default-boost-mm6vg:9092, czxid (broker epoch): 41 (kafka.zk.KafkaZkClient)
[2021-01-18 12:46:57,156] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:57,164] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:57,166] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
[2021-01-18 12:46:57,176] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:57,199] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2021-01-18 12:46:57,200] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2021-01-18 12:46:57,203] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
[2021-01-18 12:46:57,227] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2021-01-18 12:46:57,247] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
[2021-01-18 12:46:57,252] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-01-18 12:46:57,255] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2021-01-18 12:46:57,263] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2021-01-18 12:46:57,302] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2021-01-18 12:46:57,338] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2021-01-18 12:46:57,351] INFO [SocketServer brokerId=0] Starting socket server acceptors and processors (kafka.network.SocketServer)
[2021-01-18 12:46:57,368] INFO [SocketServer brokerId=0] Started data-plane acceptor and processor(s) for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
[2021-01-18 12:46:57,369] INFO [SocketServer brokerId=0] Started socket server acceptors and processors (kafka.network.SocketServer)
[2021-01-18 12:46:57,379] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-18 12:46:57,379] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-18 12:46:57,379] INFO Kafka startTimeMs: 1610974017369 (org.apache.kafka.common.utils.AppInfoParser)
[2021-01-18 12:46:57,381] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2021-01-18 12:46:57,465] INFO [broker-0-to-controller-send-thread]: Recorded new controller, from now on will use broker 0 (kafka.server.BrokerToControllerRequestThread)
Then I ran the same command in daemon mode as the sudo user.
$ sudo /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
Finally, I was able to create the Kafka topic successfully.
$ /usr/local/kafka/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic test.
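To double-check, the new topic can be described with the same tool (the --zookeeper flag matches the Kafka 2.7-era commands in this tutorial; on Kafka 3.x and later, use --bootstrap-server localhost:9092 instead):

```shell
# Describe the freshly created topic; guarded so this sketch is safe
# to run even on machines without the /usr/local/kafka layout.
KAFKA_TOPICS=/usr/local/kafka/bin/kafka-topics.sh
if [ -x "$KAFKA_TOPICS" ]; then
  "$KAFKA_TOPICS" --describe --zookeeper localhost:2181 --topic test
else
  echo "kafka-topics.sh not found at $KAFKA_TOPICS"
fi
```

The describe output shows the partition count, replication factor, and which broker leads each partition.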
That’s it. This is how I troubleshot and fixed the error “Replication factor larger than available brokers” that I faced while creating a Kafka topic in my environment.
Hope it helped!