KeeperErrorCode = ConnectionLoss for /hbase/hbaseid

IMPORTANT! If you are trying to install Apache Atlas and receiving this error, there is a separate article:

Suppose that we are faced with these exceptions. The first:

[ReadOnlyZKClient-localhost:2181@<id>] [WARN] ReadOnlyZKClient$ZKTask$1:183 - <id> to localhost:2181 failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 1
[ReadOnlyZKClient-localhost:2181@<id>] [WARN] ReadOnlyZKClient$ZKTask$1:183 - <id> to localhost:2181 failed for get of /hbase/meta-region-server, code = CONNECTIONLOSS, retries = 1

The second:

Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
        at org.apache.zookeeper.KeeperException.create(
        at org.apache.zookeeper.KeeperException.create(
        at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(

The third:

[WARN] ConnectionImplementation:529 - Retrieve cluster id failed
java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
        at java.util.concurrent.CompletableFuture.reportGet(
        at java.util.concurrent.CompletableFuture.get(
        at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(
        at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
        at java.lang.reflect.Constructor.newInstance(
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(
        at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(

The hbase-client cannot connect to ZooKeeper. You need to pay attention to the address:

... to localhost:2181 failed for get of ...

If there is a real ZooKeeper instance at this address, then you need to check its state. But if there is no ZooKeeper instance at the given address, then the problem is an under-configured hbase-site.xml. If you have a Cloudera distribution, it is located here:


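Before editing any configs, it is worth probing the address from the failing log line to see whether anything is listening there at all. A minimal sketch in bash (check_zk is a helper name I made up; /dev/tcp is a bash feature, so run this with bash):

```shell
# check_zk HOST PORT: report whether anything is listening at the given address.
# This only tests TCP reachability; a healthy ZooKeeper can be probed further
# with zookeeper-client once the port is known to be open.
check_zk() {
  host=$1; port=$2
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "something is listening on $host:$port"
  else
    echo "nothing is listening on $host:$port"
  fi
}

# The address the hbase-client complained about:
check_zk localhost 2181
```

If nothing is listening, the client is simply pointed at the wrong address; if the port is open but the errors persist, inspect the ZooKeeper server itself.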
Check the following properties:

  • hbase.zookeeper.quorum
  • hbase.cluster.distributed
  • hbase.rootdir
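
For reference, a minimal hbase-site.xml fragment covering these properties might look like this (the hostnames and the HDFS path are illustrative, not taken from a real cluster):

```xml
<configuration>
  <!-- Comma-separated list of ZooKeeper hosts the client should contact -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <!-- true for a distributed cluster, false for standalone mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Where HBase stores its data, e.g. a directory in HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode.example.com:8020/hbase</value>
  </property>
</configuration>
```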

In my case, the hbase.zookeeper.quorum property was not populated. Therefore, the hbase-client tried to connect to ZooKeeper using the default host (localhost) and the default port (2181).
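
To see which quorum your client will actually pick up, you can pull the value straight out of the config file. A rough sketch (the default path is an assumption; point HBASE_SITE at wherever your distribution keeps the file):

```shell
# Extract hbase.zookeeper.quorum from hbase-site.xml.
# The XML is assumed to have <name> and <value> on separate lines, as in the
# usual hand-written configs; for odd formatting use a real XML tool instead.
HBASE_SITE=${HBASE_SITE:-/etc/hbase/conf/hbase-site.xml}

if [ -f "$HBASE_SITE" ]; then
  grep -A1 'hbase\.zookeeper\.quorum' "$HBASE_SITE" \
    | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
else
  echo "no hbase-site.xml found at $HBASE_SITE"
fi
```

An empty result means the property is missing, so the client will silently fall back to localhost:2181.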

If the file is empty or does not exist at all, refer to the official documentation:


If you still have any questions, feel free to ask me in the comments under this article or write me at

If I saved your day, you can support me 🤝

2 thoughts on “KeeperErrorCode = ConnectionLoss for /hbase/hbaseid”

  1. Hi Mark,

    I am trying to build Apache Atlas and have run into the same problem. Would you please give me more details on how your problem was resolved?
    P.S. The ZooKeeper instance was built and run using Docker.

    Thanks in advance,

    1. Hi Mostafa!
      1. I suggest you check whether the ZooKeeper instance is really running on localhost, port 2181.
      2. If so, try to connect to ZooKeeper using the zookeeper-client command in the console (like zookeeper-client -server localhost:2181), or via scripts, and try to perform an operation like “ls /”.
      3. If ZooKeeper is working fine and prints the output for the “ls /” command, check the hbase-site.xml config, especially the hbase.zookeeper.quorum property. If you don’t know where this file is, you can use the locate hbase-site.xml command on Linux or the great search tool named “Everything” on Windows.
      4. Check the property in the Apache Atlas configuration file named
      5. Check out some information about the HBASE_MANAGES_ZK property.

Leave a Reply

Your email address will not be published. Required fields are marked *