1. Required Files at a Glance

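Three files live in a single project directory: the Compose file, zoo.cfg, and .env (the per-node data directories are created automatically on first start, since the bind mounts use create_host_path). Assuming the Compose file is named compose.yaml, the layout looks roughly like this:

zookeeper-cluster/
├── compose.yaml
├── zoo.cfg
└── .env
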
The Compose file contents are as follows:

name: zookeeper-cluster-demo

networks:
  zookeeper-net:
    name: zookeeper-cluster-net
    driver: bridge

services:
  zookeeper-1:
    image: ${image}
    restart: ${restart}
    hostname: ${host_name}-1
    container_name: ${container_name}-1
    networks:
      - zookeeper-net
    ports:
      - 127.0.0.1:2181:2181
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-1/data:/data
      - ./${container_name}-1/datalog:/datalog
      - ./${container_name}-1/logs:/logs
    environment:
      ZOO_MY_ID: 1

  zookeeper-2:
    image: ${image}
    restart: ${restart}
    hostname: ${host_name}-2
    container_name: ${container_name}-2
    networks:
      - zookeeper-net
    ports:
      - 127.0.0.1:2182:2181
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-2/data:/data
      - ./${container_name}-2/datalog:/datalog
      - ./${container_name}-2/logs:/logs
    environment:
      ZOO_MY_ID: 2

  zookeeper-3:
    image: ${image}
    restart: ${restart}
    hostname: ${host_name}-3
    container_name: ${container_name}-3
    networks:
      - zookeeper-net
    ports:
      - 127.0.0.1:2183:2181
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-3/data:/data
      - ./${container_name}-3/datalog:/datalog
      - ./${container_name}-3/logs:/logs
    environment:
      ZOO_MY_ID: 3

  zookeeper-4:
    image: ${image}
    restart: ${restart}
    hostname: ${host_name}-4
    container_name: ${container_name}-4
    networks:
      - zookeeper-net
    ports:
      - 127.0.0.1:2184:2181
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-4/data:/data
      - ./${container_name}-4/datalog:/datalog
      - ./${container_name}-4/logs:/logs
    environment:
      ZOO_MY_ID: 4

  zookeeper-5:
    image: ${image}
    restart: ${restart}
    hostname: ${host_name}-5
    container_name: ${container_name}-5
    networks:
      - zookeeper-net
    ports:
      - 127.0.0.1:2185:2181
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-5/data:/data
      - ./${container_name}-5/datalog:/datalog
      - ./${container_name}-5/logs:/logs
    environment:
      ZOO_MY_ID: 5

The zoo.cfg file contents are as follows:

dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
autopurge.snapRetainCount=3
autopurge.purgeInterval=0
maxClientCnxns=60
standaloneEnabled=true
admin.enableServer=true
admin.serverPort=8080
server.1=zoo-1:2888:3888;2181
server.2=zoo-2:2888:3888;2181
server.3=zoo-3:2888:3888;2181
server.4=zoo-4:2888:3888;2181
server.5=zoo-5:2888:3888;2181

The .env file contents are as follows:

image=zookeeper:3.8.4-jre-17
host_name=zoo
restart=unless-stopped
container_name=zookeeper

2. Parsing the Compose File

Use the docker compose config command to verify that the Compose file is correct:

$ docker compose config
name: zookeeper-cluster-demo
services:
  zookeeper-1:
    container_name: zookeeper-1
    environment:
      ZOO_MY_ID: "1"
    hostname: zoo-1
    image: zookeeper:3.8.4-jre-17
    networks:
      zookeeper-net: null
    ports:
      - mode: ingress
        host_ip: 127.0.0.1
        target: 2181
        published: "2181"
        protocol: tcp
    restart: unless-stopped
    volumes:
      - type: bind
        source: E:\docker-volumes\zookeeper-cluster\zoo.cfg
        target: /conf/zoo.cfg
        bind:
          create_host_path: true
      - type: bind
        source: E:\docker-volumes\zookeeper-cluster\zookeeper-1\data
        target: /data
        bind:
          create_host_path: true
      - type: bind
        source: E:\docker-volumes\zookeeper-cluster\zookeeper-1\datalog
        target: /datalog
        bind:
          create_host_path: true
      - type: bind
        source: E:\docker-volumes\zookeeper-cluster\zookeeper-1\logs
        target: /logs
        bind:
          create_host_path: true

# ......

networks:
  zookeeper-net:
    name: zookeeper-cluster-net
    driver: bridge

Notes on the rendered output:

  • The services.volumes entries show type bind, i.e. bind mounts, mapping the /conf/zoo.cfg file and the /data, /datalog and /logs directories in the container to paths on the host.

  • The services.ports configuration maps each container's port 2181 to host ports 2181-2185 and binds them to 127.0.0.1. Without that IP, any machine that can reach the host could also reach the containers, which is unsafe. Also, if only other containers need to connect to these containers and the host itself does not, the ports do not need to be published at all; instead, use the networks property to attach the containers to the same network so they can communicate directly. For example, another Compose project can reuse the network defined in the ZooKeeper Compose file, as shown below.

    networks:
      kafka-net:
        name: kafka-cluster-net
        driver: bridge
      zookeeper-net:
        name: zookeeper-cluster-net
        external: true

    services:
      broker-1:
        networks:
          - kafka-net
          - zookeeper-net

2.1 Volume Options

2.1.1 Bind mounts

This is the approach used in the Compose file above: paths on the host (relative to the project directory) are mounted directly into the container.
services:
  zookeeper-1:
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - ./${container_name}-1/data:/data
      - ./${container_name}-1/datalog:/datalog
      - ./${container_name}-1/logs:/logs
2.1.2 Anonymous volumes

The official zookeeper image declares /data, /datalog and /logs as volumes, so when they are not mapped explicitly, Docker creates anonymous volumes for them and only zoo.cfg needs to be mounted:
services:
  zookeeper-1:
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
2.1.3 Named volumes
volumes:
  volume-1-data:
    name: zookeeper-cluster-demo-volume-1-data
  volume-1-datalog:
    name: zookeeper-cluster-demo-volume-1-datalog
  volume-1-logs:
    name: zookeeper-cluster-demo-volume-1-logs

services:
  zookeeper-1:
    volumes:
      - ./zoo.cfg:/conf/zoo.cfg
      - volume-1-data:/data
      - volume-1-datalog:/datalog
      - volume-1-logs:/logs
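
If the named-volume variant is used, the volumes can be listed on the host after startup; for example (the filter value assumes the volume names above):

$ docker volume ls --filter name=zookeeper-cluster-demo-volume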

3. Creating and Starting the Containers

Use the docker compose up command to start the containers:

$ docker compose up -d
[+] Running 6/6
✔ Network zookeeper-cluster-net Created 0.0s
✔ Container zookeeper-cluster-demo-zookeeper-4-1 Started 0.0s
✔ Container zookeeper-cluster-demo-zookeeper-3-1 Started 0.0s
✔ Container zookeeper-cluster-demo-zookeeper-1-1 Started 0.0s
✔ Container zookeeper-cluster-demo-zookeeper-2-1 Started 0.0s
✔ Container zookeeper-cluster-demo-zookeeper-5-1 Started 0.1s

The -d flag runs the containers in the background (detached mode).

Use the docker ps command to view the containers that were just started:

$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fb9a577c7b64 zookeeper:3.8.4-jre-17 "/docker-entrypoint.…" About a minute ago Up About a minute 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2185->2181/tcp zookeeper-cluster-demo-zookeeper-5-1
49a5b27dd23b zookeeper:3.8.4-jre-17 "/docker-entrypoint.…" About a minute ago Up About a minute 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp zookeeper-cluster-demo-zookeeper-3-1
208e8651a00b zookeeper:3.8.4-jre-17 "/docker-entrypoint.…" About a minute ago Up About a minute 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, 8080/tcp zookeeper-cluster-demo-zookeeper-1-1
781c40747ce9 zookeeper:3.8.4-jre-17 "/docker-entrypoint.…" About a minute ago Up About a minute 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp zookeeper-cluster-demo-zookeeper-2-1
049260f67598 zookeeper:3.8.4-jre-17 "/docker-entrypoint.…" About a minute ago Up About a minute 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2184->2181/tcp zookeeper-cluster-demo-zookeeper-4-1
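
Since docker ps lists every container on the host, the project-scoped variant may be more convenient; run from the project directory it shows only this Compose project's containers:

$ docker compose ps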

4. Checking ZooKeeper Status

4.1 Checking status via the ZooKeeper CLI

$ docker exec -it fb9a577c7b64 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

$ docker exec -it 49a5b27dd23b zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

The Mode field in the output shows each node's role; above we can see one Leader node and one Follower node.
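
To check all five nodes at once, a small shell loop over the Compose service names works (a sketch; -T disables pseudo-TTY allocation so the output can be piped):

$ for i in 1 2 3 4 5; do echo "zookeeper-$i:"; docker compose exec -T zookeeper-$i zkServer.sh status | grep Mode; done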

Connecting to the ZooKeeper cluster with the ZooKeeper CLI:

$ docker exec -it fb9a577c7b64 zkCli.sh -server localhost:2181
Connecting to localhost:2181
2024-07-29 15:38:44,597 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC
2024-07-29 15:38:44,597 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:host.name=zoo-5
2024-07-29 15:38:44,597 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.version=17.0.11
2024-07-29 15:38:44,597 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.vendor=Eclipse Adoptium
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.home=/opt/java/openjdk
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.class.path=/apache-zookeeper-3.8.4-bin/bin/../zookeeper-server/target/classes:/apache-zookeeper-3.8.4-bin/bin/../build/classes:/apache-zookeeper-3.8.4-bin/bin/../zookeeper-server/target/lib/*.jar:/apache-zookeeper-3.8.4-bin/bin/../build/lib/*.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/zookeeper-prometheus-metrics-3.8.4.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/zookeeper-jute-3.8.4.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/zookeeper-3.8.4.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/snappy-java-1.1.10.5.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/slf4j-api-1.7.30.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/simpleclient_servlet-0.9.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/simpleclient_hotspot-0.9.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/simpleclient_common-0.9.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/simpleclient-0.9.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-transport-native-unix-common-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-transport-native-epoll-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-transport-classes-epoll-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-transport-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-resolver-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-handler-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-common-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-codec-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/netty-buffer-4.1.105.Final.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/metrics-core-4.1.12.1.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/logback-core-1.2.13.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/logback-classic-1.2.13.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jline-2.14.6.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-util-ajax-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-util-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-servlet-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-server-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-security-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-io-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jetty-http-9.4.53.v20231009.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/javax.servlet-api-3.1.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jackson-databind-2.15.2.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jackson-core-2.15.2.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/jackson-annotations-2.15.2.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/commons-io-2.11.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/commons-cli-1.5.0.jar:/apache-zookeeper-3.8.4-bin/bin/../lib/audience-annotations-0.12.0.jar:/apache-zookeeper-3.8.4-bin/bin/../zookeeper-*.jar:/apache-zookeeper-3.8.4-bin/bin/../zookeeper-server/src/main/resources/lib/*.jar:/conf:
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.io.tmpdir=/tmp
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:java.compiler=<NA>
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.name=Linux
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.arch=aarch64
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.version=6.6.22-linuxkit
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:user.name=root
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:user.home=/root
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:user.dir=/apache-zookeeper-3.8.4-bin
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.memory.free=118MB
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.memory.max=256MB
2024-07-29 15:38:44,598 [myid:] - INFO [main:o.a.z.Environment@98] - Client environment:os.memory.total=128MB
2024-07-29 15:38:44,599 [myid:] - INFO [main:o.a.z.ZooKeeper@637] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7113b13f
2024-07-29 15:38:44,600 [myid:] - INFO [main:o.a.z.c.X509Util@78] - Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation
2024-07-29 15:38:44,602 [myid:] - INFO [main:o.a.z.ClientCnxnSocket@239] - jute.maxbuffer value is 1048575 Bytes
2024-07-29 15:38:44,606 [myid:] - INFO [main:o.a.z.ClientCnxn@1747] - zookeeper.request.timeout value is 0. feature enabled=false
Welcome to ZooKeeper!
2024-07-29 15:38:44,607 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):o.a.z.ClientCnxn$SendThread@1177] - Opening socket connection to server localhost/[0:0:0:0:0:0:0:1]:2181.
2024-07-29 15:38:44,608 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):o.a.z.ClientCnxn$SendThread@1179] - SASL config status: Will not attempt to authenticate using SASL (unknown error)
2024-07-29 15:38:44,610 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):o.a.z.ClientCnxn$SendThread@1013] - Socket connection established, initiating session, client: /[0:0:0:0:0:0:0:1]:37150, server: localhost/[0:0:0:0:0:0:0:1]:2181
JLine support is enabled
2024-07-29 15:38:44,621 [myid:localhost:2181] - INFO [main-SendThread(localhost:2181):o.a.z.ClientCnxn$SendThread@1453] - Session establishment complete on server localhost/[0:0:0:0:0:0:0:1]:2181, session id = 0x5000010d7780000, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

4.2 Checking node status via the AdminServer

ZooKeeper's AdminServer provides an HTTP-based interface, so node status information can be retrieved with curl. The AdminServer listens on port 8080 by default (this can be changed in the configuration). Various information about a node, including its status and configuration, is available at specific URLs.

There are two related settings in the zoo.cfg file:

admin.enableServer=true
admin.serverPort=8080

Executing the command gives the following result:

$ curl http://localhost:8080/commands/stats
{
  "version" : "3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC",
  "read_only" : false,
  "server_stats" : {
    "packets_sent" : 33,
    "packets_received" : 33,
    "fsync_threshold_exceed_count" : 0,
    "client_response_stats" : {
      "last_buffer_size" : 16,
      "min_buffer_size" : 16,
      "max_buffer_size" : 16
    },
    "provider_null" : false,
    "uptime" : 2292719,
    "server_state" : "leader",
    "outstanding_requests" : 0,
    "min_latency" : 0,
    "avg_latency" : 1.44,
    "max_latency" : 7,
    "data_dir_size" : 718,
    "log_dir_size" : 67108880,
    "last_processed_zxid" : 4294967306,
    "num_alive_client_connections" : 0,
    "auth_failed_count" : 0,
    "non_mtlsremote_conn_count" : 0,
    "non_mtlslocal_conn_count" : 0
  },
  "client_response" : {
    "last_buffer_size" : 16,
    "min_buffer_size" : 16,
    "max_buffer_size" : 16
  },
  "proposal_stats" : {
    "last_buffer_size" : 48,
    "min_buffer_size" : 48,
    "max_buffer_size" : 48
  },
  "node_count" : 5,
  "connections" : [ ],
  "secure_connections" : [ ],
  "command" : "stats",
  "error" : null
}

Besides /commands/stats, other endpoints are available, for example:

  • http://localhost:8080/commands/monitor: returns monitoring metrics
  • http://localhost:8080/commands/config: returns the configuration
  • http://localhost:8080/commands/envi: returns environment information
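
Note that the Compose file in this article publishes only the client port 2181 to the host, so the curl commands above are easiest to run inside a container. To query the AdminServer directly from the host, each node's port 8080 could additionally be published; a sketch for node 1 (the host port 8081 is an arbitrary choice):

services:
  zookeeper-1:
    ports:
      - 127.0.0.1:2181:2181
      - 127.0.0.1:8081:8080

With that in place, http://localhost:8081/commands/stats would reach zookeeper-1's AdminServer.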

5. Environment Variables

If no zoo.cfg file is provided via the Compose file, the container starts with ZooKeeper's recommended defaults, which can be overridden with the following environment variables (a minimal example follows the list):

  • ZOO_TICK_TIME
  • ZOO_INIT_LIMIT
  • ZOO_SYNC_LIMIT
  • ZOO_MAX_CLIENT_CNXNS
  • ZOO_STANDALONE_ENABLED
  • ZOO_ADMINSERVER_ENABLED
  • ZOO_AUTOPURGE_PURGEINTERVAL
  • ZOO_AUTOPURGE_SNAPRETAINCOUNT
  • ZOO_4LW_COMMANDS_WHITELIST
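
As a minimal sketch, a node could be configured entirely through these variables instead of mounting zoo.cfg; the values below simply restate this article's zoo.cfg settings, and ZOO_MY_ID plus ZOO_SERVERS (described in section 5.2) are still needed for replicated mode:

services:
  zookeeper-1:
    image: zookeeper:3.8.4-jre-17
    environment:
      ZOO_MY_ID: 1
      ZOO_TICK_TIME: 2000
      ZOO_INIT_LIMIT: 10
      ZOO_SYNC_LIMIT: 5
      ZOO_MAX_CLIENT_CNXNS: 60
      ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
      ZOO_AUTOPURGE_PURGEINTERVAL: 0
      ZOO_STANDALONE_ENABLED: "true"
      ZOO_ADMINSERVER_ENABLED: "true"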

5.1 Advanced Configuration

5.1.1 ZOO_CFG_EXTRA

Not every ZooKeeper configuration option is exposed through the image's Docker environment variables; the variables provided cover only the minimal configuration plus a few frequently changed options.

ZOO_CFG_EXTRA can be used to append additional configuration parameters to ZooKeeper's configuration file.

For example:

$ docker run --name some-zookeeper --restart always -e ZOO_CFG_EXTRA="metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7070" zookeeper
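
The same option can be set in a Compose file; a sketch using the metrics-provider settings from the docker run example above:

services:
  some-zookeeper:
    image: zookeeper
    restart: always
    environment:
      ZOO_CFG_EXTRA: "metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider metricsProvider.httpPort=7070"
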
5.1.2 JVMFLAGS

This variable is used to set JVM properties.

The following configuration selects Netty instead of NIO as the server communication framework:

$ docker run --name some-zookeeper --restart always -e JVMFLAGS="-Dzookeeper.serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory" zookeeper

The following configuration sets the maximum JVM heap size:

$ docker run --name some-zookeeper --restart always -e JVMFLAGS="-Xmx1024m" zookeeper

5.2 Replicated Mode

5.2.1 ZOO_MY_ID

This value must be unique within the cluster and should be between 1 and 255; the IDs do not need to be consecutive.

Equivalently, a file named myid can be created in the data directory, containing only the server ID; this ID must match the one used in the configuration file.

Note: if a myid file already exists under /data in the container after the volume is mounted, this variable has no effect.
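
For example, with the bind mounts used in this article, the myid files could be pre-created on the host before the first start (a sketch; the paths match the ./zookeeper-N/data directories above):

$ for i in 1 2 3 4 5; do mkdir -p ./zookeeper-$i/data && echo $i > ./zookeeper-$i/data/myid; done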

5.2.2 ZOO_SERVERS

This variable specifies the list of machines in the ZooKeeper ensemble.

Each entry has the following format: server.id=<address1>:<port1>:<port2>[:role];[<client port address>:]<client port>

Entries are separated by spaces.

Note: if a zoo.cfg file already exists under /conf in the container after the volume is mounted, this variable has no effect.
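
For the five-node ensemble in this article, the variable would look roughly like this (mirroring the server.N lines in the zoo.cfg above):

environment:
  ZOO_MY_ID: 1
  ZOO_SERVERS: server.1=zoo-1:2888:3888;2181 server.2=zoo-2:2888:3888;2181 server.3=zoo-3:2888:3888;2181 server.4=zoo-4:2888:3888;2181 server.5=zoo-5:2888:3888;2181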

6. Other Configuration

6.1 Logging

The default logging configuration can be overridden by modifying /conf/logback.xml; mount a custom configuration file into the container as a volume.
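
For example, the mount could be added to a service's volumes list (a sketch; the host-side file name is arbitrary):

services:
  zookeeper-1:
    volumes:
      - ./logback.xml:/conf/logback.xml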

6.2 Data Storage

The /data and /datalog mount points hold ZooKeeper's in-memory data snapshots and the database transaction log, respectively.

Pay attention to where the transaction log is placed: a dedicated transaction log device is key to consistently good performance, and putting the log on a busy device will hurt performance.

7. About ZooKeeper Ensembles

To achieve high availability, ZooKeeper runs as an ensemble of nodes. Because the ensemble reaches decisions by majority quorum, it is recommended that it contain an odd number of nodes.

ZooKeeper can serve external requests only while a majority of the nodes in the ensemble are available. That is, a 3-node ensemble tolerates 1 failed node, while a 5-node ensemble tolerates 2 failed nodes.

Suppose you have a 5-node ensemble and need to make changes such as swapping out nodes: you would restart each node in turn. If the ensemble cannot tolerate more than one node being down, such maintenance carries real risk.

That said, an ensemble of more than 7 nodes is not recommended either: because ZooKeeper uses a consensus protocol, too many nodes slows the whole ensemble down.

In addition, if 5 or 7 nodes still cannot handle the load because there are too many client connections, consider adding observer nodes to absorb read-only traffic.
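
An observer does not vote in leader election, so it adds read capacity without enlarging the quorum. As a sketch extending this article's configuration (zoo-6 is a hypothetical sixth node), every node lists the observer with the :observer role, and the observer itself additionally sets peerType=observer in its zoo.cfg:

# added to zoo.cfg on all nodes
server.6=zoo-6:2888:3888:observer;2181

# added to zoo.cfg on the observer node only
peerType=observer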

Related Links

zookeeper - Official Image | Docker Hub

Tags

#Docker #ZooKeeper #ServiceRegistry #Microservices