Partition Operations

1. Partition Leader Balancing
When a topic is created, its partitions and replicas are distributed evenly across the nodes of the Kafka cluster, so the preferred replicas are also spread evenly over the cluster. Right after creation, the preferred replica of each partition normally serves as that partition's Leader, and the Leader handles all reads and writes. As time goes on, however, a Leader node may fail, and a new Leader is then elected from the Follower replicas. This can leave the cluster's load unbalanced, hurting the robustness and stability of the whole cluster; moreover, when the original Leader node recovers and rejoins the cluster, it does not automatically become the Leader replica again. Kafka provides two ways to re-elect the preferred replica as the partition Leader and restore a balanced load.
(1) Automatic balancing: set auto.leader.rebalance.enable=true when starting the broker (it defaults to true). With this enabled, the controller starts a scheduled task during failover that triggers a partition balance check every ${leader.imbalance.check.interval.seconds} seconds (300 seconds, i.e. 5 minutes, by default); a rebalance is actually executed only when a broker's imbalance ratio reaches ${leader.imbalance.per.broker.percentage} (default 10, i.e. 10%) or more. If the setting is false, a node that was the preferred replica of a partition before failing, i.e. was the Leader replica, comes back as a mere Follower after it recovers.
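For reference, a minimal sketch of the relevant broker-side settings in server.properties, shown here with their default values:

auto.leader.rebalance.enable=true
# how often the controller checks for leader imbalance (seconds)
leader.imbalance.check.interval.seconds=300
# per-broker imbalance ratio (percent) that triggers a rebalance
leader.imbalance.per.broker.percentage=10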
(2) Manual balancing: Kafka ships a script, kafka-preferred-replica-election.sh, that rebalances partition Leaders by electing the preferred replicas as Leaders, bringing the cluster's partitions back into balance.
The first method is triggered by Kafka automatically but with some delay; the second must be run by hand, and in return offers finer-grained control: a JSON string can specify exactly which partitions to balance. If no partitions are specified, the script attempts to elect the preferred replica as Leader for every partition.
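For instance, running the script without --path-to-json-file attempts a preferred-replica election across all partitions (a sketch; substitute your own ZooKeeper connection string):

./kafka-preferred-replica-election.sh --zookeeper node1:2181,node2:2181,node3:2181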
For example, view the current partition replica distribution of the topic "kafka-action":
./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic kafka-action
After the node with brokerId 3 is shut down, the partition and replica distribution looks like this:
        Topic: kafka-action     Partition: 0    Leader: 1       Replicas: 1,2   Isr: 1,2
        Topic: kafka-action     Partition: 1    Leader: 2       Replicas: 2,3   Isr: 2
        Topic: kafka-action     Partition: 2    Leader: 1       Replicas: 3,1   Isr: 1
        Topic: kafka-action     Partition: 3    Leader: 1       Replicas: 3,1   Isr: 1
        Topic: kafka-action     Partition: 4    Leader: 1       Replicas: 1,3   Isr: 1
        Topic: kafka-action     Partition: 5    Leader: 1       Replicas: 3,1   Isr: 1
        Topic: kafka-action     Partition: 6    Leader: 1       Replicas: 1,3   Isr: 1
        Topic: kafka-action     Partition: 7    Leader: 1       Replicas: 3,1   Isr: 1
        Topic: kafka-action     Partition: 8    Leader: 1       Replicas: 1,3   Isr: 1
        Topic: kafka-action     Partition: 9    Leader: 1       Replicas: 3,1   Isr: 1
After the node with brokerId 3 is restarted, the distribution becomes:
Topic:kafka-action      PartitionCount:10       ReplicationFactor:2     Configs:
        Topic: kafka-action     Partition: 0    Leader: 1       Replicas: 1,2   Isr: 1,2
        Topic: kafka-action     Partition: 1    Leader: 2       Replicas: 2,3   Isr: 2,3
        Topic: kafka-action     Partition: 2    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 3    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 4    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 5    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 6    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 7    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 8    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 9    Leader: 1       Replicas: 3,1   Isr: 1,3
The distributions above show that after the brokerId 3 node went down, leadership of the partitions whose preferred replica is broker 3 (partitions 2, 3, 5, 7 and 9) moved to the other node in their AR list, brokerId 1, which increases that node's load. When the brokerId 3 node is restarted some time later, because automatic partition balancing has been disabled, a manual balance operation is required before broker 3 can be elected Leader of those partitions again. The detailed steps are as follows:
First, create a JSON file:
{
        "partitions": [{
                "topic": "kafka-action",
                "partition": 3
        }]
}
This file tells the tool to balance partition 3 of the topic "kafka-action". Run the following command to perform the partition Leader balancing:
[root@kafka1 bin]# ./kafka-preferred-replica-election.sh --zookeeper node1:2181,node2:2181,node3:2181 --path-to-json-file ./replica.json
Created preferred replica election path with kafka-action-3
Successfully started preferred replica election for partitions Set(kafka-action-3)
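While an election is in flight, the request is recorded in ZooKeeper under /admin/preferred_replica_election (the znode is removed once the election completes), so a quick check from the ZooKeeper client might look like:

[zk: node2:2181(CONNECTED) 0] get /admin/preferred_replica_election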
[root@kafka1 bin]# ./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic kafka-action
Topic:kafka-action      PartitionCount:10       ReplicationFactor:2     Configs:
        Topic: kafka-action     Partition: 0    Leader: 1       Replicas: 1,2   Isr: 1,2
        Topic: kafka-action     Partition: 1    Leader: 2       Replicas: 2,3   Isr: 2,3
        Topic: kafka-action     Partition: 2    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 3    Leader: 3       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 4    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 5    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 6    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 7    Leader: 1       Replicas: 3,1   Isr: 1,3
        Topic: kafka-action     Partition: 8    Leader: 1       Replicas: 1,3   Isr: 1,3
        Topic: kafka-action     Partition: 9    Leader: 1       Replicas: 3,1   Isr: 1,3
The distribution shows that after the brokerId 3 node was restarted, the manual partition balance made the preferred replica of partition 3 that partition's Leader replica again.
2. Partition Reassignment
Before decommissioning a node, the partition replicas on it need to be moved to other available nodes. Kafka does not migrate partition replicas automatically, so skipping the manual reassignment can leave some topics with lost or unavailable data. Likewise, when nodes are added, only newly created topics are assigned to them: partitions of pre-existing topics are not redistributed to the new nodes automatically, because the new nodes were not in those topics' AR lists when the topics were created. To solve both problems, the replicas have to be redistributed sensibly.
This subsection walks through partition replica reassignment for two scenarios, decommissioning a node and expanding the cluster, and describes the basic steps in detail.
(1) Reassignment when decommissioning a node
First, create a topic:
[root@kafka1 bin]# ./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --create --topic reassign-partitions --partitions 3 --replication-factor 1
The topic's partition replicas are distributed as follows:
[root@kafka1 bin]# ./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic reassign-partitions
Topic:reassign-partitions       PartitionCount:3        ReplicationFactor:1     Configs:
        Topic: reassign-partitions      Partition: 0    Leader: 3       Replicas: 3     Isr: 3
        Topic: reassign-partitions      Partition: 1    Leader: 1       Replicas: 1     Isr: 1
        Topic: reassign-partitions      Partition: 2    Leader: 2       Replicas: 2     Isr: 2
Now suppose the node with brokerId 2 is to be decommissioned. Before taking it offline, we use Kafka's kafka-reassign-partitions.sh script to move its partitions to other nodes, following these steps:
1) Generate a reassignment plan. First create a file that lists, as a JSON string, the topics whose partitions should be reassigned; for example, a file named topics-to-move.json with the contents shown below. To reassign the partitions of several topics at once, specify multiple "topic" entries in the JSON (see the sketch after this example); version is a fixed value.
{
        "topics": [{"topic": "reassign-partitions"}],
        "version": 1
}
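A multi-topic file simply lists more entries in the topics array (a sketch with hypothetical topic names):

{
        "topics": [{"topic": "topic-a"}, {"topic": "topic-b"}],
        "version": 1
}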
Then run the following command to generate a reassignment plan:
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --topics-to-move-json-file ./topics-to-move.json --broker-list "1,3" --generate
The command's parameters are as follows:
zookeeper: specifies the ZooKeeper address, from which the topic metadata is fetched.
topics-to-move-json-file: specifies the path of the topic configuration file for the reassignment, i.e. the JSON file naming the topics to be reassigned.
broker-list: specifies the list of brokerIds that partitions may be moved to. In this example the brokerId 2 node is being decommissioned and its partitions must move to brokers 1 and 3, so the list given here is "1,3".
generate: tells the command to generate a reference reassignment plan.
Under the hood, the command reads the topic metadata and the specified live brokers from ZooKeeper, then recomputes a replica assignment for the topic's partitions using the replica assignment algorithm.
The generate command prints the following to the console:
Current partition replica assignment
{
    "version": 1,
    "partitions": [{
        "topic": "reassign-partitions",
        "partition": 2,
        "replicas": [2],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 1,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 0,
        "replicas": [3],
        "log_dirs": ["any"]
    }]
}
Proposed partition reassignment configuration
{
    "version": 1,
    "partitions": [{
        "topic": "reassign-partitions",
        "partition": 0,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 2,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 1,
        "replicas": [3],
        "log_dirs": ["any"]
    }]
}
The output has two parts: the current assignment, and the proposed assignment computed from the given broker list. Kafka's proposal spreads the three partitions across the two nodes brokerId 1 and 3. Copy the proposed reassignment into a file named partitions-reassignment.json; the new plan is shown below.
{
    "version": 1,
    "partitions": [{
        "topic": "reassign-partitions",
        "partition": 0,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 2,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 1,
        "replicas": [3],
        "log_dirs": ["any"]
    }]
}
2) Execute the reassignment. With the plan generated in step 1, run the following command to migrate the topic's partitions:
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --execute
Current partition replica assignment
{"version":1,"partitions":[{"topic":"reassign-partitions","partition":2,"replicas":[2],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":1,"replicas":[1],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":0,"replicas":[3],"log_dirs":["any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
The command's parameters are as follows:
zookeeper: specifies the ZooKeeper address; the command writes the new assignment to the corresponding ZooKeeper node.
reassignment-json-file: specifies the path of the reassignment plan file, a JSON document mapping each partition to its brokerId list.
execute: tells the command to carry out the reassignment.
A reassignment works by creating the partition directories on the target node, copying the partition data over from the source, and finally deleting the data on the source node, so make sure the target node has enough disk space before migrating.
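A quick way to gauge the space involved before executing, assuming the brokers store their logs under /tmp/kafka-logs (substitute your own log.dirs):

# size of the partition directories being moved, on the source broker
du -sh /tmp/kafka-logs/reassign-partitions-*
# free space on the target broker's log volume
df -h /tmp/kafka-logs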
3) Check the reassignment progress with the following command:
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --verify
Output:
Status of partition reassignment:
Reassignment of partition reassign-partitions-0 completed successfully
Reassignment of partition reassign-partitions-2 completed successfully
Reassignment of partition reassign-partitions-1 completed successfully
The progress report shows that all three partitions have finished migrating; a partition still being moved is reported as in progress. Once a reassignment has started it cannot be stopped, and the cluster must not be force-stopped, or the data will be left inconsistent with unpredictable consequences. This is one reason a sensible log retention time matters: it keeps the amount of data that has to be copied during a migration relatively small.
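For example, a topic's retention could be tightened ahead of a planned migration with a dynamic config (a sketch; 86400000 ms is one day):

[root@kafka1 bin]# ./kafka-configs.sh --zookeeper node1:2181,node2:2181,node3:2181 --entity-type topics --entity-name reassign-partitions --alter --add-config retention.ms=86400000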
4) View the new assignment. Describing the topic again shows:
[root@kafka1 bin]# ./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic reassign-partitions
Topic:reassign-partitions       PartitionCount:3        ReplicationFactor:1     Configs:
        Topic: reassign-partitions      Partition: 0    Leader: 1       Replicas: 1     Isr: 1
        Topic: reassign-partitions      Partition: 1    Leader: 3       Replicas: 3     Isr: 3
        Topic: reassign-partitions      Partition: 2    Leader: 1       Replicas: 1     Isr: 1
The distribution shows the migration completed according to the reassignment plan. Note that the execute step above placed no limit on the replication traffic; when the data volume is large, throttling the replication traffic reduces the impact the migration has on other operations and helps keep the cluster stable.
(2) Data migration for cluster expansion
The previous subsection walked through the basic reassignment steps by decommissioning the brokerId 2 node. In this subsection we simulate a cluster expansion by bringing that node back into the cluster, and along the way introduce the operations for throttling replication. First, run the generate command again; the resulting reassignment plan is shown below.
Current partition replica assignment
{
    "version": 1,
    "partitions": [{
        "topic": "reassign-partitions",
        "partition": 2,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 1,
        "replicas": [3],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 0,
        "replicas": [1],
        "log_dirs": ["any"]
    }]
}
Proposed partition reassignment configuration
{
    "version": 1,
    "partitions": [{
        "topic": "reassign-partitions",
        "partition": 0,
        "replicas": [2],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 2,
        "replicas": [1],
        "log_dirs": ["any"]
    }, {
        "topic": "reassign-partitions",
        "partition": 1,
        "replicas": [3],
        "log_dirs": ["any"]
    }]
}
There are two ways to throttle replication during a reassignment: dynamically modifying the configuration, or using the throttle parameter supported by kafka-reassign-partitions.sh.
1) Throttling via dynamic configuration
Throttling via dynamic configuration produces no log message saying the traffic is being limited, so to demonstrate that it works we first grow the partitions' data volume and then compare the reassignment progress with the unthrottled case. With the replication rate capped at a small value the migration slows down, and for a while the verify command reports partitions as in progress.
Here we introduce another script Kafka ships for generating test data, kafka-verifiable-producer.sh, which sends monotonically increasing integer messages to a given topic. The following command produces 100,000 messages:
[root@kafka1 bin]# ./kafka-verifiable-producer.sh --broker-list kafka1:9092,kafka2:9092,kafka3:9092 --topic reassign-partitions --max-messages 100000
The max-messages parameter sets the total number of messages to send. Next, configure the replication throttle as follows. First set the list of replicas whose replication is to be throttled (taken from the reassignment plan, in partitionId:brokerId form):
[root@kafka1 bin]# ./kafka-configs.sh --zookeeper node1:2181,node2:2181,node3:2181 --entity-type topics --entity-name reassign-partitions --alter --add-config leader.replication.throttled.replicas=[0:2,1:3,2:1],follower.replication.throttled.replicas=[0:2,1:3,2:1]
Completed Updating config for entity: topic 'reassign-partitions'.
Then cap the replication rate on the brokerId 2 node, here 100 B/s on the follower side and 1024 B/s on the leader side:
[root@kafka1 bin]# ./kafka-configs.sh --zookeeper node1:2181,node2:2181,node3:2181 --entity-type brokers --entity-name 2 --alter --add-config follower.replication.throttled.rate=100,leader.replication.throttled.rate=1024
Completed Updating config for entity: brokers '2'.
Inspect the node's configuration through the ZooKeeper client:
[zk: node2:2181(CONNECTED) 1] get /config/brokers/2
{"version":1,"config":{"leader.replication.throttled.rate":"1024","follower.replication.throttled.rate":"100"}}
cZxid = 0xb000000a8
ctime = Sat Mar 02 03:14:50 EST 2019
mZxid = 0xb000000a8
mtime = Sat Mar 02 03:14:50 EST 2019
pZxid = 0xb000000a8
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 111
numChildren = 0
Execute the reassignment:
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --execute
Current partition replica assignment
{"version":1,"partitions":[{"topic":"reassign-partitions","partition":2,"replicas":[1],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":1,"replicas":[3],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":0,"replicas":[1],"log_dirs":["any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition reassign-partitions-0 completed successfully
Reassignment of partition reassign-partitions-2 completed successfully
Reassignment of partition reassign-partitions-1 completed successfully
Throttle was removed.
The progress report shows that every partition has finished migrating and the throttle configuration has been removed. Checking the dynamic configuration of the topic and the broker through the ZooKeeper client at this point confirms that the corresponding settings have been deleted.
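The znodes to check would be, for example:

[zk: node2:2181(CONNECTED) 2] get /config/topics/reassign-partitions
[zk: node2:2181(CONNECTED) 3] get /config/brokers/2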
2) Throttling via the throttle parameter. The reassignment script provides a throttle parameter for setting the limit; for example, to cap the replication rate during migration at 1 KB/s (1024 B/s):
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --execute --throttle 1024
Current partition replica assignment
{"version":1,"partitions":[{"topic":"reassign-partitions","partition":2,"replicas":[1],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":1,"replicas":[3],"log_dirs":["any"]},{"topic":"reassign-partitions","partition":0,"replicas":[2],"log_dirs":["any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Warning: You must run Verify periodically, until the reassignment completes, to ensure the throttle is removed. You can also alter the throttle by rerunning the Execute command passing a new value.
The inter-broker throttle limit was set to 1024 B/s
The warning says that the verify command must be run periodically until the reassignment completes, to ensure the throttle is removed; if the migration is too slow, the throttle can be changed by re-running the execute command with a new value. The last line shows the limit currently in effect.
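Raising the limit mid-flight just means re-running execute with a larger value, e.g. 100 KB/s:

[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --execute --throttle 102400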
[root@kafka1 bin]# ./kafka-reassign-partitions.sh --zookeeper node1:2181,node2:2181,node3:2181 --reassignment-json-file ./partitions-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition reassign-partitions-0 completed successfully
Reassignment of partition reassign-partitions-2 completed successfully
Reassignment of partition reassign-partitions-1 completed successfully
Throttle was removed.
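Should a throttle ever be left behind (for instance because verify was never run to completion), it could be cleared by hand with kafka-configs.sh (a sketch for broker 2):

[root@kafka1 bin]# ./kafka-configs.sh --zookeeper node1:2181,node2:2181,node3:2181 --entity-type brokers --entity-name 2 --alter --delete-config leader.replication.throttled.rate,follower.replication.throttled.rate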
3. Increasing Partitions
The current version of Kafka does not support reducing the number of partitions; a topic's partition count can only be increased. Kafka's kafka-topics.sh script makes it easy to change a topic's partition count. To demonstrate changing partitions and replicas, create a topic named "partition-replica-foo" with 3 partitions and a replication factor of 1:
[root@kafka1 bin]# ./kafka-topics.sh --create --zookeeper node1:2181,node2:2181,node3:2181 --replication-factor 1 --partitions 3 --topic partition-replica-foo
Created topic "partition-replica-foo".
Log in to the ZooKeeper client and view the topic's partition metadata:
[zk: node2:2181(CONNECTED) 3] get /brokers/topics/partition-replica-foo
{"version":1,"partitions":{"2":[1],"1":[3],"0":[2]}}
cZxid = 0xb000000ef
ctime = Sat Mar 02 04:24:49 EST 2019
mZxid = 0xb000000ef
mtime = Sat Mar 02 04:24:49 EST 2019
pZxid = 0xb000000f1
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 52
numChildren = 1
The topic currently has 3 partitions. Now increase the count to 6 with the following command:
[root@kafka1 bin]# ./kafka-topics.sh --alter --zookeeper node1:2181,node2:2181,node3:2181 --partitions 6 --topic partition-replica-foo
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
View the topic's metadata again:
[zk: node2:2181(CONNECTED) 7] get /brokers/topics/partition-replica-foo
{"version":1,"partitions":{"4":[3],"5":[1],"1":[3],"0":[2],"2":[1],"3":[2]}}
cZxid = 0xb000000ef
ctime = Sat Mar 02 04:24:49 EST 2019
mZxid = 0xb000000f9
mtime = Sat Mar 02 04:31:17 EST 2019
pZxid = 0xb000000f1
cversion = 1
dataVersion = 1
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 76
numChildren = 1
The partition information shows the topic has been successfully expanded to 6 partitions.
4. Increasing Replicas
The topic created in the previous subsection has 6 partitions and 1 replica; this subsection shows how to raise its replica count to 2.
First check the topic's replica distribution with the following command:
[root@kafka1 bin]# ./kafka-topics.sh --zookeeper node1:2181,node2:2181,node3:2181 --describe --topic partition-replica-foo
The topic's partition replicas are distributed as follows:
Topic:partition-replica-foo     PartitionCount:6        ReplicationFactor:1     Configs:
        Topic: partition-replica-foo    Partition: 0    Leader: 2       Replicas: 2     Isr: 2
        Topic: partition-replica-foo    Partition: 1    Leader: 3       Replicas: 3     Isr: 3
        Topic: partition-replica-foo    Partition: 2    Leader: 1       Replicas: 1     Isr: 1
        Topic: partition-replica-foo    Partition: 3    Leader: 2       Replicas: 2     Isr: 2
        Topic: partition-replica-foo    Partition: 4    Leader: 3       Replicas: 3     Isr: 3
        Topic: partition-replica-foo    Partition: 5    Leader: 1       Replicas: 1     Isr: 1
Denote the 3 nodes B1 to B3 and the 6 partitions P0 to P5. From the current assignment, the topic's first partition P0 sits on B2, i.e. position 1 in the brokerId array. By the replica assignment algorithm, since the starting shift is 0 and firstReplicaIndex + shift = 1, firstReplicaIndex is 1. After raising the replica count to 2, applying the replica assignment algorithm yields the new distribution shown in the table below. Here the algorithm is used to determine the new layout; you may compute the assignment this way or any other way, as long as the replicas end up spread evenly across all nodes:
Round         | B1 | B2 | B3 | SHIFT | FIRSTREPLICAINDEX+SHIFT
First round   |    | P0 | P1 |   0   |   1
              | P2 | P3 | P4 |   1   |   2
              | P5 |    |    |       |
Second round  |    |    | P0 |   0   |   2
              | P1 | P2 | P3 |   1   |   3
              | P4 | P5 |    |       |
Create a JSON file whose content is the replica list for each partition of the topic; here the file is named replica-extends.json.
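Based on the table above, the file's contents would look roughly like the sketch below: each partition keeps its current replica first and adds the second replica derived above (verify the lists against your own cluster before executing):

{
        "version": 1,
        "partitions": [
                {"topic": "partition-replica-foo", "partition": 0, "replicas": [2, 3]},
                {"topic": "partition-replica-foo", "partition": 1, "replicas": [3, 1]},
                {"topic": "partition-replica-foo", "partition": 2, "replicas": [1, 2]},
                {"topic": "partition-replica-foo", "partition": 3, "replicas": [2, 3]},
                {"topic": "partition-replica-foo", "partition": 4, "replicas": [3, 1]},
                {"topic": "partition-replica-foo", "partition": 5, "replicas": [1, 2]}
        ]
}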