Raft: Log Replication

1. Overview of log replication

During leader election, the cluster eventually elects one Leader node, and the remaining nodes become Followers. Besides sending heartbeats to the Followers, the Leader handles client requests: it replicates each client update to all Followers in the cluster as AppendEntries messages. Once a Follower has recorded the entries it received, it sends a response back to the Leader. When the Leader has received responses from more than half of the nodes in the cluster, it answers the client request. Finally, the Leader commits the client's update; the new commit index is carried on subsequent AppendEntries messages, which tells the Followers that the operation has been committed, and both the Leader and the Followers can then apply the operation to their own state machines.

Reference: https://blog.csdn.net/qq_43949280/article/details/122669244
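
To make the "commit once a majority has acknowledged" rule concrete, here is a minimal standalone sketch (not etcd code; the names matchIndex and maybeCommit are invented for illustration): the leader keeps the highest log index each node is known to have replicated, and the commit index is the largest index that a majority of nodes have reached.

package main

import (
    "fmt"
    "sort"
)

// maybeCommit returns the highest log index that a majority of nodes
// (leader included) have replicated. Illustrative only, not etcd's code.
// Note: real raft additionally requires the entry at that index to be
// from the leader's current term.
func maybeCommit(matchIndex []uint64) uint64 {
    sorted := append([]uint64(nil), matchIndex...)
    sort.Slice(sorted, func(i, j int) bool { return sorted[i] > sorted[j] })
    // With n nodes, the (n/2+1)-th highest match index is held by a majority.
    return sorted[len(sorted)/2]
}

func main() {
    // 5-node cluster: leader at index 7, followers at 7, 6, 4, 3.
    fmt.Println(maybeCommit([]uint64{7, 7, 6, 4, 3})) // prints 6
}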

2. Log replication in etcd's raft module

2.1 Sending the message

As mentioned above, the Leader node handles the client's update operations, and that is where we start reading the code.

Besides the raft module itself, the etcd code base contains a raftexample module (under contrib/raftexample) that demonstrates how to use the raft module.

After going through the files in this module, the entry point for data storage appears to be kvstore.go. That file defines a newKVStore(...) function, and anything that uses the kvstore struct has to call newKVStore(...).

Searching for the call sites of this function shows that it is only called from main.go, as shown below:

// contrib/raftexample/main.go
func main() {
    cluster := flag.String("cluster", "http://127.0.0.1:9021", "comma separated cluster peers")
    id := flag.Int("id", 1, "node ID")
    kvport := flag.Int("port", 9121, "key-value server port")
    join := flag.Bool("join", false, "join an existing cluster")
    flag.Parse()

    proposeC := make(chan string)
    defer close(proposeC)
    confChangeC := make(chan raftpb.ConfChange)
    defer close(confChangeC)

    // raft provides a commit stream for the proposals from the http api
    var kvs *kvstore // declare the kvstore
    getSnapshot := func() ([]byte, error) { return kvs.getSnapshot() }
    commitC, errorC, snapshotterReady := newRaftNode(*id, strings.Split(*cluster, ","), *join, getSnapshot, proposeC, confChangeC)

    kvs = newKVStore(<-snapshotterReady, proposeC, commitC, errorC) // ref-1: create the kvstore

    // the key-value http handler will propose updates to raft
    serveHttpKVAPI(kvs, *kvport, confChangeC, errorC) // ref-2: use the kvstore
}

Next, let's look at the details of serveHttpKVAPI(...), which uses the kvstore at ref-2:

// serveHttpKVAPI starts a key-value server with a GET/PUT API and listens.
func serveHttpKVAPI(kv *kvstore, port int, confChangeC chan<- raftpb.ConfChange, errorC <-chan error) {
    srv := http.Server{ // create the http server
        Addr: ":" + strconv.Itoa(port),
        Handler: &httpKVAPI{
            store:       kv, // hand the kvstore created above to httpKVAPI's store field
            confChangeC: confChangeC,
        },
    }
    go func() { // start the http server
        if err := srv.ListenAndServe(); err != nil {
            log.Fatal(err)
        }
    }()

    // exit when raft goes down
    if err, ok := <-errorC; ok {
        log.Fatal(err)
    }
}

The key type now is httpKVAPI, which serves HTTP requests on top of the raft-backed key-value store. Here are its details:

// contrib/raftexample/httpapi.go
// Handler for a http based key-value store backed by raft
type httpKVAPI struct {
    store       *kvstore
    confChangeC chan<- raftpb.ConfChange
}

func (h *httpKVAPI) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    key := r.RequestURI
    defer r.Body.Close()
    switch {
    case r.Method == "PUT": // ref-3: key-value pairs are set via PUT, as described in the module's README
        v, err := ioutil.ReadAll(r.Body) // read the body sent by the client
        if err != nil {
            log.Printf("Failed to read on PUT (%v)\n", err)
            http.Error(w, "Failed on PUT", http.StatusBadRequest)
            return
        }
        h.store.Propose(key, string(v)) // ref-4: the kvstore stores the key-value pair
        // Optimistic-- no waiting for ack from raft. Value is not yet
        // committed so a subsequent GET on the key may return old value
        w.WriteHeader(http.StatusNoContent)
    case r.Method == "GET":
        if v, ok := h.store.Lookup(key); ok {
            w.Write([]byte(v))
        } else {
            http.Error(w, "Failed to GET", http.StatusNotFound)
        }
    case r.Method == "POST":
        url, err := ioutil.ReadAll(r.Body)
        if err != nil {
            log.Printf("Failed to read on POST (%v)\n", err)
            http.Error(w, "Failed on POST", http.StatusBadRequest)
            return
        }
        nodeId, err := strconv.ParseUint(key[1:], 0, 64)
        if err != nil {
            log.Printf("Failed to convert ID for conf change (%v)\n", err)
            http.Error(w, "Failed on POST", http.StatusBadRequest)
            return
        }
        cc := raftpb.ConfChange{
            Type:    raftpb.ConfChangeAddNode,
            NodeID:  nodeId,
            Context: url,
        }
        h.confChangeC <- cc
        // As above, optimistic that raft will apply the conf change
        w.WriteHeader(http.StatusNoContent)
    case r.Method == "DELETE":
        nodeId, err := strconv.ParseUint(key[1:], 0, 64)
        if err != nil {
            log.Printf("Failed to convert ID for conf change (%v)\n", err)
            http.Error(w, "Failed on DELETE", http.StatusBadRequest)
            return
        }
        cc := raftpb.ConfChange{
            Type:   raftpb.ConfChangeRemoveNode,
            NodeID: nodeId,
        }
        h.confChangeC <- cc
        // As above, optimistic that raft will apply the conf change
        w.WriteHeader(http.StatusNoContent)
    default:
        w.Header().Set("Allow", "PUT")
        w.Header().Add("Allow", "GET")
        w.Header().Add("Allow", "POST")
        w.Header().Add("Allow", "DELETE")
        http.Error(w, "Method not allowed", http.StatusMethodNotAllowed)
    }
}

At ref-4 we finally reach the entry point where the kvstore stores a key-value pair: the Propose(...) method. Here are its details:

// contrib/raftexample/kvstore.go
func (s *kvstore) Propose(k string, v string) {
    var buf bytes.Buffer
    // gob-encode the key-value pair into buf
    if err := gob.NewEncoder(&buf).Encode(kv{k, v}); err != nil {
        log.Fatal(err)
    }
    s.proposeC <- buf.String() // hand the encoded data to the channel
}

The code hands the data to the proposeC channel, so the next question is where that channel is read.
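
As a side note, the gob encoding used by Propose can be illustrated with a small self-contained sketch; the kv struct mirrors the one in kvstore.go, while the surrounding main function is made up for illustration. The consumer of the committed entries decodes the string back into a kv value the same way.

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
    "log"
)

// kv mirrors the struct used by raftexample's kvstore.
type kv struct {
    Key string
    Val string
}

func main() {
    // Encode, as Propose does.
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(kv{"hello", "world"}); err != nil {
        log.Fatal(err)
    }
    data := buf.String() // this is what travels over proposeC

    // Decode, as the consumer of the committed entries does.
    var out kv
    if err := gob.NewDecoder(bytes.NewBufferString(data)).Decode(&out); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s = %s\n", out.Key, out.Val)
}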

First locate the type that defines the proposeC field, then check where that field is used: it is a channel that is passed in when the kvstore is created.

Tracing further, this channel is a parameter of newKVStore(...), the function we analyzed at the beginning.

Going back to the code at ref-1, let's see how the newKVStore call passes this key channel along:

// contrib/raftexample/main.go
func main() {
    cluster := flag.String("cluster", "http://127.0.0.1:9021", "comma separated cluster peers")
    id := flag.Int("id", 1, "node ID")
    kvport := flag.Int("port", 9121, "key-value server port")
    join := flag.Bool("join", false, "join an existing cluster")
    flag.Parse()

    proposeC := make(chan string) // create proposeC
    defer close(proposeC)
    confChangeC := make(chan raftpb.ConfChange)
    defer close(confChangeC)

    // raft provides a commit stream for the proposals from the http api
    var kvs *kvstore // declare the kvstore
    getSnapshot := func() ([]byte, error) { return kvs.getSnapshot() }
    commitC, errorC, snapshotterReady := newRaftNode(*id, strings.Split(*cluster, ","), *join, getSnapshot, proposeC, confChangeC) // ref-5: pass proposeC to the raft node
    kvs = newKVStore(<-snapshotterReady, proposeC, commitC, errorC) // ref-1: create the kvstore

    // the key-value http handler will propose updates to raft
    serveHttpKVAPI(kvs, *kvport, confChangeC, errorC) // ref-2: use the kvstore
}

We can now conclude that the proposeC channel is read inside newRaftNode(...), which is called at ref-5. Its code is shown below:

// contrib/raftexample/raft.go
// newRaftNode initiates a raft instance and returns a committed log entry
// channel and error channel. Proposals for log updates are sent over the
// provided the proposal channel. All log entries are replayed over the
// commit channel, followed by a nil message (to indicate the channel is
// current), then new log entries. To shutdown, close proposeC and read errorC.
func newRaftNode(id int, peers []string, join bool, getSnapshot func() ([]byte, error), proposeC <-chan string,
    confChangeC <-chan raftpb.ConfChange) (<-chan *commit, <-chan error, <-chan *snap.Snapshotter) {

    commitC := make(chan *commit)
    errorC := make(chan error)

    rc := &raftNode{
        proposeC:    proposeC, // ref-6: store proposeC in the proposeC field
        confChangeC: confChangeC,
        commitC:     commitC,
        errorC:      errorC,
        id:          id,
        peers:       peers,
        join:        join,
        waldir:      fmt.Sprintf("raftexample-%d", id),
        snapdir:     fmt.Sprintf("raftexample-%d-snap", id),
        getSnapshot: getSnapshot,
        snapCount:   defaultSnapshotCount,
        stopc:       make(chan struct{}),
        httpstopc:   make(chan struct{}),
        httpdonec:   make(chan struct{}),

        logger: zap.NewExample(),

        snapshotterReady: make(chan *snap.Snapshotter, 1),
        // rest of structure populated after WAL replay
    }
    go rc.startRaft()
    return commitC, errorC, rc.snapshotterReady
}

Next we follow the uses of raftNode's proposeC field. It turns out that it is read in only one place.

Here is the code that reads from it:

// contrib/raftexample/raft.go
func (rc *raftNode) serveChannels() {
    snap, err := rc.raftStorage.Snapshot()
    if err != nil {
        panic(err)
    }
    rc.confState = snap.Metadata.ConfState
    rc.snapshotIndex = snap.Metadata.Index
    rc.appliedIndex = snap.Metadata.Index

    defer rc.wal.Close()

    ticker := time.NewTicker(100 * time.Millisecond)
    defer ticker.Stop()

    // send proposals over raft
    go func() {
        confChangeCount := uint64(0)

        for rc.proposeC != nil && rc.confChangeC != nil {
            select {
            case prop, ok := <-rc.proposeC: // read the key-value data
                if !ok {
                    rc.proposeC = nil
                } else {
                    // blocks until accepted by raft state machine
                    rc.node.Propose(context.TODO(), []byte(prop)) // ref-7: hand the client's key-value pair to raft
                }

            case cc, ok := <-rc.confChangeC:
                if !ok {
                    rc.confChangeC = nil
                } else {
                    confChangeCount++
                    cc.ID = confChangeCount
                    rc.node.ProposeConfChange(context.TODO(), cc)
                }
            }
        }
        // client closed channel; shutdown raft if not already
        close(rc.stopc)
    }()

    ...... // omitted
}

Next, let's see how the rc.node.Propose call at ref-7 handles the key-value write. Since Node is an interface, we need to find the implementation of Propose(...):

There is only one implementation in the raft module, in node.go:

// raft/node.go
func (n *node) Propose(ctx context.Context, data []byte) error {
    return n.stepWait(ctx, pb.Message{Type: pb.MsgProp, Entries: []pb.Entry{{Data: data}}})
}

The data is wrapped in a pb.Entry, and the message type of the pb.Message is set to pb.MsgProp. Next, the stepWait(...) method:

// raft/node.go
func (n *node) stepWait(ctx context.Context, m pb.Message) error {
    return n.stepWithWaitOption(ctx, m, true)
}

// Enter the state machine that consumes the message.
// Step advances the state machine using msgs. The ctx.Err() will be returned,
// if any.
func (n *node) stepWithWaitOption(ctx context.Context, m pb.Message, wait bool) error {
    if m.Type != pb.MsgProp { // if the message type is not pb.MsgProp
        select {
        case n.recvc <- m:
            return nil
        case <-ctx.Done():
            return ctx.Err()
        case <-n.done:
            return ErrStopped
        }
    }
    ch := n.propc             // the proposal channel
    pm := msgWithResult{m: m} // wrap the message in a msgWithResult
    if wait {                 // the caller passed true
        pm.result = make(chan error, 1) // channel that will carry the processing result
    }
    select {
    case ch <- pm: // ref-7: send the wrapped message
        if !wait {
            return nil
        }
    case <-ctx.Done():
        return ctx.Err()
    case <-n.done:
        return ErrStopped
    }
    select {
    case err := <-pm.result: // ref-8: wait for the processing result
        if err != nil {
            return err
        }
    case <-ctx.Done():
        return ctx.Err()
    case <-n.done:
        return ErrStopped
    }
    return nil
}

At ref-7 inside stepWithWaitOption the wrapped message is pushed onto n.propc; the key question now is where that channel is read.
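
The msgWithResult pattern used here, sending a request over a channel together with a result channel and then blocking on the result, can be shown with a minimal standalone sketch. The names request, result and worker below are invented for illustration and are not etcd code.

package main

import (
    "errors"
    "fmt"
)

// request carries the payload plus a channel the worker replies on,
// mirroring raft's msgWithResult.
type request struct {
    payload string
    result  chan error
}

func worker(reqs <-chan request) {
    for req := range reqs {
        // Process the payload, then report the outcome back to the sender.
        var err error
        if req.payload == "" {
            err = errors.New("empty proposal")
        }
        req.result <- err
        close(req.result)
    }
}

func main() {
    reqs := make(chan request)
    go worker(reqs)

    req := request{payload: "set foo=bar", result: make(chan error, 1)}
    reqs <- req               // analogous to "ch <- pm" in stepWithWaitOption
    fmt.Println(<-req.result) // analogous to waiting on pm.result
}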

Searching for the uses of propc leads to the following code:

// raft/node.go
func (n *node) run() {
    var propc chan msgWithResult
    var readyc chan Ready
    var advancec chan struct{}
    var rd Ready

    r := n.rn.raft

    lead := None

    for {
        if advancec != nil {
            readyc = nil
        } else if n.rn.HasReady() {
            // Populate a Ready. Note that this Ready is not guaranteed to
            // actually be handled. We will arm readyc, but there's no guarantee
            // that we will actually send on it. It's possible that we will
            // service another channel instead, loop around, and then populate
            // the Ready again. We could instead force the previous Ready to be
            // handled first, but it's generally good to emit larger Readys plus
            // it simplifies testing (by emitting less frequently and more
            // predictably).
            rd = n.rn.readyWithoutAccept()
            readyc = n.readyc
        }

        if lead != r.lead {
            if r.hasLeader() {
                if lead == None {
                    r.logger.Infof("raft.node: %x elected leader %x at term %d", r.id, r.lead, r.Term)
                } else {
                    r.logger.Infof("raft.node: %x changed leader from %x to %x at term %d", r.id, lead, r.lead, r.Term)
                }
                propc = n.propc // arm the proposal channel once a leader is known
            } else {
                r.logger.Infof("raft.node: %x lost leader %x at term %d", r.id, lead, r.Term)
                propc = nil
            }
            lead = r.lead
        }

        select {
        // TODO: maybe buffer the config propose if there exists one (the way
        // described in raft dissertation)
        // Currently it is dropped in Step silently.
        case pm := <-propc: // read a proposal from propc
            m := pm.m // unwrap the pb.Message
            m.From = r.id
            err := r.Step(m) // ref-9: feed the message to the raft state machine
            if pm.result != nil {
                pm.result <- err
                close(pm.result)
            }
        ...... // other cases omitted
        case <-advancec:
            n.rn.Advance(rd)
            rd = Ready{}
            advancec = nil
        case c := <-n.status:
            c <- getStatus(r)
        case <-n.stop:
            close(n.done)
            return
        }
    }
}

The run() method was analyzed in the previous post, so I won't repeat it here. Let's look at how the message is handled by the Step(...) call at ref-9:

// raft/raft.go
func (r *raft) Step(m pb.Message) error {
    // Handle the message term, which may result in our stepping down to a follower.
    switch { // examine the message's term
    case m.Term == 0: // the term was never set on our message, so this case is taken
        // local message
    case m.Term > r.Term:
        ...... // omitted
    case m.Term < r.Term:
        ...... // omitted
    }

    switch m.Type {
    case pb.MsgHup:
        ...... // omitted
    case pb.MsgVote, pb.MsgPreVote:
        ...... // omitted
    default:
        err := r.step(r, m) // ref-10: handle the message
        if err != nil {
            return err
        }
    }
    return nil
}

Let's continue with how the message is handled at ref-10. r.step is a function-typed field, so we need to find where it is assigned.

Since we are currently analyzing the Leader node, the relevant assignment is the one that sets r.step to stepLeader, shown below:

// raft/raft.go
func (r *raft) becomeLeader() {
    // TODO(xiangli) remove the panic when the raft implementation is stable
    if r.state == StateFollower {
        panic("invalid transition [follower -> leader]")
    }
    r.step = stepLeader // ref-11: assign stepLeader to the step field
    r.reset(r.Term)
    r.tick = r.tickHeartbeat
    r.lead = r.id
    r.state = StateLeader
    ...... // omitted
}
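
The pattern here is worth a closer look: raft keeps a function-typed step field and swaps the implementation whenever the node changes role, so Step itself stays role-agnostic. Below is a minimal standalone sketch of the same idea; all names are invented for illustration and are not etcd code.

package main

import "fmt"

type message struct{ payload string }

// node swaps its step function when its role changes, the same way raft
// assigns stepLeader / stepFollower / stepCandidate to r.step.
type node struct {
    step func(n *node, m message) error
}

func stepAsLeader(n *node, m message) error {
    fmt.Println("leader appends and replicates:", m.payload)
    return nil
}

func stepAsFollower(n *node, m message) error {
    fmt.Println("follower forwards to the leader:", m.payload)
    return nil
}

func (n *node) becomeLeader()   { n.step = stepAsLeader }
func (n *node) becomeFollower() { n.step = stepAsFollower }

func main() {
    n := &node{}
    n.becomeFollower()
    n.step(n, message{"set foo=bar"}) // handled by the follower logic
    n.becomeLeader()
    n.step(n, message{"set foo=baz"}) // handled by the leader logic
}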

The key function now is stepLeader (becomeLeader was also covered in the previous post). Here are the details of stepLeader:

// raft/raft.go
func stepLeader(r *raft, m pb.Message) error {
    // These message types do not require any progress for m.From.
    switch m.Type {
    case pb.MsgBeat:
        ...... // omitted
        return nil
    case pb.MsgCheckQuorum:
        ...... // omitted
        return nil
    case pb.MsgProp: // our message type is MsgProp, so this branch is taken
        if len(m.Entries) == 0 {
            r.logger.Panicf("%x stepped empty MsgProp", r.id)
        }
        if r.prs.Progress[r.id] == nil {
            // If we are not currently a member of the range (i.e. this node
            // was removed from the configuration while serving as leader),
            // drop any new proposals.
            return ErrProposalDropped
        }
        if r.leadTransferee != None {
            r.logger.Debugf("%x [term %d] transfer leadership to %x is in progress; dropping proposal", r.id, r.Term, r.leadTransferee)
            return ErrProposalDropped
        }

        for i := range m.Entries {
            e := &m.Entries[i]
            var cc pb.ConfChangeI
            if e.Type == pb.EntryConfChange { // configuration change
                var ccc pb.ConfChange
                if err := ccc.Unmarshal(e.Data); err != nil {
                    panic(err)
                }
                cc = ccc
            } else if e.Type == pb.EntryConfChangeV2 { // configuration change, V2 format
                var ccc pb.ConfChangeV2
                if err := ccc.Unmarshal(e.Data); err != nil {
                    panic(err)
                }
                cc = ccc
            }
            if cc != nil {
                alreadyPending := r.pendingConfIndex > r.raftLog.applied
                alreadyJoint := len(r.prs.Config.Voters[1]) > 0
                wantsLeaveJoint := len(cc.AsV2().Changes) == 0

                var refused string
                if alreadyPending {
                    refused = fmt.Sprintf("possible unapplied conf change at index %d (applied to %d)", r.pendingConfIndex, r.raftLog.applied)
                } else if alreadyJoint && !wantsLeaveJoint {
                    refused = "must transition out of joint config first"
                } else if !alreadyJoint && wantsLeaveJoint {
                    refused = "not in joint state; refusing empty conf change"
                }

                if refused != "" {
                    r.logger.Infof("%x ignoring conf change %v at config %s: %s", r.id, cc, r.prs.Config, refused)
                    m.Entries[i] = pb.Entry{Type: pb.EntryNormal}
                } else {
                    r.pendingConfIndex = r.raftLog.lastIndex() + uint64(i) + 1
                }
            }
        }

        if !r.appendEntry(m.Entries...) { // ref-13: append the entries to the raft log
            return ErrProposalDropped
        }
        r.bcastAppend() // ref-12: broadcast the entries to the other nodes
        return nil
    case pb.MsgReadIndex:
        ...... // omitted
        return nil
    }

    // All other message types require a progress for m.From (pr).
    pr := r.prs.Progress[m.From]
    if pr == nil {
        r.logger.Debugf("%x no progress available for %x", r.id, m.From)
        return nil
    }
    switch m.Type {
    ...... // omitted
    }
}

The call at ref-12 is what we care about next: how the data is broadcast to the other nodes.

// raft/raft.go
// bcastAppend sends RPC, with entries to all peers that are not up-to-date
// according to the progress recorded in r.prs.
func (r *raft) bcastAppend() {
    // r.prs tracks the replication progress of every node. Visit iterates
    // over all peers and sends each one the new entries.
    r.prs.Visit(func(id uint64, _ *tracker.Progress) {
        if id == r.id {
            return
        }
        r.sendAppend(id) // ref-14: send the entries to this peer
    })
}

Next, how the data is sent to a peer:

// raft/raft.go
// sendAppend sends an append RPC with new entries (if any) and the
// current commit index to the given peer.
func (r *raft) sendAppend(to uint64) {
    r.maybeSendAppend(to, true)
}

// maybeSendAppend sends an append RPC with new entries to the given peer,
// if necessary. Returns true if a message was sent. The sendIfEmpty
// argument controls whether messages with no entries will be sent
// ("empty" messages are useful to convey updated Commit indexes, but
// are undesirable when we're sending multiple messages in a batch).
func (r *raft) maybeSendAppend(to uint64, sendIfEmpty bool) bool {
    pr := r.prs.Progress[to]
    if pr.IsPaused() {
        return false
    }
    m := pb.Message{}
    m.To = to

    // Fetch the term and entries from r.raftLog. This ties back to the
    // entries appended to r.raftLog earlier (ref-13).
    term, errt := r.raftLog.term(pr.Next - 1)
    ents, erre := r.raftLog.entries(pr.Next, r.maxMsgSize)
    if len(ents) == 0 && !sendIfEmpty {
        return false
    }

    if errt != nil || erre != nil { // send snapshot if we failed to get term or entries
        ...... // error handling omitted
    } else {
        // assemble the message to send
        m.Type = pb.MsgApp // note the message type is pb.MsgApp
        m.Index = pr.Next - 1
        m.LogTerm = term
        m.Entries = ents
        m.Commit = r.raftLog.committed
        if n := len(m.Entries); n != 0 {
            switch pr.State {
            // optimistically increase the next when in StateReplicate
            case tracker.StateReplicate:
                last := m.Entries[n-1].Index
                pr.OptimisticUpdate(last)
                pr.Inflights.Add(last)
            case tracker.StateProbe:
                pr.ProbeSent = true
            default:
                r.logger.Panicf("%x is sending append in unhandled state %s", r.id, pr.State)
            }
        }
    }
    r.send(m) // hand the message off for sending
    return true
}
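
The pr.Next / pr.OptimisticUpdate bookkeeping above is easier to see with a tiny model of per-follower progress tracking. This is a simplified illustration, not etcd's tracker package: Next is the index of the next entry to send, Match is the highest index known to be replicated, and in the optimistic replicate state the leader bumps Next as soon as it sends.

package main

import "fmt"

// progress is a simplified model of raft's per-follower replication state.
type progress struct {
    Match uint64 // highest log index known to be replicated on the follower
    Next  uint64 // index of the next entry the leader will send
}

// optimisticUpdate advances Next right after sending entries up to lastSent,
// without waiting for the follower's acknowledgement.
func (p *progress) optimisticUpdate(lastSent uint64) { p.Next = lastSent + 1 }

// maybeUpdate records a successful append response for index n.
func (p *progress) maybeUpdate(n uint64) {
    if n > p.Match {
        p.Match = n
    }
    if n+1 > p.Next {
        p.Next = n + 1
    }
}

func main() {
    p := &progress{Match: 4, Next: 5}
    p.optimisticUpdate(8) // leader sent entries 5..8 in one MsgApp
    fmt.Println(p.Next)   // 9: the next batch starts after what was just sent
    p.maybeUpdate(8)      // follower acknowledged up to index 8
    fmt.Println(p.Match)  // 8
}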

The key point now is how r.send(m) gets the message out:

// raft/raft.go
// send schedules persisting state to a stable storage and AFTER that
// sending the message (as part of next Ready message processing).
func (r *raft) send(m pb.Message) {
    if m.From == None {
        m.From = r.id
    }
    if m.Type == pb.MsgVote || m.Type == pb.MsgVoteResp || m.Type == pb.MsgPreVote || m.Type == pb.MsgPreVoteResp {
        if m.Term == 0 {
            // All {pre-,}campaign messages need to have the term set when
            // sending.
            // - MsgVote: m.Term is the term the node is campaigning for,
            //   non-zero as we increment the term when campaigning.
            // - MsgVoteResp: m.Term is the new r.Term if the MsgVote was
            //   granted, non-zero for the same reason MsgVote is
            // - MsgPreVote: m.Term is the term the node will campaign,
            //   non-zero as we use m.Term to indicate the next term we'll be
            //   campaigning for
            // - MsgPreVoteResp: m.Term is the term received in the original
            //   MsgPreVote if the pre-vote was granted, non-zero for the
            //   same reasons MsgPreVote is
            panic(fmt.Sprintf("term should be set when sending %s", m.Type))
        }
    } else {
        if m.Term != 0 {
            panic(fmt.Sprintf("term should not be set when sending %s (was %d)", m.Type, m.Term))
        }
        // do not attach term to MsgProp, MsgReadIndex
        // proposals are a way to forward to the leader and
        // should be treated as local message.
        // MsgReadIndex is also forwarded to leader.
        if m.Type != pb.MsgProp && m.Type != pb.MsgReadIndex {
            m.Term = r.Term
        }
    }
    r.msgs = append(r.msgs, m) // append the message to r.msgs
}

The message is appended to r.msgs, so where is r.msgs consumed? There is only one place, where r.msgs is copied into another struct:

// raft/node.go
func newReady(r *raft, prevSoftSt *SoftState, prevHardSt pb.HardState) Ready {
    rd := Ready{
        Entries:          r.raftLog.unstableEntries(),
        CommittedEntries: r.raftLog.nextEnts(),
        Messages:         r.msgs, // expose r.msgs as Ready.Messages
    }
    ...... // omitted
    return rd
}

The Messages are actually transmitted here:

// contrib/raftexample/raft.go
func (rc *raftNode) serveChannels() {
    ...... // omitted

    // event loop on raft state machine updates
    for {
        select {
        case <-ticker.C:
            rc.node.Tick()

        // store raft entries to wal, then publish over commit channel
        case rd := <-rc.node.Ready():
            rc.wal.Save(rd.HardState, rd.Entries)
            if !raft.IsEmptySnap(rd.Snapshot) {
                rc.saveSnap(rd.Snapshot)
                rc.raftStorage.ApplySnapshot(rd.Snapshot)
                rc.publishSnapshot(rd.Snapshot)
            }
            rc.raftStorage.Append(rd.Entries)
            rc.transport.Send(rd.Messages) // hand the messages to the transport module, which is provided by etcd's etcdserver code
            applyDoneC, ok := rc.publishEntries(rc.entriesToApply(rd.CommittedEntries))
            if !ok {
                rc.stop()
                return
            }
            rc.maybeTriggerSnapshot(applyDoneC)
            rc.node.Advance()

        case err := <-rc.transport.ErrorC:
            rc.writeError(err)
            return

        case <-rc.stopc:
            rc.stop()
            return
        }
    }
}

2.2 Receiving the message

Within the cluster, the Followers receive the Leader's messages. Let's go straight to the becomeFollower function:

// raft/raft.go
func (r *raft) becomeFollower(term uint64, lead uint64) {
    r.step = stepFollower // ref-15: install the function that handles incoming messages
    r.reset(term)
    r.tick = r.tickElection
    r.lead = lead
    r.state = StateFollower
    r.logger.Infof("%x became follower at term %d", r.id, r.Term)
}

Next, the key function stepFollower:

// raft/raft.go
func stepFollower(r *raft, m pb.Message) error {
    switch m.Type {
    case pb.MsgProp:
        if r.lead == None {
            r.logger.Infof("%x no leader at term %d; dropping proposal", r.id, r.Term)
            return ErrProposalDropped
        } else if r.disableProposalForwarding {
            r.logger.Infof("%x not forwarding to leader %x at term %d; dropping proposal", r.id, r.lead, r.Term)
            return ErrProposalDropped
        }
        m.To = r.lead
        r.send(m)
    case pb.MsgApp: // the Leader sent pb.MsgApp above, so this branch is taken
        r.electionElapsed = 0
        r.lead = m.From
        r.handleAppendEntries(m) // process the entries carried by the message
    case pb.MsgHeartbeat:
        r.electionElapsed = 0
        r.lead = m.From
        r.handleHeartbeat(m)
    case pb.MsgSnap:
        r.electionElapsed = 0
        r.lead = m.From
        r.handleSnapshot(m)
    case pb.MsgTransferLeader:
        if r.lead == None {
            r.logger.Infof("%x no leader at term %d; dropping leader transfer msg", r.id, r.Term)
            return nil
        }
        m.To = r.lead
        r.send(m)
    case pb.MsgTimeoutNow:
        r.logger.Infof("%x [term %d] received MsgTimeoutNow from %x and starts an election to get leadership.", r.id, r.Term, m.From)
        // Leadership transfers never use pre-vote even if r.preVote is true; we
        // know we are not recovering from a partition so there is no need for the
        // extra round trip.
        r.hup(campaignTransfer)
    case pb.MsgReadIndex:
        if r.lead == None {
            r.logger.Infof("%x no leader at term %d; dropping index reading msg", r.id, r.Term)
            return nil
        }
        m.To = r.lead
        r.send(m)
    case pb.MsgReadIndexResp:
        if len(m.Entries) != 1 {
            r.logger.Errorf("%x invalid format of MsgReadIndexResp from %x, entries count: %d", r.id, m.From, len(m.Entries))
            return nil
        }
        r.readStates = append(r.readStates, ReadState{Index: m.Index, RequestCtx: m.Entries[0].Data})
    }
    return nil
}

Next, the key function handleAppendEntries:

// raft/raft.go
func (r *raft) handleAppendEntries(m pb.Message) {
    if m.Index < r.raftLog.committed { // the message is behind our commit index; just report the commit index back
        r.send(pb.Message{To: m.From, Type: pb.MsgAppResp, Index: r.raftLog.committed})
        return
    }

    // try to append the entries
    if mlastIndex, ok := r.raftLog.maybeAppend(m.Index, m.LogTerm, m.Commit, m.Entries...); ok {
        r.send(pb.Message{To: m.From, Type: pb.MsgAppResp, Index: mlastIndex})
    } else {
        ...... // omitted
    }
}

The key step now is r.raftLog.maybeAppend(m.Index, m.LogTerm, m.Commit, m.Entries...):

// raft/log.go
// maybeAppend returns (0, false) if the entries cannot be appended. Otherwise,
// it returns (last index of new entries, true).
func (l *raftLog) maybeAppend(index, logTerm, committed uint64, ents ...pb.Entry) (lastnewi uint64, ok bool) {
    if l.matchTerm(index, logTerm) {
        lastnewi = index + uint64(len(ents))
        ci := l.findConflict(ents)
        switch {
        case ci == 0:
        case ci <= l.committed:
            l.logger.Panicf("entry %d conflict with committed entry [committed(%d)]", ci, l.committed)
        default:
            offset := index + 1
            l.append(ents[ci-offset:]...) // ref-16: append the entries to the log
        }
        l.commitTo(min(committed, lastnewi)) // advance the commit index
        return lastnewi, true
    }
    return 0, false
}
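
The findConflict call above locates the first incoming entry whose term disagrees with what is already in the log; everything from that point on is truncated and replaced. A minimal standalone sketch of that rule (illustrative only, not the raftLog implementation):

package main

import "fmt"

type entry struct {
    Index uint64
    Term  uint64
}

// findConflict returns the index of the first incoming entry that is new or
// conflicting (same index, different term) with the existing log, or 0 if
// every incoming entry already matches. This mirrors the rule raftLog uses.
func findConflict(existing, incoming []entry) uint64 {
    byIndex := make(map[uint64]uint64, len(existing))
    for _, e := range existing {
        byIndex[e.Index] = e.Term
    }
    for _, e := range incoming {
        term, ok := byIndex[e.Index]
        if !ok || term != e.Term {
            return e.Index // first entry that is new or conflicting
        }
    }
    return 0
}

func main() {
    existing := []entry{{1, 1}, {2, 1}, {3, 2}}
    incoming := []entry{{2, 1}, {3, 3}, {4, 3}}   // entry 3 was rewritten in a later term
    fmt.Println(findConflict(existing, incoming)) // prints 3
}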

The append call at ref-16 is handled as follows:

// raft/log.go
func (l *raftLog) append(ents ...pb.Entry) uint64 {
    if len(ents) == 0 {
        return l.lastIndex()
    }
    if after := ents[0].Index - 1; after < l.committed {
        l.logger.Panicf("after(%d) is out of range [committed(%d)]", after, l.committed)
    }
    l.unstable.truncateAndAppend(ents)
    return l.lastIndex()
}

// raft/log_unstable.go
func (u *unstable) truncateAndAppend(ents []pb.Entry) {
    after := ents[0].Index
    switch {
    case after == u.offset+uint64(len(u.entries)):
        // after is the next index in the u.entries
        // directly append
        u.entries = append(u.entries, ents...)
    case after <= u.offset:
        u.logger.Infof("replace the unstable entries from index %d", after)
        // The log is being truncated to before our current offset
        // portion, so set the offset and replace the entries
        u.offset = after
        u.entries = ents
    default:
        // truncate to after and copy to u.entries
        // then append
        u.logger.Infof("truncate the unstable entries before index %d", after)
        u.entries = append([]pb.Entry{}, u.slice(u.offset, after)...)
        u.entries = append(u.entries, ents...)
    }
}
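
To make the three branches of truncateAndAppend concrete, here is a small standalone sketch that mimics the same logic on a plain slice; the unstable struct below is a simplified stand-in for raft's, and everything else is invented for illustration.

package main

import "fmt"

type entry struct {
    Index uint64
    Term  uint64
}

// unstable is a simplified stand-in for raft's unstable log:
// entries[i] has index offset+i.
type unstable struct {
    offset  uint64
    entries []entry
}

func (u *unstable) truncateAndAppend(ents []entry) {
    after := ents[0].Index
    switch {
    case after == u.offset+uint64(len(u.entries)):
        // contiguous with what we hold: just append
        u.entries = append(u.entries, ents...)
    case after <= u.offset:
        // everything we hold is being replaced
        u.offset = after
        u.entries = ents
    default:
        // keep the prefix [offset, after), drop the rest, then append
        u.entries = append([]entry{}, u.entries[:after-u.offset]...)
        u.entries = append(u.entries, ents...)
    }
}

func main() {
    u := &unstable{offset: 5, entries: []entry{{5, 1}, {6, 1}, {7, 1}}}
    // Incoming entries start at index 7: keep 5 and 6, replace 7, add 8.
    u.truncateAndAppend([]entry{{7, 2}, {8, 2}})
    fmt.Println(u.offset, u.entries) // 5 [{5 1} {6 1} {7 2} {8 2}]
}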

That concludes the walkthrough of the log replication flow.
