Ceph
Ceph is a highly scalable distributed storage system that provides object, block, and file storage services. It is designed for high performance, high reliability, and fault tolerance while remaining easy to manage and maintain. Its core components are a set of autonomously running daemons that use the CRUSH algorithm to place and replicate data automatically, giving high availability and load balancing. Ceph supports a range of storage media, such as HDDs, SSDs, and NVMe SSDs, as well as a range of network interconnects, such as 10GbE, 40GbE, and InfiniBand. It also provides rich APIs and tools for data management and monitoring. In short, Ceph is a powerful and flexible storage solution suited to enterprises and data centers of any size.
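As a taste of those APIs, here is a minimal sketch of the object interface through the Python librados bindings. It is an illustration only: the ceph.conf path, the pool name "data", and the object name "hello" are assumed placeholders, not values taken from this diagram.

# Minimal librados sketch (assumes a running cluster, a readable ceph.conf,
# and an existing pool named "data" -- all placeholders for illustration).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')          # I/O context bound to the pool
    try:
        ioctx.write_full('hello', b'ceph')      # CRUSH decides which OSDs hold the object
        print(ioctx.read('hello'))              # read it back from the primary OSD
    finally:
        ioctx.close()
finally:
    cluster.shutdown()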
Outline / Content
end
ACCEPT
LEASE ACK
propose_pending
refresh_from_paxos
Initialize Paxos
COLLECT
PG Map
Monitor handling
PEON
bootstrap
Paxos
Full Synchronizing
check uncommitted value
no
LEASE
Monitor Map
CRUSH Map
OP_COOKIE
Messenger receives a message
need sync
found quorum or detected election
STATE_UPDATING
OP_VICTORY
STATE_PROBING
finish or error
Thread 2
Spawn
PROBE and Sync
begin
Call Monitor's ms_dispatch
PROBE REPLY
leader collected uncommitted value
ACTIVE
OP_GET_CHUNK
paxos propose_pending
pick a new pn
preprocess_query
accept
LAST
leader commits locally and sends commit to quorum
Data
yes
PaxosService's dispatch
finish_round
STATE_PEON
is leader
PROPOSE
OSD Map
Forward to the leader
peon recv begin
Add Monitor to the Messenger dispatch queue
Bootstrap
BEGIN
peon_init
OP_GET_COOKIE_FULL/OP_GET_COOKIE_RECENT
Timeout
STATE_SYNCHRONIZING
LEADER
win_election
leader recv accept from quorum
UPDATING
Start DBStore (LevelDB)
PaxosService
pn > my accepted pn
finish_context
STATE_ELECTING
Create and initialize the Messenger
lose_election
Log
trigger_propose
STATE_UPDATING_PREVIOUS
ELECTION
Fetch and parse the monmap
call_election
recv collect
collect begin
COMMIT
RECOVERING
ELECTING
WRITING
lose
Thread 3
prepare_update
Election
STATE_WRITING
Initialize PaxosService
Thread 1
STATE_REFRESH
need update
STATE_ACTIVE
For PaxosService
win
STATE_LEADER
leader_init
should_propose: compute the proposal delay
LevelDB
OP_ACK
leader commits locally and sends commit to quorum
PROBE
STATE_RECOVERING
OP_CHUNK/OP_LAST_CHUNK
look for uncommitted value
Paxos becomes UPDATING
Start the periodic timer tasks
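Taken together, the Paxos labels above describe one round of the monitor consensus: the newly elected leader picks a new pn and sends COLLECT; each peon answers with LAST, handing back any uncommitted value; the leader then sends BEGIN, the peons reply ACCEPT, and once the whole quorum has accepted, the leader commits locally, sends COMMIT, and grants the LEASE. The following is a deliberately simplified, single-round model of that message flow in plain Python; the Leader/Peon classes and their method names are invented for illustration and are not Ceph's actual C++ interfaces (the lease step is omitted).

# Toy single-round model of the COLLECT/LAST/BEGIN/ACCEPT/COMMIT flow above.
# All names here are illustrative, not Ceph's real classes or methods.

class Peon:
    def __init__(self):
        self.accepted_pn = 0        # highest proposal number accepted so far
        self.uncommitted = None     # (pn, value) staged but not yet committed
        self.committed = []

    def handle_collect(self, pn):
        # COLLECT: adopt a newer pn and reply LAST with any uncommitted value.
        if pn > self.accepted_pn:
            self.accepted_pn = pn
        return ("LAST", self.accepted_pn, self.uncommitted)

    def handle_begin(self, pn, value):
        # BEGIN: stage the value and reply ACCEPT if the pn still matches.
        if pn == self.accepted_pn:
            self.uncommitted = (pn, value)
            return ("ACCEPT", pn)
        return None

    def handle_commit(self, pn):
        # COMMIT: make the staged value durable.
        if self.uncommitted and self.uncommitted[0] == pn:
            self.committed.append(self.uncommitted[1])
            self.uncommitted = None

class Leader:
    def __init__(self, peons):
        self.peons = peons
        self.pn = 100               # "pick a new pn"

    def propose(self, value):
        # COLLECT phase: gather LAST replies; adopt any uncommitted value found.
        for peon in self.peons:
            _, _, uncommitted = peon.handle_collect(self.pn)
            if uncommitted is not None:
                value = uncommitted[1]      # leader collected uncommitted value
        # BEGIN/ACCEPT phase: proceed only once the whole quorum has accepted.
        acks = [peon.handle_begin(self.pn, value) for peon in self.peons]
        if all(ack is not None for ack in acks):
            # Leader commits locally and sends COMMIT to the quorum.
            for peon in self.peons:
                peon.handle_commit(self.pn)
            return value
        return None

if __name__ == "__main__":
    quorum = [Peon(), Peon()]
    print(Leader(quorum).propose(b"new map epoch"))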