DistributedSystem
2017-06-09 12:13:02
MIT 6.824 paper summary notes
highly scalable
RDD(resilient distributed dataset)
In-memory Data
Distributed Transaction
Client
99.9% standard
strong C
users' computers are not reliable
P2P
Map work2
hard to find data item
look up resources
Execute phase: read & calculate
Lock server service for all objects
Optimistic concurrency control (OCC)
version vector
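A version vector (as in Dynamo-style conflict detection) can be compared componentwise; a minimal sketch with an illustrative `dominates` helper, not any library's API:

```python
# Version-vector comparison sketch: vector a supersedes b iff a >= b
# componentwise; if neither dominates, the writes were concurrent and
# must be resolved (conflict version control & merge).

def dominates(a: dict, b: dict) -> bool:
    """True if 'a' has seen every event that 'b' has seen."""
    return all(a.get(n, 0) >= b.get(n, 0) for n in set(a) | set(b))

v1 = {"nodeA": 2, "nodeB": 1}
v2 = {"nodeA": 1, "nodeB": 1}
v3 = {"nodeA": 1, "nodeB": 2}
assert dominates(v1, v2)                                 # v1 supersedes v2
assert not dominates(v1, v3) and not dominates(v3, v1)   # concurrent: merge
```

Node names (`nodeA`, `nodeB`) and counter values are hypothetical.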
replicated state machine
PNUTS
\"record master\" per record
transformation
R + W > N
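The quorum condition R + W > N (Dynamo-style) guarantees that every read quorum intersects every write quorum, so a read contacts at least one replica holding the latest acknowledged write. A one-line sketch:

```python
# Quorum-overlap check: with N replicas, any R-replica read set must
# intersect any W-replica write set whenever R + W > N.

def quorums_overlap(n: int, r: int, w: int) -> bool:
    return r + w > n

# N=3 with R=2, W=2 guarantees overlap; R=1, W=1 does not.
assert quorums_overlap(3, 2, 2) is True
assert quorums_overlap(3, 1, 1) is False
```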
Use Raft-like Paxos
Map work1
release consistency
hinted handoff
finger tables for fewer hops
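A Chord finger table lets lookups skip exponentially far around the identifier ring, giving O(log N) hops instead of walking successors one by one. A minimal construction sketch on a 6-bit ring (node IDs here are illustrative, and all nodes are known up front, unlike the real join protocol):

```python
# Chord-style finger table on an m-bit identifier ring:
# finger[i] = first live node at or after node + 2^i (mod 2^m).

M = 6  # 2^6 = 64 identifiers

def build_fingers(node: int, nodes: list) -> list:
    ring = sorted(nodes)
    fingers = []
    for i in range(M):
        start = (node + 2 ** i) % (2 ** M)
        # successor of 'start', wrapping around the ring
        succ = next((n for n in ring if n >= start), ring[0])
        fingers.append(succ)
    return fingers

nodes = [1, 8, 14, 21, 32, 38, 42, 48, 51, 56]
assert build_fingers(8, nodes) == [14, 14, 14, 21, 32, 42]
```

Each finger roughly halves the remaining distance to any key, which is where the fewer-hops property comes from.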
cache metadata
data immutable
Key-based partition
User specific consistency
often disconnected
Persistence
sloppy quorums when nodes fail
BitTorrent
Master(metadata)
Chord
version #
Zookeeper
all reads and writes must acquire a lock
solve
through YMB with master location bits
Spinnaker
one-sided RDMA (remote direct memory access) (network adapter)
Snapshot
Primary/Backup for each node
create new RDD
GFS
Raft
hierarchical file system
Map workn
Log replication
Amazon
Recovery: log
causes an abort if either fails
Main task: distributed commit & concurrency control
strong C with log
if 2f + 1 servers
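The 2f + 1 sizing rule (Raft/Paxos-style) says a cluster of 2f + 1 servers can lose f of them and still have a majority quorum for leader election and log replication. A tiny arithmetic sketch:

```python
# Majority-quorum arithmetic: with n = 2f + 1 servers, the n - f
# survivors after f failures still form a majority, so the cluster
# can keep electing leaders and committing log entries.

def majority(n: int) -> int:
    return n // 2 + 1

f = 2
n = 2 * f + 1                    # 5 servers tolerate 2 failures
assert n - f >= majority(n)      # 3 live servers >= majority of 3
```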
conflict version control & merge
only write diffs
Distributed Computation
Reduce work1
write linearizability; can read stale data
Primary/Backup Replication
Commit phase:lock + validate(version #)
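The OCC execute/commit split can be sketched in a few lines. This is a hypothetical single-process model (the FaRM commit also locks the write set and runs over RDMA, which is elided here):

```python
# Optimistic concurrency control sketch: execute reads without locks,
# remembering each object's version #; at commit, validate that those
# versions are unchanged, aborting on any conflict.

class Obj:
    def __init__(self, value):
        self.value, self.version = value, 0

def commit(read_set: dict, write_set: dict) -> bool:
    """read_set: {obj: version seen}; write_set: {obj: new value}."""
    for obj, seen in read_set.items():
        if obj.version != seen:
            return False            # a conflicting write happened: abort
    for obj, value in write_set.items():
        obj.value = value
        obj.version += 1            # install writes, bump versions
    return True

x = Obj(1)
t_read = {x: x.version}             # execute phase: read & remember version
x.version += 1                      # concurrent writer bumps the version
assert commit(t_read, {x: 2}) is False   # validation fails, txn aborts
```

This also shows why OCC "works well for only few conflicts": every conflict costs a full abort-and-retry.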
tracker server
ChunkServer2
Reduce work2
Master
Distributed shared memory
FaRM
eventual C
DHT(distributed hash table)
improve performance
highly available
Performance: slow
API options
failure
may return stale data, or use version # to sync reads
same replicated data across all regions
Lineage graph
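The lineage idea can be sketched with a toy class (illustrative, not Spark's API): each transformation records its parent and a recompute function, so a lost partition is rebuilt by replaying the lineage graph rather than restoring a checkpoint.

```python
# RDD-style lineage sketch: data is immutable, each transformation
# creates a new RDD holding a pointer to its parent (a lineage edge)
# plus the function needed to recompute it from that parent.

class RDD:
    def __init__(self, compute, parent=None):
        self.compute, self.parent = compute, parent   # lineage edge

    def map(self, f):                 # transformation: creates a new RDD
        return RDD(lambda: [f(x) for x in self.collect()], parent=self)

    def collect(self):                # action: triggers recomputation
        return self.compute()

base = RDD(lambda: [1, 2, 3])
doubled = base.map(lambda x: x * 2)
assert doubled.collect() == [2, 4, 6]
```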
record-level mastering
Key/Value store
Microsoft
Membership (gossip-based)
replica
communication
state transfer (state information may be too large)
stale read
Dynamo
action
operation log
send write diffs to all copies when release lock
always writeable
easily attacked by malicious participants
fault tolerance
LinkedIn
Pros: spreads network/caching costs over users
Spark
fault tolerant distributed consensus algorithm/protocol
coordination service for distributed system
ChunkServer1
Log channels between Primary/Backup
Problem 2: whole-page replacement amplifies writes
Leader election
timestamp
Store result in
works well when there are few conflicts
Mulin
great for iterative algorithms & data-mining tools
Problem 1: blocks the whole page - false conflicts
YMB(Yahoo! message broker)
P2P with DHT (zero-hop finger table)
relaxed C
ChunkServerN
distributed Key/Value store
General way: two-phase commit (2PC)
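A minimal 2PC coordinator sketch (in-process "participants" are hypothetical; a real implementation adds write-ahead logging for recovery and timeouts for blocked participants):

```python
# Two-phase commit: phase 1 asks every participant to prepare and
# vote; only a unanimous "yes" lets phase 2 commit everywhere, and
# any "no" vote aborts everywhere.

class Participant:
    def __init__(self, ok: bool):
        self.ok, self.state = ok, "init"
    def prepare(self) -> bool:
        return self.ok              # vote yes iff locally able to commit
    def commit(self):
        self.state = "committed"
    def abort(self):
        self.state = "aborted"

def two_phase_commit(participants) -> str:
    if all(p.prepare() for p in participants):   # phase 1: prepare/vote
        for p in participants:                   # phase 2: commit
            p.commit()
        return "commit"
    for p in participants:                       # phase 2: abort
        p.abort()
    return "abort"

assert two_phase_commit([Participant(True), Participant(True)]) == "commit"
assert two_phase_commit([Participant(True), Participant(False)]) == "abort"
```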
Cons:
MapReduce
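The map-work / reduce-work pipeline can be sketched as an in-process word count (real MapReduce distributes map and reduce tasks across workers, with input and output on GFS):

```python
# Toy MapReduce word count: map tasks emit (key, value) pairs, a
# shuffle groups pairs by key, and reduce tasks fold each group.

from collections import defaultdict

def map_task(doc: str):
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_task(key, values):
    return key, sum(values)

docs = ["a b a", "b c"]
pairs = [p for d in docs for p in map_task(d)]        # Map work1..workN
groups = shuffle(pairs)                               # group by key
counts = dict(reduce_task(k, v) for k, v in groups.items())
assert counts == {"a": 2, "b": 2, "c": 1}
```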
Merkle tree
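Dynamo-style replicas compare Merkle tree hashes to find out-of-sync key ranges cheaply. A flattened sketch (one root over leaf hashes; the real tree recurses so only diverging subtrees are exchanged):

```python
# Merkle-root sketch for anti-entropy: equal roots mean the replicas'
# key ranges match without transferring the data itself.

import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaves: list) -> str:
    """Hash over the leaf hashes: any changed leaf changes the root."""
    return h("".join(h(leaf) for leaf in leaves).encode())

a = [b"k1=v1", b"k2=v2"]
b = [b"k1=v1", b"k2=STALE"]
assert merkle_root(a) == merkle_root(list(a))   # in sync: roots match
assert merkle_root(a) != merkle_root(b)         # drift detected cheaply
```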
relaxed/timeline C
hypervisor
Bayou
Reduce workn
decentralized system
Conventional way: invalidate other servers' whole page