Placement groups are a logical grouping of EC2 instances that lets you influence how instances are placed on the underlying hardware.
Types of placement groups -
Cluster - instances bucketed into a low-latency group in a single Availability Zone.
Same rack
Same AZ
Same hardware
Pro: Great network performance (up to 10 Gbps bandwidth between instances)
Con: If the rack fails, all instances will suffer downtime.
Use-cases
Big Data jobs that need to complete very fast
Apps requiring extremely low latency and high network throughput
Workloads that can accept the risk of simultaneous instance failure.
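Creating a cluster placement group is a single API call. Below is a minimal sketch of the request parameters; the group name is illustrative, and in practice you would pass them to boto3's `ec2.create_placement_group` with valid AWS credentials.

```python
# Request parameters for a cluster placement group.
# With boto3 (AWS SDK for Python) you would run:
#   boto3.client("ec2").create_placement_group(**cluster_params)
cluster_params = {
    "GroupName": "big-data-cluster",  # illustrative name, not from the post
    "Strategy": "cluster",            # pack instances close together in one AZ
}

# Instances are then launched into the group via the Placement argument:
#   ec2.run_instances(..., Placement={"GroupName": "big-data-cluster"})
print(cluster_params["Strategy"])  # -> cluster
```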
Spread - instances are spread across different hardware (max 7 instances per group per AZ)
Minimises the failure risk
EC2 instances are placed on different hardware.
Pro: Reduced risk of simultaneous failure
Pro: can span multiple AZs in the same Region
Con: limited to 7 running instances per group per AZ
Use-cases
Apps requiring high availability
Critical apps where each instance must be isolated from the failure of any other instance.
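A spread group is created the same way with a different strategy, and the 7-instances-per-AZ limit is worth checking before a launch request. A minimal sketch (the group name and helper are hypothetical; in practice the dict goes to boto3's `ec2.create_placement_group`):

```python
# Request parameters for a spread placement group.
# With boto3: boto3.client("ec2").create_placement_group(**spread_params)
spread_params = {
    "GroupName": "critical-app-spread",  # illustrative name
    "Strategy": "spread",                # each instance on distinct hardware
}

MAX_SPREAD_INSTANCES_PER_AZ = 7  # hard AWS limit per group per AZ

def can_launch(running_in_az: int, requested: int) -> bool:
    """Hypothetical pre-check of the 7-instance limit before run_instances."""
    return running_in_az + requested <= MAX_SPREAD_INSTANCES_PER_AZ

print(can_launch(5, 2))  # -> True: 7 total still fits
print(can_launch(5, 3))  # -> False: 8 exceeds the per-AZ limit
```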
Partition - spreads instances across many different partitions which rely on different sets of racks of hardware within an AZ. Capable of scaling to 100s of instances per group (Hadoop, Cassandra, Kafka)
instances spread across partitions in multiple AZs
can have up to 7 partitions per AZ
each partition can have many EC2 instances
each partition maps to its own set of racks
Pro : safe from rack failure
Pro : allows 100s of EC2 instances to be set up
Pro : instances in a partition do not share hardware racks with other partitions
partition information available as metadata to the EC2 instances
Use-cases
Partition-aware apps and Big Data apps: HDFS, HBase, Cassandra, Kafka
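A partition group adds a PartitionCount to the create call, and a launched instance can discover its own partition through instance metadata. A minimal sketch (group name is illustrative; the dict would go to boto3's `ec2.create_placement_group`):

```python
# Request parameters for a partition placement group.
# With boto3: boto3.client("ec2").create_placement_group(**partition_params)
partition_params = {
    "GroupName": "kafka-partition-group",  # illustrative name
    "Strategy": "partition",
    "PartitionCount": 7,  # up to 7 partitions per AZ
}

# A specific partition can be targeted at launch time:
#   ec2.run_instances(..., Placement={"GroupName": "kafka-partition-group",
#                                     "PartitionNumber": 3})
# From inside an instance, partition-aware apps can read their placement
# from the instance metadata service, e.g.:
#   curl http://169.254.169.254/latest/meta-data/placement/partition-number
assert 1 <= partition_params["PartitionCount"] <= 7
```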