🔄 MongoDB Replica Set High Availability

Interactive Demo: Automatic Failover & Data Protection

🛡️ Why This Matters: The Zero-Downtime Promise

Traditional databases often require complex clustering solutions, expensive third-party tools, or manual failover procedures that can take minutes or hours to restore service during failures. During this time, your application is completely unavailable.

MongoDB's differentiator: Built-in replica sets provide automatic failover in seconds, continuous data protection, and transparent read scaling. No additional clustering software, no manual intervention, no single points of failure.

Real Impact: Applications achieve 99.99%+ uptime with automatic recovery, eliminate data loss risks, and scale read operations across multiple servers without architectural changes.

📋 What This Demo Shows

⚙️ MongoDB Node Priority Configuration for 2-Node Quorum Scenarios

Customer Concern: "What happens when a primary fails and only 2 nodes remain? Won't MongoDB become read-only?"

🔧 Priority-Based Election Configuration

MongoDB maintains write availability through strategic priority settings and proper replica set sizing with data-bearing nodes:

Scenario 1: 3-Node Standard Replica Set
  • Node A (Primary): priority 2
  • Node B (Secondary): priority 1
  • Node C (Secondary): priority 1

Result: If the primary fails, Node B or C can become primary via majority vote (2 of 3 votes remain, preserving the majority)
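In mongosh, this 3-node layout corresponds to a replica set configuration like the following sketch; the set name matches the demo's "myReplicaSet", but the hostnames are illustrative placeholders, not values from this demo:

```javascript
// Illustrative initiation of the 3-node set; hostnames are placeholders.
rs.initiate({
  _id: "myReplicaSet",
  members: [
    { _id: 0, host: "mongo-a.example.com:27017", priority: 2 },
    { _id: 1, host: "mongo-b.example.com:27017", priority: 1 },
    { _id: 2, host: "mongo-c.example.com:27017", priority: 1 }
  ]
});
```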

Scenario 2: Data-Bearing Priority Config
  • Node A (Primary): priority 3
  • Node B (Secondary): priority 2
  • Node C (Secondary): priority 1

Configuration Command:

rs.reconfig({..., members: [{..., priority: 3}, {..., priority: 2}, {..., priority: 1}]})

Result: Node A is preferred as primary, but Node B can take over if A fails and a voting majority remains
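Expanded, this reconfiguration is usually done by editing the live configuration document in mongosh rather than writing it from scratch; the member indexes below assume Nodes A, B, and C appear in that order:

```javascript
// Fetch the current config, adjust priorities, and reapply it.
// Assumes members[0..2] correspond to Nodes A, B, and C.
cfg = rs.conf();
cfg.members[0].priority = 3;
cfg.members[1].priority = 2;
cfg.members[2].priority = 1;
rs.reconfig(cfg);
```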

🎯 Reality Check: When MongoDB Stays Available

  • Majority Rule: With proper configuration, 2 out of 3 nodes can maintain write availability
  • Data-Bearing Nodes: All nodes store data and can serve reads, providing better resource utilization
  • Priority Weighting: Higher priority nodes become primary when available, ensuring predictable failover
  • No Single Points: Multiple nodes can serve as primary, unlike single-master systems

⚠️ True 2-Node Limitation

MongoDB intentionally becomes read-only in a true 2-node scenario to prevent split-brain: the surviving node holds only 1 of 2 votes and cannot win an election. This is a deliberate design choice, not a defect; it favors data consistency over availability when a network partition occurs.

Best Practice: Deploy with 3+ data-bearing nodes for optimal availability and resource utilization.
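The majority arithmetic behind these rules can be sketched in a few lines of JavaScript; this is an illustrative calculation, not MongoDB driver code:

```javascript
// A strict majority of voting members is required to elect a primary.
function majorityNeeded(votingMembers) {
  return Math.floor(votingMembers / 2) + 1;
}

// Can the surviving partition still elect a primary (i.e., accept writes)?
function canAcceptWrites(survivingVoters, totalVoters) {
  return survivingVoters >= majorityNeeded(totalVoters);
}

console.log(canAcceptWrites(2, 3)); // true: 2 of 3 is a majority
console.log(canAcceptWrites(1, 3)); // false: minority partition goes read-only
console.log(canAcceptWrites(1, 2)); // false: why true 2-node sets lose writes
```

This is exactly why the best practice above calls for 3+ nodes: a 3-node set tolerates one failure while keeping a majority, a 2-node set tolerates none.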

🐘 PostgreSQL WAL-Based High Availability Comparison

PostgreSQL achieves high availability through Write-Ahead Logging (WAL) replication, an approach also used by PostgreSQL-compatible services such as Amazon Aurora and AlloyDB:

PostgreSQL Streaming Replication:

Primary Server (WAL Writer) → WAL Stream → Standby Server (WAL Receiver)

Failure Scenario Walkthrough:
1. Primary Failure Detection:
  • The connection manager (pgpool-II or HAProxy) detects the primary failure
  • Application connections begin to fail

2. Manual Failover Process:
  • A DBA manually promotes the standby with pg_promote()
  • Or promotion is automated with tools such as Patroni or repmgr

3. Application Reconnection:
  • Connection string updates are required
  • The load balancer must be reconfigured
  • An application restart may be needed
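The detect-then-promote loop that tools like Patroni automate can be caricatured as a simple threshold check; `failoverMonitor` and its inputs are purely illustrative stand-ins, not a real orchestration API:

```javascript
// Toy sketch of failover orchestration: promote the standby only after
// several consecutive failed health checks, to avoid flapping.
function failoverMonitor(healthChecks, failureThreshold = 3) {
  let consecutiveFailures = 0;
  for (const healthy of healthChecks) {
    consecutiveFailures = healthy ? 0 : consecutiveFailures + 1;
    if (consecutiveFailures >= failureThreshold) {
      return "promote-standby"; // here a real tool would run pg_promote()
    }
  }
  return "primary-ok";
}

console.log(failoverMonitor([true, false, false, false])); // "promote-standby"
console.log(failoverMonitor([true, false, true, false]));  // "primary-ok"
```

The threshold illustrates the detection-time trade-off noted below: waiting for several failed checks avoids false failovers but adds seconds to the recovery time.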

⚠️ PostgreSQL HA Limitations:
  • Manual Intervention: Requires DBA action or complex orchestration tools
  • Single Point of Failure: Only one primary can accept writes
  • Application Awareness: Apps must handle connection failures and reconnects
  • Split-Brain Risk: Without proper fencing, multiple primaries possible
  • Failover Time: Typically 30-60 seconds minimum for detection + promotion

MongoDB Automatic Failover Advantage

  • Automatic Detection: heartbeat failure detected (10s election timeout)
  • Automatic Election: priority-based voting (2-3s)
  • Automatic Recovery: new primary ready (~15s total)

✅ MongoDB HA Advantages:
  • Zero Manual Intervention: Fully automated failover process
  • Driver Intelligence: Applications automatically discover new primary
  • Sub-15 Second Recovery: Typical failover in under 15 seconds
  • Built-in Split-Brain Prevention: Majority voting prevents dual primaries
  • Transparent to Apps: Retryable writes handle transient failures
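As a sketch of the driver-side behavior, here is the official MongoDB Node.js driver pointed at a hypothetical three-host replica set; the hostnames and namespace are placeholders, and running this requires a live replica set:

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  // retryWrites is on by default in modern drivers; shown explicitly here.
  const client = new MongoClient(
    "mongodb://mongo-a:27017,mongo-b:27017,mongo-c:27017/?replicaSet=myReplicaSet&retryWrites=true"
  );
  await client.connect();

  // If the primary fails mid-operation, the driver discovers the new
  // primary and retries this write once, without surfacing an error.
  await client.db("app").collection("orders").insertOne({ status: "placed" });
  await client.close();
}

main().catch(console.error);
```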

🏆 Real-World Impact Comparison

| Aspect | MongoDB Replica Sets | PostgreSQL + WAL |
| --- | --- | --- |
| Failover Time | 10-15 seconds (automatic) | 30-60+ seconds (manual/scripted) |
| DBA Intervention | None required | Manual promotion or complex tooling |
| Application Changes | None (driver handles failover automatically) | Connection handling, retries |
| Split-Brain Prevention | Built-in majority consensus | Requires fencing mechanisms |

🎮 Demo Controls

  • Write Availability: 100.0%
  • Replica Members: 3
  • Writes Completed: 0
  • Reads Completed: 0
  • Write Retries: 0
  • Read Retries: 0

🏗️ MongoDB Replica Set Architecture

Replica Set: "myReplicaSet"

📖 Read Preference Options

Primary - All reads from primary (strong consistency)
Primary Preferred - Primary first, then secondaries
Secondary - Only from secondaries (read scaling)
Secondary Preferred - Secondaries first, then primary
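With the Node.js driver, a read preference can be set per operation; the database, collection, and hostnames below are placeholders, and running this requires a live replica set:

```javascript
const { MongoClient } = require("mongodb");

async function main() {
  const client = new MongoClient(
    "mongodb://mongo-a:27017,mongo-b:27017,mongo-c:27017/?replicaSet=myReplicaSet"
  );
  await client.connect();

  // Prefer a secondary for this query, falling back to the primary if no
  // secondary is available (trades read-your-writes consistency for scale).
  const docs = await client
    .db("app")
    .collection("products")
    .find({}, { readPreference: "secondaryPreferred" })
    .toArray();

  await client.close();
}

main().catch(console.error);
```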

📊 Replica Set Activity Log