
Services

All six services run as Kubernetes Deployment objects on Amazon EKS, managed by the shared Helm chart (helm/appointment-service/). Each service has its own namespace for network policy isolation and its own IAM role via IRSA for least-privilege AWS API access.

API Gateway

Technology: AWS ALB + Amazon API Gateway

The entry point for all client requests. Terminates TLS, enforces OIDC authentication, applies rate limiting, and routes traffic to the appropriate backend service running on EKS. Country-prefixed path routing (/fr/, /de/) enables multi-country dispatch without per-country deployments.
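The prefix dispatch can be sketched as a small routing function. This is only an illustration of the scheme; the real dispatch happens in ALB listener rules / API Gateway routes, and the `SUPPORTED_COUNTRIES` set here is a hypothetical launch set, not from the source:

```python
# Illustrative sketch of country-prefixed path dispatch. In production this
# logic lives in ALB listener rules, not in application code.
SUPPORTED_COUNTRIES = {"fr", "de"}  # hypothetical launch set

def split_country_prefix(path: str):
    """Return (country, downstream_path), or None for an unknown prefix."""
    parts = path.lstrip("/").split("/", 1)
    if not parts[0] or parts[0] not in SUPPORTED_COUNTRIES:
        return None  # falls through to 404 / default handling
    return parts[0], "/" + (parts[1] if len(parts) > 1 else "")
```

Because the country is encoded in the path, a single deployment per service can serve every country; adding a country is a routing change, not a new stack.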


Search Service

Runtime: EKS pods (horizontally scaled, r6i.xlarge node group)
Data store: Amazon OpenSearch
Cache: ElastiCache Redis

Handles patient queries combining location, specialty, and availability filters. The service translates natural-language-style queries into OpenSearch DSL:

"dentist within 10km of Paris with availability this week"
  → geo_distance filter + range filter on next_available_slot
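The translation above can be sketched as a query builder. The `geo_distance` and `range` filters are standard OpenSearch DSL; the `next_available_slot` field comes from the text, while `specialty` and `location` are illustrative field names:

```python
from datetime import date, timedelta

def build_search_query(specialty, lat, lon, radius_km, days_ahead=7):
    """Translate a parsed patient query into OpenSearch DSL.

    `next_available_slot` is the field named in the docs; `specialty`
    and `location` are assumed mapping names.
    """
    today = date.today()
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"specialty": specialty}},
                    {"geo_distance": {
                        "distance": f"{radius_km}km",
                        "location": {"lat": lat, "lon": lon},
                    }},
                    {"range": {"next_available_slot": {
                        "gte": today.isoformat(),
                        "lte": (today + timedelta(days=days_ahead)).isoformat(),
                    }}},
                ]
            }
        }
    }

# "dentist within 10km of Paris with availability this week"
query = build_search_query("dentist", 48.8566, 2.3522, 10)
```

Everything goes in the `bool.filter` clause (not `must`) because these are exact predicates with no relevance scoring, which also makes them cacheable by OpenSearch.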

Index strategy: One index per country (practitioners_fr, practitioners_de, …). This allows country-specific field mappings and simplifies data residency compliance.

Caching: Frequent search queries are cached in Redis with a 30–60 second TTL. Cache misses fall through to OpenSearch.
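A minimal sketch of the read-through cache, assuming a standard Redis client with `get`/`setex`. The key scheme (canonical-JSON hash, `search:{country}:` prefix) and the jittered TTL within the 30–60 s range are illustrative choices, not from the source:

```python
import hashlib
import json
import random

SEARCH_CACHE_TTL_RANGE = (30, 60)  # seconds, per the docs

def cache_key(query: dict, country: str) -> str:
    # Canonical JSON so structurally equal queries share one cache entry.
    payload = json.dumps(query, sort_keys=True, separators=(",", ":"))
    return f"search:{country}:{hashlib.sha256(payload.encode()).hexdigest()[:16]}"

def cached_search(redis, opensearch_fn, query: dict, country: str):
    """Read-through cache: try Redis first, fall through to OpenSearch."""
    key = cache_key(query, country)
    hit = redis.get(key)
    if hit is not None:
        return json.loads(hit)
    result = opensearch_fn(query)
    # Jittered TTL spreads expiry so hot keys don't stampede together.
    redis.setex(key, random.randint(*SEARCH_CACHE_TTL_RANGE), json.dumps(result))
    return result
```

The short TTL bounds staleness: a slot booked elsewhere can appear available in search results for at most a minute, and the Booking Service's database constraint still rejects the conflict.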

Node group: Search pods run on a dedicated r6i.xlarge node group (memory-optimised) with a workload=search:NoSchedule taint that prevents general workloads from landing there.

HPA bounds: 3–30 replicas, scaling on CPU 70% and memory 80%.


Booking Service

Runtime: EKS pods (canary deployment strategy)
Data store: RDS Aurora PostgreSQL — booking cluster (db.r6g.xlarge writer, 1 reader replica)

Manages the full appointment lifecycle: create, cancel, reschedule. This is the revenue-critical write path and deploys via canary (10% initial traffic weight) to limit blast radius.

Double-booking prevention is enforced at the database level with a unique partial index:

CREATE UNIQUE INDEX idx_no_double_book
  ON bookings (practitioner_id, slot_start)
  WHERE status = 'confirmed';

A Redis pre-check reduces contention under load, but the DB constraint is the definitive gate. See ADR-004.

Booking flow (abbreviated):

  1. Redis pre-check — is the slot likely available?
  2. INSERT INTO bookings — unique index enforces correctness
  3. Publish BookingCreated to MSK Kafka topic booking-events
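The three steps can be sketched as follows. The `redis`, `db`, and `producer` objects, the `slot:` key layout, and the helper names are all illustrative; the load-bearing idea from the docs is that the unique index, not the pre-check, decides correctness:

```python
import json

class SlotTaken(Exception):
    """Raised when the requested slot is (or is likely) already booked."""

def create_booking(redis, db, producer, practitioner_id, slot_start, patient_id):
    # 1. Redis pre-check: cheap rejection of slots that look taken.
    #    False negatives are fine; the DB is the definitive gate.
    if redis.get(f"slot:{practitioner_id}:{slot_start}") == b"taken":
        raise SlotTaken("pre-check: slot already held")

    # 2. INSERT — idx_no_double_book raises on a confirmed duplicate.
    try:
        booking_id = db.insert_booking(practitioner_id, slot_start, patient_id)
    except db.UniqueViolation:
        raise SlotTaken("db: slot already confirmed by another booking")

    # 3. Publish the domain event for downstream consumers.
    producer.send("booking-events", json.dumps({
        "type": "BookingCreated",
        "booking_id": booking_id,
        "practitioner_id": practitioner_id,
        "slot_start": slot_start,
    }).encode())
    return booking_id
```

Note that step 3 after step 2 is not atomic as sketched; a production write path would typically pair the insert and the event (e.g. an outbox table) so a crash between them cannot drop the `BookingCreated` event.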

HPA bounds: 3–20 replicas.


Availability Service

Runtime: EKS pods (canary deployment strategy)
Hot cache: ElastiCache Redis — bitmap per practitioner per day
Cold store: RDS PostgreSQL (canonical schedules + exceptions, stored in the practitioner cluster)

Computes and caches practitioner time slots. The Redis representation is compact: a bitmap of 24 slots per day per practitioner.

Memory footprint: 500K practitioners × 14 days = 7 million keys ≈ 2–4 GB in Redis.
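The bitmap arithmetic can be sketched in a few lines. Assuming 24 hourly slots per day (slot 0 = 00:00), each practitioner-day is 3 bytes of payload; at that size, most of the quoted 2–4 GB is Redis per-key metadata rather than the bitmaps themselves. The key layout and helper names are illustrative:

```python
# One practitioner-day bitmap: 24 slots in 3 bytes. Bit order matches
# Redis SETBIT/GETBIT, where bit 0 is the MSB of the first byte.
SLOTS_PER_DAY = 24

def new_day() -> bytearray:
    return bytearray(SLOTS_PER_DAY // 8)   # 3 bytes, all slots free

def mark_booked(bitmap: bytearray, slot: int) -> None:
    bitmap[slot // 8] |= 1 << (7 - slot % 8)

def is_free(bitmap: bytearray, slot: int) -> bool:
    return not bitmap[slot // 8] & (1 << (7 - slot % 8))
```

In Redis itself these helpers would correspond to `SETBIT key slot 1` and `GETBIT key slot` against a key such as `avail:{practitioner_id}:{date}` (key name illustrative).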

The service consumes BookingCreated and BookingCancelled events from Kafka to keep the bitmap current, then publishes AvailabilityChanged for downstream consumers (Search Service index updates).
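The event-handling step can be sketched as a pure function over the bitmap store. The event field names (`practitioner_id`, `day`, `slot`) and the in-memory `bitmaps` dict standing in for Redis are assumptions; the event types come from the docs:

```python
def apply_booking_event(event: dict, bitmaps: dict, publish) -> None:
    """Flip one availability bit, then emit AvailabilityChanged.

    `bitmaps` maps (practitioner_id, day) -> 3-byte bitmap (24 hourly
    slots); `publish` is any callable taking the outgoing event dict.
    """
    key = (event["practitioner_id"], event["day"])
    bitmap = bitmaps.setdefault(key, bytearray(3))
    slot = event["slot"]
    mask = 1 << (7 - slot % 8)
    if event["type"] == "BookingCreated":
        bitmap[slot // 8] |= mask        # slot becomes busy
    elif event["type"] == "BookingCancelled":
        bitmap[slot // 8] &= ~mask       # slot frees up
    publish({"type": "AvailabilityChanged",
             "practitioner_id": event["practitioner_id"],
             "day": event["day"],
             "slot": slot})
```

Publishing `AvailabilityChanged` unconditionally keeps the consumer simple; a refinement would skip the publish when the bit did not actually change.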


Practitioner Service

Runtime: EKS pods (rolling deployment strategy)
Data store: RDS Aurora PostgreSQL — practitioner cluster (db.r6g.large writer, 1 reader replica)

CRUD for practitioner profiles: specialty, location, working hours, consultation types. Tables are partitioned by country_code to support multi-country isolation at the data tier.

Practitioner profile changes publish to the practitioner-events Kafka topic, which the Search Service consumes to keep OpenSearch indices current.
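Because indices are per country (`practitioners_fr`, `practitioners_de`, …), the consumer's mapping from event to index operation is mechanical. A sketch, assuming illustrative event fields and the bulk-helper action format used by OpenSearch/Elasticsearch Python clients:

```python
def index_request(event: dict) -> dict:
    """Map a practitioner-events message to an OpenSearch bulk index action.

    `country_code`, `practitioner_id`, and `profile` are assumed event
    fields; the per-country index naming comes from the docs.
    """
    return {
        "_op_type": "index",   # idempotent upsert of the whole document
        "_index": f"practitioners_{event['country_code']}",
        "_id": event["practitioner_id"],
        "_source": event["profile"],
    }
```

Routing on `country_code` means a profile change only ever touches its own country's index, which preserves the data-residency boundary described above.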


Patient Service

Runtime: EKS pods (canary deployment strategy)
Data store: RDS Aurora PostgreSQL — patient cluster (db.r6g.large writer, 1 reader replica)

Stores patient profiles and booking history. Deploys via canary because profile data is user-facing and regressions have direct patient impact.


Notification Service

Runtime: EKS pods (rolling deployment strategy)
Triggers: Kafka consumer on booking-events
Delivery: Amazon SES (email), Amazon SNS (SMS/push)

Sends appointment confirmations, reminders, and cancellation notices. Purely event-driven — it has no synchronous API surface. Deploys via rolling update because a brief notification delay has no booking impact.

Events consumed:

| Kafka Event | Notification Sent |
|---|---|
| BookingCreated | Confirmation to patient + practitioner |
| BookingCancelled | Cancellation notice to patient + practitioner |
| BookingRescheduled | Updated confirmation to both parties |

Service Summary

| Service | Strategy | HPA Range | Node Group | Data Store |
|---|---|---|---|---|
| Search | Rolling | 3–30 | search (r6i.xlarge) | OpenSearch + Redis |
| Booking | Canary 10% | 3–20 | general (m6i.xlarge) | RDS booking cluster |
| Availability | Canary 10% | 2–10 | general | Redis + RDS practitioner |
| Practitioner | Rolling | 2–10 | general | RDS practitioner cluster |
| Patient | Canary 10% | 2–10 | general | RDS patient cluster |
| Notification | Rolling | 2–10 | general | — (Kafka consumer only) |