
Let's cut the nonsense.
You've read fifteen articles about microservices. They all show the same cartoon diagrams. Boxes connected by arrows. "Service A talks to Service B." Groundbreaking stuff.
Here's an actual microservice architecture example you can steal. Production-ready patterns. Real code. The brutal truth about when this architecture will save your company—and when it'll sink it.
Why Most Microservice Architecture Examples Are Useless
They show you the happy path. Everything communicates perfectly. No network failures. No data consistency nightmares. No 3 AM pages because someone deployed a breaking change to the user service.
The diagram lied to you. Distributed systems are hard. Accept it or go back to your monolith.
We're building an e-commerce order processing system. Four services. One database per service. Event-driven communication. This microservice architecture example will hurt—but you'll learn.
The System: Order Processing Microservice Architecture Example
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│     API     │─────▶│    Order    │─────▶│  Inventory  │
│   Gateway   │      │   Service   │      │   Service   │
└─────────────┘      └─────────────┘      └─────────────┘
                            │                    │
                            ▼                    ▼
                     ┌─────────────┐      ┌─────────────┐
                     │   Payment   │      │  Shipping   │
                     │   Service   │      │   Service   │
                     └─────────────┘      └─────────────┘
                            │
                            ▼
            ┌─────────────────────────────────┐
            │         Message Broker          │
            │           (RabbitMQ)            │
            └─────────────────────────────────┘
Four services. Each owns its data. Each can be deployed independently. Each can fail independently—which they will.
Service 1: Order Service (The Brain)
This is where orders are born and die. It orchestrates the chaos.
// order-service/src/order.controller.ts
import { Controller, Post, Body, Inject } from '@nestjs/common';
import { ClientProxy } from '@nestjs/microservices';
import { firstValueFrom } from 'rxjs'; // toPromise() is deprecated in RxJS 7+
import { CreateOrderDto } from './dto/create-order.dto';
import { OrderService } from './order.service';

@Controller('orders')
export class OrderController {
  constructor(
    private readonly orderService: OrderService,
    @Inject('INVENTORY_SERVICE') private inventoryClient: ClientProxy,
    @Inject('PAYMENT_SERVICE') private paymentClient: ClientProxy,
  ) {}

  @Post()
  async createOrder(@Body() dto: CreateOrderDto) {
    // Step 1: Reserve inventory (or die trying)
    const reserved = await firstValueFrom(
      this.inventoryClient.send('reserve_inventory', {
        productId: dto.productId,
        quantity: dto.quantity,
      }),
    );

    if (!reserved.success) {
      return { status: 'FAILED', reason: 'OUT_OF_STOCK' };
    }

    // Step 2: Process payment
    const payment = await firstValueFrom(
      this.paymentClient.send('process_payment', {
        amount: dto.amount,
        userId: dto.userId,
        reservationId: reserved.reservationId,
      }),
    );

    if (!payment.success) {
      // Compensating transaction - release the inventory
      await firstValueFrom(
        this.inventoryClient.send('release_inventory', {
          reservationId: reserved.reservationId,
        }),
      );
      return { status: 'FAILED', reason: 'PAYMENT_DECLINED' };
    }

    // Step 3: Create the order
    const order = await this.orderService.create({
      ...dto,
      reservationId: reserved.reservationId,
      paymentId: payment.paymentId,
      status: 'CONFIRMED',
    });

    return { status: 'SUCCESS', orderId: order.id };
  }
}
Notice the compensating transaction. When payment fails, we release inventory. This is the Saga pattern. Forget distributed transactions—they don't scale and they'll lock your database into oblivion.
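Stripped of the NestJS plumbing, the saga shape fits in a few lines. Here's a language-agnostic sketch in Python — a hypothetical runner, not part of the services above — that pairs each step with its compensation and unwinds completed steps in reverse when one fails:

```python
# Minimal saga runner sketch. Each step is an (action, compensation) pair.
# If a step fails, every previously completed step is compensated in
# reverse order, then the failure is surfaced to the caller.

class SagaFailed(Exception):
    pass

def run_saga(steps):
    """steps: list of (action, compensation) pairs. Each action is a
    zero-arg callable returning a result; each compensation receives
    that result (or may be None if the step needs no undo)."""
    completed = []  # (compensation, result) for each finished step
    for action, compensate in steps:
        try:
            result = action()
            completed.append((compensate, result))
        except Exception as exc:
            # Unwind: compensate completed steps, newest first
            for comp, res in reversed(completed):
                if comp is not None:
                    comp(res)
            raise SagaFailed(str(exc)) from exc
    return [res for _, res in completed]
```

Reverse order matters: the last thing you did is the first thing you undo, exactly as the controller releases inventory after payment declines.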
Service 2: Inventory Service (The Gatekeeper)
// inventory-service/internal/handler/inventory.go
package handler

import (
    "context"
    "time"

    "github.com/google/uuid"
    "inventory-service/internal/repository"
)

type InventoryHandler struct {
    repo *repository.InventoryRepo
}

type ReserveRequest struct {
    ProductID string `json:"productId"`
    Quantity  int    `json:"quantity"`
}

type ReserveResponse struct {
    Success       bool   `json:"success"`
    ReservationID string `json:"reservationId,omitempty"`
    Error         string `json:"error,omitempty"`
}

func (h *InventoryHandler) ReserveInventory(ctx context.Context, req ReserveRequest) ReserveResponse {
    // Pessimistic lock. Yes, it's slow. Yes, it's correct.
    // Pick one: fast and wrong, or slow and right.
    tx, err := h.repo.BeginTx(ctx)
    if err != nil {
        return ReserveResponse{Success: false, Error: "TX_FAILED"}
    }
    defer tx.Rollback()

    stock, err := h.repo.GetStockForUpdate(ctx, tx, req.ProductID)
    if err != nil {
        return ReserveResponse{Success: false, Error: "PRODUCT_NOT_FOUND"}
    }

    if stock.Available < req.Quantity {
        return ReserveResponse{Success: false, Error: "INSUFFICIENT_STOCK"}
    }

    reservationID := uuid.New().String()
    err = h.repo.CreateReservation(ctx, tx, repository.Reservation{
        ID:        reservationID,
        ProductID: req.ProductID,
        Quantity:  req.Quantity,
        ExpiresAt: time.Now().Add(15 * time.Minute), // Auto-release after 15 min
        Status:    "PENDING",
    })
    if err != nil {
        return ReserveResponse{Success: false, Error: "RESERVATION_FAILED"}
    }

    err = h.repo.DecrementStock(ctx, tx, req.ProductID, req.Quantity)
    if err != nil {
        return ReserveResponse{Success: false, Error: "STOCK_UPDATE_FAILED"}
    }

    // Commit can fail too. Ignoring its error means lying to the caller.
    if err := tx.Commit(); err != nil {
        return ReserveResponse{Success: false, Error: "COMMIT_FAILED"}
    }

    return ReserveResponse{
        Success:       true,
        ReservationID: reservationID,
    }
}
15-minute reservation expiry. Critical. If the order service crashes mid-flow, we don't hold inventory hostage forever. A background worker reclaims expired reservations. Design for failure.
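What that reclaim worker might look like, as a rough sketch — the repository interface here (`find_expired`, `release`) is hypothetical, not the Go repo from the handler above:

```python
import time

# Sketch of the background worker that reclaims expired reservations.
# The repo interface is a stand-in: find_expired returns PENDING
# reservations past their expiry; release flips one to EXPIRED and adds
# its quantity back to available stock (atomically, in real life).

def reclaim_expired_reservations(repo, now):
    """Return the number of reservations reclaimed in this sweep."""
    reclaimed = 0
    for res in repo.find_expired(status="PENDING", before=now):
        repo.release(res["id"], res["product_id"], res["quantity"])
        reclaimed += 1
    return reclaimed

def run_forever(repo, interval_seconds=60):
    # In production this loop lives in its own process or pod, with
    # metrics and error handling around each sweep.
    while True:
        reclaim_expired_reservations(repo, now=time.time())
        time.sleep(interval_seconds)
```

The sweep interval is a trade-off: shorter means stock returns faster, longer means less load on the reservations table.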
Service 3: Payment Service (The Money Pit)
# payment-service/app/handlers/payment_handler.py
from dataclasses import dataclass
from typing import Optional
import uuid

import structlog

from app.clients.stripe_client import StripeClient
from app.repositories.payment_repo import PaymentRepository
from app.events.publisher import EventPublisher

logger = structlog.get_logger()

@dataclass
class PaymentRequest:
    amount: int  # Always in cents. Always.
    user_id: str
    reservation_id: str
    idempotency_key: Optional[str] = None

@dataclass
class PaymentResponse:
    success: bool
    payment_id: Optional[str] = None
    error: Optional[str] = None

class PaymentHandler:
    def __init__(
        self,
        stripe: StripeClient,
        repo: PaymentRepository,
        publisher: EventPublisher,
    ):
        self.stripe = stripe
        self.repo = repo
        self.publisher = publisher

    async def process_payment(self, req: PaymentRequest) -> PaymentResponse:
        # Idempotency: The internet will retry. Your code must handle it.
        idempotency_key = req.idempotency_key or f"{req.reservation_id}:{req.amount}"

        existing = await self.repo.get_by_idempotency_key(idempotency_key)
        if existing:
            logger.info("duplicate_payment_request", payment_id=existing.id)
            return PaymentResponse(success=True, payment_id=existing.id)

        payment_id = str(uuid.uuid4())

        # Record intent BEFORE calling Stripe
        await self.repo.create_payment(
            id=payment_id,
            user_id=req.user_id,
            amount=req.amount,
            status="PENDING",
            idempotency_key=idempotency_key,
        )

        try:
            stripe_result = await self.stripe.charge(
                amount=req.amount,
                customer_id=req.user_id,
                idempotency_key=idempotency_key,
            )
        except Exception as e:
            logger.error("stripe_charge_failed", error=str(e))
            await self.repo.update_status(payment_id, "FAILED")
            return PaymentResponse(success=False, error="CHARGE_FAILED")

        await self.repo.update_status(payment_id, "COMPLETED")

        # Emit event for other services
        await self.publisher.publish(
            "payment.completed",
            {
                "paymentId": payment_id,
                "reservationId": req.reservation_id,
                "amount": req.amount,
            },
        )

        return PaymentResponse(success=True, payment_id=payment_id)
Idempotency keys are not optional. The network will duplicate your requests. Stripe will time out but still charge the card. Your code will retry. Without idempotency, you'll double-charge customers and spend your weekend issuing refunds.
The Message Broker: Where Services Go to Talk
Stop making synchronous calls for everything. Not every operation needs an immediate response.
# docker-compose.yml (partial)
services:
  rabbitmq:
    image: rabbitmq:3.12-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: byteforth
      RABBITMQ_DEFAULT_PASS: brutal-password-change-this
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    healthcheck:
      test: rabbitmq-diagnostics -q ping
      interval: 10s
      timeout: 5s
      retries: 5
// shipping-service/src/consumers/order.consumer.ts
import { Controller } from '@nestjs/common';
import { EventPattern, Payload } from '@nestjs/microservices';
import { ShippingService } from '../shipping.service';

interface OrderConfirmedEvent {
  orderId: string;
  shippingAddress: string;
  items: unknown[]; // shape owned by the order service's event contract
}

@Controller()
export class OrderConsumer {
  constructor(private shippingService: ShippingService) {}

  @EventPattern('order.confirmed')
  async handleOrderConfirmed(@Payload() data: OrderConfirmedEvent) {
    // This runs async. Order service doesn't wait.
    // If this fails, the message goes to dead letter queue.
    // A human reviews it. The order still exists.
    await this.shippingService.createShipment({
      orderId: data.orderId,
      address: data.shippingAddress,
      items: data.items,
    });
  }
}
Async by default. Sync by exception. The shipping label doesn't need to exist before the customer sees "Order Confirmed." Decouple everything that can be decoupled.
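RabbitMQ does dead-lettering natively (via a dead-letter exchange on the queue), but the control flow is worth seeing spelled out. A hedged sketch — `handler` and `dead_letter` are invented callables, not the consumer above:

```python
# Retry-then-dead-letter control flow. A message gets max_attempts tries;
# if the handler keeps failing, the message goes to the dead-letter sink
# for human review instead of being silently dropped or retried forever.

def consume_with_dlq(message, handler, dead_letter, max_attempts=3):
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            handler(message)
            return "ACKED"          # success: acknowledge the message
        except Exception as exc:
            last_error = exc        # failure: try again (up to the cap)
    dead_letter(message, str(last_error))
    return "DEAD_LETTERED"
```

In a real broker setup, the "retry" is usually a nack-and-requeue with a delay, and the dead-letter sink is just another queue a human (or a replayer) drains.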
Database Per Service: The Non-Negotiable Rule
-- inventory-service/migrations/001_initial.sql
CREATE TABLE products (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    sku VARCHAR(50) UNIQUE NOT NULL,
    available_quantity INT NOT NULL DEFAULT 0,
    reserved_quantity INT NOT NULL DEFAULT 0,
    updated_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE reservations (
    id UUID PRIMARY KEY,
    product_id UUID REFERENCES products(id),
    quantity INT NOT NULL,
    status VARCHAR(20) NOT NULL, -- PENDING, CONFIRMED, RELEASED, EXPIRED
    expires_at TIMESTAMP NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE INDEX idx_reservations_expires ON reservations(expires_at)
    WHERE status = 'PENDING';

-- order-service/migrations/001_initial.sql
CREATE TABLE orders (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID NOT NULL,
    reservation_id UUID NOT NULL, -- Reference, not FK
    payment_id UUID,              -- Reference, not FK
    status VARCHAR(20) NOT NULL,
    total_amount INT NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

-- No foreign keys to other services' tables.
-- You don't own that data. You store a reference.
-- If you need user details, you call the user service or cache it.
No cross-service joins. Ever. The moment you add a foreign key to another service's table, you've built a distributed monolith. Congratulations—you get all the complexity of microservices with none of the benefits.
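What "call the user service or cache it" looks like in practice: cache-aside. You store only the `user_id`, fetch details on demand, and cache them with a TTL. A sketch — `fetch_user` stands in for the cross-service HTTP call, and every name here is illustrative:

```python
import time

# Cache-aside lookup for data another service owns. Locally you keep only
# the reference (user_id); details are fetched on demand and cached with
# a TTL, so stale data is bounded and the user service isn't hammered.

class CachedUserLookup:
    def __init__(self, fetch_user, ttl_seconds=300, clock=time.monotonic):
        self.fetch_user = fetch_user  # cross-service call, e.g. HTTP GET
        self.ttl = ttl_seconds
        self.clock = clock
        self.cache = {}  # user_id -> (expires_at, user)

    def get(self, user_id):
        entry = self.cache.get(user_id)
        if entry and entry[0] > self.clock():
            return entry[1]                     # fresh cache hit
        user = self.fetch_user(user_id)         # miss or expired: refetch
        self.cache[user_id] = (self.clock() + self.ttl, user)
        return user
```

The TTL is the honest admission that cached data from another service is always slightly stale; pick a staleness budget you can live with.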
Handling Failure: The Part Everyone Skips
// order-service/src/resilience/circuit-breaker.ts
import CircuitBreaker from 'opossum';
import { firstValueFrom } from 'rxjs';

const inventoryBreaker = new CircuitBreaker(
  async (request: ReserveRequest) => {
    return await firstValueFrom(
      inventoryClient.send('reserve_inventory', request),
    );
  },
  {
    timeout: 3000,                // 3 seconds max
    errorThresholdPercentage: 50, // Open circuit at 50% failure
    resetTimeout: 30000,          // Try again after 30 seconds
  },
);

inventoryBreaker.on('open', () => {
  logger.warn('inventory_circuit_open', {
    message: 'Inventory service is down. Failing fast.',
  });
  alerting.trigger('INVENTORY_SERVICE_DEGRADED');
});

inventoryBreaker.on('halfOpen', () => {
  logger.info('inventory_circuit_half_open', {
    message: 'Testing inventory service recovery...',
  });
});

// Usage
try {
  const result = await inventoryBreaker.fire(reserveRequest);
} catch (err) {
  if (err.message === 'Breaker is open') {
    return { status: 'SERVICE_UNAVAILABLE', retry_after: 30 };
  }
  throw err;
}
Circuit breakers prevent cascade failures. When inventory service dies, you don't want the order service burning through thread pools waiting for timeouts. Fail fast. Tell the customer. Retry later.
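opossum hides the state machine, but it's small enough to see whole. A minimal sketch (thresholds and timing illustrative, not opossum's actual implementation): closed counts failures, open fails fast, and after the reset timeout a single probe call decides whether to close again.

```python
import time

# Minimal circuit-breaker state machine: CLOSED counts consecutive
# failures; OPEN rejects calls immediately; after reset_timeout one
# probe call is let through, and its outcome re-opens or closes the
# circuit. Illustrative thresholds only.

class BreakerOpen(Exception):
    pass

class SimpleBreaker:
    def __init__(self, failure_threshold=5, reset_timeout=30.0, clock=time.monotonic):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                raise BreakerOpen("failing fast")  # OPEN: reject immediately
            # HALF-OPEN: fall through and allow one probe call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold or self.opened_at is not None:
                self.opened_at = self.clock()      # open (or re-open on failed probe)
            raise
        self.failures = 0
        self.opened_at = None                      # success closes the circuit
        return result
```

Real breakers (opossum included) track a failure *rate* over a rolling window rather than a consecutive count, but the three states and the probe are the whole idea.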
Observability: If You Can't See It, You Can't Fix It
# docker-compose.yml (observability stack)
services:
  jaeger:
    image: jaegertracing/all-in-one:1.53
    ports:
      - "16686:16686"  # UI
      - "4317:4317"    # OTLP gRPC
  prometheus:
    image: prom/prometheus:v2.48.0
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:10.2.0
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=brutal
// Distributed tracing - follow a request across services
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('order-service');

async function createOrder(dto: CreateOrderDto) {
  return tracer.startActiveSpan('createOrder', async (span) => {
    span.setAttribute('user.id', dto.userId);
    span.setAttribute('product.id', dto.productId);
    try {
      // The trace context propagates automatically to downstream services
      const result = await processOrder(dto);
      span.setStatus({ code: SpanStatusCode.OK });
      return result;
    } catch (error) {
      span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
      span.recordException(error);
      throw error;
    } finally {
      span.end();
    }
  });
}
Every request gets a trace ID. That ID flows through every service. When something breaks at 3 AM, you search by trace ID and see the entire journey. Without this, debugging microservices is archaeology.
When NOT to Use This Microservice Architecture Example
Here's the brutal truth nobody tells you:
If your team has fewer than 20 engineers, you probably don't need microservices.
Microservices solve organizational problems, not technical ones. They let multiple teams deploy independently. They let you scale specific components. They let you use different tech stacks.
If you're three developers building an MVP, a monolith will ship faster, debug easier, and cost less.
Use this architecture when:
- Multiple teams need to deploy independently
- You have genuinely different scaling requirements per component
- You need fault isolation (payment can fail without killing inventory)
- You're prepared to invest in infrastructure (CI/CD, observability, service mesh)
Skip it when:
- You're still finding product-market fit
- Your team is small enough to coordinate via Slack
- You don't have dedicated DevOps/Platform engineers
- "Because Netflix does it" is your primary justification
The Deployment Reality
# kubernetes/order-service/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
        - name: order-service
          image: byteforth/order-service:v1.2.3
          ports:
            - containerPort: 3000
          env:
            - name: INVENTORY_SERVICE_URL
              value: "http://inventory-service:3001"
            - name: RABBITMQ_URL
              valueFrom:
                secretKeyRef:
                  name: rabbitmq-credentials
                  key: url
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 3
Health checks are mandatory. Kubernetes needs to know when your service is actually ready to receive traffic, not just when the container started. A service that's "up" but can't connect to its database is worse than a service that's obviously down.
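The liveness/readiness split, reduced to plain functions. The dependency checks here are hypothetical placeholders for "can I actually reach my database and broker right now":

```python
# Liveness answers "is the process alive?" -- if the handler runs at all,
# the answer is yes. Readiness answers "can I serve traffic right now?"
# and must verify real dependencies. On 503, Kubernetes stops routing
# traffic to the pod but does NOT restart it.

def liveness():
    return 200, {"status": "ok"}

def readiness(checks):
    """checks: dict of name -> zero-arg callable returning True if healthy
    (e.g. a DB ping, a broker connection test)."""
    results = {}
    for name, check in checks.items():
        try:
            results[name] = bool(check())
        except Exception:
            results[name] = False  # a crashing check counts as unhealthy
    status = 200 if all(results.values()) else 503
    return status, results
```

Keep the checks cheap: Kubernetes calls these every few seconds, and a readiness probe that does a full table scan is its own outage.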
Final Architecture: The Complete Microservice Architecture Example
┌──────────────────────────────────────────────────────────┐
│                    KUBERNETES CLUSTER                    │
│                                                          │
│       Client                                             │
│         │                                                │
│         ▼                                                │
│  ┌─────────────┐                                         │
│  │   Ingress   │                                         │
│  │   (NGINX)   │                                         │
│  └──────┬──────┘                                         │
│         │                                                │
│         ▼                                                │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│  │     API     │──▶│    Order    │──▶│  Inventory  │     │
│  │   Gateway   │   │   Service   │   │   Service   │     │
│  │   (Kong)    │   │  (x3 pods)  │   │  (x2 pods)  │     │
│  └─────────────┘   └──────┬──────┘   └──────┬──────┘     │
│                           │                 │            │
│         ┌─────────────────┴─────────────────┘            │
│         │                                                │
│         ▼                                                │
│  ┌────────────────────────────────────────────────┐      │
│  │              RabbitMQ (Clustered)              │      │
│  └────────────────────────────────────────────────┘      │
│         │                                                │
│         ├─────────────────┬─────────────────┐            │
│         ▼                 ▼                 ▼            │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐     │
│  │   Payment   │   │  Shipping   │   │Notification │     │
│  │   Service   │   │   Service   │   │   Service   │     │
│  └─────────────┘   └─────────────┘   └─────────────┘     │
│                                                          │
└──────────────────────────────────────────────────────────┘
                             │
       ┌─────────────────────┼─────────────────────┐
       │    OBSERVABILITY    │                     │
       │  ┌──────────┐  ┌────┴─────┐  ┌──────────┐ │
       │  │  Jaeger  │  │Prometheus│  │ Grafana  │ │
       │  └──────────┘  └──────────┘  └──────────┘ │
       └───────────────────────────────────────────┘
What You Actually Learned
This microservice architecture example gave you:
- Saga pattern for distributed transactions without two-phase commit
- Idempotency for safe retries in unreliable networks
- Circuit breakers for graceful degradation
- Event-driven communication for loose coupling
- Database-per-service for true independence
- Distributed tracing for debugging across services
Copy this. Modify it. Ship it.
Or don't—stay with your monolith if it's working. There's no shame in simple systems that actually deliver value.
ByteForth builds systems that don't break at 3 AM. When they do break, we know exactly why. Talk to us if your architecture needs surgery.