Deployment¶
This section covers deploying a FasterAPI application to production — from containers to cloud platforms to bare-metal servers.
Pages¶
| Topic | What you learn |
|---|---|
| Docker | Dockerfile, multi-stage builds, docker-compose |
| systemd Service | Service unit file, auto-start, journald logging |
| Gunicorn + Uvicorn | Multi-process worker pooling, production config |
| HTTPS — Let's Encrypt | Free TLS certificates with Certbot and Nginx |
| Nginx & Traefik | Reverse proxy, TLS termination, load balancing |
| Cloud Services | AWS, GCP, Azure deployment options |
| Kubernetes | Manifests, health checks, rolling updates |
ASGI servers¶
FasterAPI is an ASGI application; you need an ASGI server to run it:
| Server | Notes |
|---|---|
| uvicorn | Recommended. Lightweight, production-ready. |
| hypercorn | Supports HTTP/2 and HTTP/3. |
| daphne | Django Channels' server; ASGI-native. |
| granian | Rust-based; very fast. |
uvicorn (recommended)¶
```bash
pip install "uvicorn[standard]"

# Development
uvicorn main:app --reload

# Production (multiple workers)
uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4
```
Hypercorn¶
Hypercorn supports HTTP/2 with TLS:
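A minimal invocation might look like the following. The certificate and key paths are illustrative; in production they would typically point at files issued by Certbot (see HTTPS — Let's Encrypt):

```bash
pip install hypercorn

# Serve over TLS; ALPN negotiates HTTP/1.1 or HTTP/2 per connection
hypercorn main:app --bind 0.0.0.0:8443 \
    --certfile /etc/ssl/certs/example.pem \
    --keyfile /etc/ssl/private/example.key
```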
Daphne¶
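Daphne takes the same `module:attribute` application path as uvicorn. Note that Daphne runs a single process, so run several instances behind a load balancer for parallelism:

```bash
pip install daphne

# -b binds the address, -p the port; main:app is the path to the ASGI app
daphne -b 0.0.0.0 -p 8000 main:app
```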
Number of workers¶
A rule of thumb for CPU-bound workloads: 2 × CPU cores + 1. For I/O-bound APIs, experiment with higher values.
For Python 3.13 with SubInterpreterPool, a single uvicorn worker can leverage
multiple CPU cores — see Concurrency & Parallelism.
Environment variables¶
Always configure the application through environment variables in production. See Settings & Environment Variables.
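As a minimal sketch of reading configuration at import time (the variable names and defaults here are illustrative, not part of FasterAPI):

```python
import os

# Fall back to safe development defaults when a variable is unset.
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///./dev.db")
WORKERS = int(os.environ.get("WEB_CONCURRENCY", "4"))
DEBUG = os.environ.get("DEBUG", "false").lower() == "true"
```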
Health checks¶
Expose a /health endpoint for load balancers and container orchestrators:
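In FasterAPI this is an ordinary route; since any ASGI server can also serve a bare callable, the framework-agnostic sketch below makes the response shape explicit:

```python
async def app(scope, receive, send):
    """Minimal raw-ASGI app answering /health with a JSON status."""
    assert scope["type"] == "http"
    if scope["path"] == "/health":
        status, body = 200, b'{"status":"ok"}'
    else:
        status, body = 404, b'{"detail":"not found"}'
    await send({
        "type": "http.response.start",
        "status": status,
        "headers": [(b"content-type", b"application/json")],
    })
    await send({"type": "http.response.body", "body": body})
```

Keep the handler cheap and dependency-free: load balancers and orchestrators poll it frequently, and a health check that touches the database can take the whole service out of rotation during a transient database hiccup.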
Next steps¶
- Docker — containerise your app.
- systemd Service — run as a managed Linux service.
- Gunicorn + Uvicorn — multi-worker process pooling.
- HTTPS — Let's Encrypt — free TLS with Certbot and Nginx.
- Nginx & Traefik — reverse proxy and load balancing.