When users stream music, watch videos, or download files from your website, should your application backend (Node.js, Go, Python, PHP) handle these requests? Or should a web server like Nginx serve them directly?
The answer affects your server costs, performance, and scalability. Major platforms like Netflix, Spotify, and Airbnb use Nginx for static content while their application servers handle only business logic. Here’s why.
How Static File Serving Works
Without Nginx (Application Server Handles Everything)
Browser → Nginx (reverse proxy) → Application Backend → Read file → Stream back → Nginx → Browser
Every file request goes through your application code, middleware, and frameworks.
With Nginx Static Serving
Browser → Nginx → Read file from disk → Browser
Nginx reads the file directly from disk and sends it to the browser. Your application backend never executes.
Why Nginx Beats Application Servers for Static Files
1. Designed for High Concurrency
Nginx was built to solve the “C10K problem” — serving 10,000+ simultaneous connections efficiently.
| Feature | Nginx | Node.js/Go/Python/PHP |
|---|---|---|
| Architecture | Event-driven, non-blocking | Thread/process per request |
| Memory per connection | ~2.5 KB | ~8-50 KB (depends on language) |
| 10,000 concurrent files | ~25 MB RAM | ~80-500 MB RAM |
| Concurrency limit | 50,000+ connections | 5,000-10,000 connections |
Example: An e-learning platform streaming video lessons to 5,000 students simultaneously would use 12 MB RAM with Nginx vs. 400+ MB with a Node.js backend.
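The arithmetic behind those figures is straightforward. A minimal sketch, using the per-connection estimates from the table above (2.5 KB for Nginx, 80 KB as a low-end figure for a Node.js backend):

```javascript
// Back-of-envelope memory cost for N concurrent file streams,
// using the per-connection estimates from the comparison table.
const students = 5000;

const nginxKBPerConn = 2.5; // event-driven: tiny per-connection state
const nodeKBPerConn = 80;   // low end of the per-request range plus buffers

const nginxMB = (students * nginxKBPerConn) / 1024;
const nodeMB = (students * nodeKBPerConn) / 1024;

console.log(`Nginx:   ~${nginxMB.toFixed(1)} MB`); // ~12.2 MB
console.log(`Node.js: ~${Math.round(nodeMB)} MB`); // ~391 MB
```

The per-connection numbers are estimates, so treat the output as an order-of-magnitude comparison, not a benchmark.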
2. Zero-Copy File Transfer with sendfile()
This is the game-changing technical difference.
When your application backend serves a file:
Disk → Kernel buffer → Application memory → Kernel socket buffer → Network
The file data is copied multiple times through memory.
When Nginx serves a file with sendfile():
Disk → Kernel buffer → Network (direct transfer)
The file data never enters application memory. The Linux kernel transfers bytes directly from disk to the network socket.
Benefits:
- No memory allocation for file contents
- No CPU time wasted copying bytes
- No garbage collection pressure
- Faster transfer speeds
3. No Application Overhead
Every request through your application backend runs the full middleware stack:
What happens when a user downloads a PDF through your Node.js/Express app:
```javascript
// ALL of this executes for EVERY file download:
app.use(cors());             // Parse headers
app.use(helmet());           // Security checks
app.use(morgan('combined')); // Request logging
app.use(rateLimit());        // Check rate limits
app.use(authentication());   // Verify JWT tokens
app.use(compression());      // GZIP compression
// ... then finally serve the file
```
With Nginx, none of this runs. The file goes directly from disk to browser.
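To make the overhead concrete, here is a toy middleware chain (plain JavaScript, not real Express) that records every handler a request passes through before the file is touched:

```javascript
// Toy middleware chain (NOT real Express) illustrating the overhead:
// every handler executes for every request, even a plain file download.
const middleware = [
  function cors(req)      { req.steps.push('cors'); },
  function helmet(req)    { req.steps.push('helmet'); },
  function logger(req)    { req.steps.push('logger'); },
  function rateLimit(req) { req.steps.push('rateLimit'); },
  function auth(req)      { req.steps.push('auth'); },
];

function handle(url) {
  const req = { url, steps: [] };
  for (const mw of middleware) mw(req); // all of this, just for a PDF
  req.steps.push('sendFile');           // the only step Nginx performs
  return req.steps;
}

console.log(handle('/downloads/manual.pdf'));
```

Every entry in the output except the last is work Nginx would never do.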
4. Better Caching Headers by Default
Nginx handles HTTP caching correctly out of the box:
```nginx
location /downloads/ {
    root /var/www/files;
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
What this does:
- `expires 1y` — browser caches the file for 1 year
- `immutable` — browser never rechecks whether the file changed
- `ETag` and `Last-Modified` — generated automatically from file metadata
- `304 Not Modified` — handled efficiently, without reading the file
Most application frameworks require manual configuration to get caching right, and developers often get it wrong.
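The conditional-GET logic Nginx applies can be sketched in a few lines. This is an illustration of the idea, not the exact bytes Nginx emits (its real ETag is derived from the file's modification time and size):

```javascript
// Sketch of conditional-GET handling: if the client's validator still
// matches, answer 304 with no body and never read the file from disk.
function conditionalGet(reqHeaders, fileMeta) {
  // ETag derived from file metadata (hex mtime-size), mirroring the idea
  const etag = `"${fileMeta.mtimeMs.toString(16)}-${fileMeta.size.toString(16)}"`;

  if (reqHeaders['if-none-match'] === etag) {
    return { status: 304, body: null, etag }; // nothing read from disk
  }
  return { status: 200, body: '<file contents>', etag };
}

const meta = { mtimeMs: 1700000000000, size: 8192 };
const first = conditionalGet({}, meta);                               // 200
const repeat = conditionalGet({ 'if-none-match': first.etag }, meta); // 304
console.log(first.status, repeat.status); // 200 304
```

Getting this wrong in application code (forgetting the validator check, or regenerating ETags inconsistently) is exactly the kind of bug the article warns about.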
5. Efficient Range Requests (Media Seeking)
When a user seeks to 2:30 in a video or audio file:
```
GET /videos/tutorial.mp4
Range: bytes=5242880-
```
Both Nginx and application servers support this, but:
- Nginx uses `sendfile()` with a byte offset (zero-copy)
- Application servers read the range into memory first, then send it
For a 500 MB video file, Nginx uses almost no RAM. Node.js might allocate 10-50 MB just to send that chunk.
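For a sense of what an application server has to do before it can even start sending, here is a minimal Range-header parser (a hypothetical helper; suffix ranges like `bytes=-500` are omitted for brevity). Nginx does the equivalent internally and then hands the offset straight to `sendfile()`:

```javascript
// Minimal Range-header parser: the bookkeeping an app server must do
// before streaming a byte range. Hypothetical helper, for illustration.
function parseRange(rangeHeader, fileSize) {
  const match = /^bytes=(\d*)-(\d*)$/.exec(rangeHeader);
  if (!match || (!match[1] && !match[2])) return null;

  const start = match[1] ? parseInt(match[1], 10) : 0;
  const end = match[2] ? parseInt(match[2], 10) : fileSize - 1;
  if (start > end || end >= fileSize) return null; // 416 in practice

  return {
    start,
    end,
    contentRange: `bytes ${start}-${end}/${fileSize}`, // 206 response header
  };
}

const r = parseRange('bytes=5242880-', 500 * 1024 * 1024);
console.log(r.contentRange); // bytes 5242880-524287999/524288000
```

And parsing is the easy part; the real cost is the streaming code and the buffers behind it.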
Real-World Examples: How Companies Use Nginx
Netflix and Video Streaming
Netflix uses Nginx (via their custom Open Connect CDN) for video delivery while application servers handle:
- User authentication
- Video recommendations
- Playback tracking
- Subtitle generation
Why? Video files are huge (GB per movie). Nginx’s sendfile() allows efficient streaming to millions of concurrent users without excessive RAM usage.
Spotify and Audio Files
Spotify’s infrastructure uses Nginx to serve audio chunks while their backend handles:
- Playlist management
- Song search
- User preferences
- Playback analytics
Why? Audio streaming requires range request support (seeking in songs). Nginx handles this natively with zero application code.
E-Commerce Platforms (Shopify, WooCommerce)
Product images, PDFs (invoices, manuals), and downloadable files are served via Nginx while the application handles:
- Shopping cart logic
- Payment processing
- Inventory management
- Order tracking
Why? A product page might have 20+ images. Loading them through PHP/Ruby would waste application threads that could process checkouts.
SaaS Applications (Slack, Notion, Figma)
User-uploaded files (attachments, exports, designs) are served via Nginx/CDN while the application handles:
- Real-time collaboration
- Access control
- File metadata management
- Search indexing
Why? File downloads shouldn’t block application threads needed for WebSocket connections and API calls.
What Files Should Nginx Serve Directly?
Serve via Nginx (Static Content)
| File Type | Examples | Cache Duration |
|---|---|---|
| Images | .jpg, .png, .svg, .webp | 1 year |
| Audio | .mp3, .m4a, .ogg, .wav | 1 year |
| Video | .mp4, .webm, .mov | 1 year |
| Documents | .pdf, .docx, .xlsx | 1 year |
| Fonts | .woff2, .woff, .ttf | 1 year |
| CSS/JS bundles | Hashed filenames from webpack/vite | 1 year |
| Data files | .json, .xml, .csv (if static) | 1 month |
Rule: If the file doesn’t change based on who’s requesting it, Nginx should serve it.
Serve via Application (Dynamic Content)
| Response Type | Examples | Why |
|---|---|---|
| User-specific data | Profiles, dashboards, personalized feeds | Different per user |
| Authentication | Login, OAuth, session management | Security logic |
| Form processing | Contact forms, file uploads, payments | Validation required |
| Database queries | Search results, filtered data | Computed per request |
| API responses | REST/GraphQL endpoints | Business logic |
Performance Impact: Real Numbers
Small Scale (100-500 concurrent users)
Honestly, you won’t notice much difference. Modern application servers handle static files fine at this scale.
Medium Scale (1,000-10,000 concurrent users)
| Metric | Application Server | Nginx |
|---|---|---|
| RAM for 1,000 concurrent file downloads | 80-400 MB | 2.5 MB |
| CPU overhead per request | Middleware + file I/O | Near zero |
| Max concurrent streams (1 server) | 5,000-10,000 | 50,000+ |
| Application threads freed | 0 (busy with files) | 1,000 (100% for logic) |
Large Scale (100,000+ users)
This is where Nginx shines. By offloading static files:
- Application servers handle 10x more API requests
- You need fewer application server instances (cost savings)
- Nginx can scale independently with cheap static file servers
- Add a CDN (Cloudflare, Fastly) and Nginx only serves the first request per file
Setting Up Nginx for Static Files
Basic Configuration
```nginx
server {
    listen 80;
    server_name example.com;

    # Serve static files directly
    location /images/ {
        root /var/www/static;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location /downloads/ {
        root /var/www/static;
        expires 1y;
        add_header Cache-Control "public, immutable";
        add_header Accept-Ranges bytes;  # Enable range requests
    }

    # Proxy API requests to the application backend
    location /api/ {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```
File Structure
```
/var/www/static/
├── images/
│   ├── products/
│   │   ├── item-001.jpg
│   │   └── item-002.jpg
│   └── logos/
│       └── brand.svg
└── downloads/
    ├── manuals/
    │   └── user-guide.pdf
    └── media/
        └── podcast-ep01.mp3
```
Docker Setup
```yaml
# docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - static_files:/var/www/static:ro  # Read-only access
  app:
    image: node:18
    volumes:
      - static_files:/var/www/static:ro  # Application can still read for dev/fallback
volumes:
  static_files:
```
Adding a CDN (Phase 2)
Once your site grows, add a CDN like Cloudflare (free tier available):
Browser → CloudFlare CDN → Nginx → Disk
(cached globally)
What happens:
- First request: Cloudflare fetches the file from Nginx and caches it globally
- Subsequent requests: served from Cloudflare's edge servers (200+ locations worldwide)
- Your Nginx server serves only the first request per file per CDN location
CDN benefits:
- Faster delivery (files served from nearest location)
- Lower server bandwidth costs
- DDoS protection (Cloudflare/Fastly absorbs traffic spikes)
- Free tier supports millions of requests
The `Cache-Control: immutable` directive we set tells the CDN never to revalidate the file.
Common Mistakes to Avoid
1. Serving Static Files Through Application Code
```javascript
// ❌ BAD: Every image request hits Express
app.get('/images/:filename', (req, res) => {
  res.sendFile(`./static/images/${req.params.filename}`);
});
```

```nginx
# ✅ GOOD: Nginx serves directly
location /images/ {
    root /var/www/static;
    expires 1y;
}
```
2. Missing Cache Headers
Without proper cache headers, browsers re-download files unnecessarily.
```nginx
# ❌ BAD: No caching
location /images/ {
    root /var/www/static;
}

# ✅ GOOD: 1-year cache
location /images/ {
    root /var/www/static;
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```
3. Using Application Server for Range Requests
Streaming platforms must support seeking (byte range requests).
```javascript
// ❌ BAD: Manual range handling in Node.js (complex, inefficient)
app.get('/video/:id', async (req, res) => {
  const range = req.headers.range;
  // 50+ lines of range parsing and streaming code...
});
```

```nginx
# ✅ GOOD: Nginx handles ranges automatically
location /videos/ {
    root /var/www/media;
    add_header Accept-Ranges bytes;
}
```
4. Not Separating Static and Dynamic Routes
```nginx
# ❌ BAD: Everything goes to the application
location / {
    proxy_pass http://localhost:3000;
}

# ✅ GOOD: Static files first, then the application
location /static/ {
    root /var/www;
    expires 1y;
}

location / {
    proxy_pass http://localhost:3000;
}
```
When You DON’T Need Nginx for Static Files
Small Projects (<1,000 users)
If you’re building a side project or MVP, your framework’s built-in static file serving is fine:
- Express has `express.static()`
- Django has `STATIC_ROOT`
- Rails has the asset pipeline
Focus on building features, not premature optimization.
When Files Need Access Control
If downloads require authentication checks:
```javascript
// User must be logged in and own the document
app.get('/documents/:id', authenticate, authorize, (req, res) => {
  res.sendFile(`./private/${req.params.id}.pdf`);
});
```
You need application logic. However, you can still optimize with signed URLs (generated by app, served by Nginx/S3).
Serverless Deployments
If you’re on Vercel, Netlify, or AWS Lambda, they handle static file optimization automatically. No need to configure Nginx.
Key Takeaways
- Nginx uses `sendfile()` (zero-copy) — files transfer from disk to network without entering application memory
- Application servers add overhead — middleware, logging, and memory allocation for every request
- Cache headers matter more than raw speed — `immutable` means files are requested once, then cached forever
- Separate concerns — Nginx for static content, application for business logic
- Scale independently — static file servers are cheap and easy to scale
- Add a CDN later — Cloudflare/Fastly caches files globally; Nginx serves only first requests
- Start simple — for small projects, framework defaults are fine; optimize when you have traffic