Serverless and Edge Computing: A Practical Guide

Serverless and edge computing have transformed how we deploy and scale web applications. Instead of managing servers, you write functions that automatically scale from zero to millions of users.

Edge computing takes this further by running code geographically close to users for minimal latency. Let’s break down how these technologies work and when you’d actually want to use them.

What is Serverless?

Serverless doesn’t mean “no servers”; it means you don’t manage them. The provider handles infrastructure, scaling, and maintenance. You just write “functions”.

The functions auto-scale and you only pay for execution time. So what’s the tradeoff? There are several. The first is cold starts: the first request after idle time is slower because the provider has to spin up a container.

Serverless platforms are also sticky. Each provider’s APIs, triggers, and deployment tooling differ, so migrating to another platform is rarely straightforward — classic vendor lock-in.

They are stateless: each invocation is independent and retains nothing in memory between invocations, so any state has to live in an external store.

In some cases (especially edge runtimes), they don’t run full Node.js, so code behaves differently in production than it does locally, which complicates development and testing.

Each request is handled by a NEW function instance, so a site that gets a lot of traffic and makes a lot of requests can rack up surprisingly large hosting bills — or leave you playing the pauper on social media.
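The statelessness pitfall is easiest to see in code. This hypothetical handler (the shape is illustrative; real signatures vary by provider) counts requests in a module-level variable — which works on a long-lived server but silently resets whenever the platform spins up a fresh instance:

```typescript
// ANTI-PATTERN: module-level state in a serverless function.
// Each cold start creates a fresh instance, so this counter resets
// unpredictably; real counters must live in an external store.
let requestCount = 0;

// Hypothetical handler shape; actual signatures vary by provider.
function handler(): { statusCode: number; body: string } {
  requestCount += 1; // only counts requests hitting *this* instance
  return { statusCode: 200, body: `request #${requestCount}` };
}

// Simulating two invocations on the same warm instance:
console.log(handler().body); // "request #1"
console.log(handler().body); // "request #2"
// After a cold start, the count would begin at 1 again.
```

On a warm instance the counter looks correct, which is exactly why this bug is easy to ship and hard to reproduce locally.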

Traditional vs. Serverless vs. Edge

Think of it this way:

  • Traditional servers are always running, always costing you money, and you handle all the scaling yourself. Great for predictable, high-traffic workloads. Lots of different options for hosting and scaling.
  • Serverless (AWS Lambda, Vercel Functions, GCP Functions) spins up containers on demand and kills them when idle. Auto-scales from zero to infinity. Cold starts around 100-500ms.
  • Edge (Cloudflare Workers, Vercel Edge) uses V8 Isolates instead of containers, running your code in 200+ locations worldwide. Cold starts under 1ms.
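A minimal edge-style handler looks like this. Cloudflare Workers use the module syntax `export default { fetch(request) { … } }`; the sketch below is a plain function over the Web-standard `Request`/`Response` APIs so it runs anywhere those globals exist (e.g. Node 18+):

```typescript
// Minimal edge-style request handler using Web-standard Request/Response.
// In a real Cloudflare Worker this would be the `fetch` method of the
// default export, executed in the location closest to the user.
function handleRequest(request: Request): Response {
  const url = new URL(request.url);
  if (url.pathname === "/hello") {
    return new Response("hello from the edge", {
      status: 200,
      headers: { "content-type": "text/plain" },
    });
  }
  return new Response("not found", { status: 404 });
}
```

Because V8 Isolates share a runtime instead of booting a container per function, this code is live in milliseconds — hence the sub-1ms cold starts above.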

Cost Projections

Here’s how the costs break down at different scales:

| Requests / Month | ~RPS (Avg) | AWS Lambda | Cloudflare Workers | VPS / K8s Cluster | Winner |
| --- | --- | --- | --- | --- | --- |
| 1 Million | 0.4 | $0.00 (Free Tier) | $0.00 (Free Tier) | $40–$60 (Min HA Setup) | Serverless |
| 10 Million | 4.0 | ~$12 | ~$5 | $40–$100 | Serverless |
| 100 Million | 40 | ~$120 | ~$35 | $80–$150 | Tie / Workers |
| 500 Million | 200 | ~$600 | ~$155 | $150–$300 | VPS / Workers |
| 1 Billion | 400 | ~$1,200+ | ~$305 | $200–$400 | VPS / EC2 |
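As a sanity check on figures like these, a Lambda bill can be roughly estimated from its two published meters: about $0.20 per million requests plus about $0.0000166667 per GB-second of compute (approximate x86 rates at the time of writing). The memory size and average duration below are assumptions, and the free tier is ignored:

```typescript
// Rough AWS Lambda cost estimate: request charge + compute (GB-second) charge.
// Approximate x86 rates; free tier ignored for simplicity.
const PRICE_PER_MILLION_REQUESTS = 0.2; // USD
const PRICE_PER_GB_SECOND = 0.0000166667; // USD

function estimateLambdaCost(
  requestsPerMonth: number,
  avgDurationMs: number, // assumption: average billed duration
  memoryMB: number // assumption: configured memory size
): number {
  const requestCharge =
    (requestsPerMonth / 1_000_000) * PRICE_PER_MILLION_REQUESTS;
  const gbSeconds =
    requestsPerMonth * (avgDurationMs / 1000) * (memoryMB / 1024);
  return requestCharge + gbSeconds * PRICE_PER_GB_SECOND;
}

// e.g. 10M requests/month at 150 ms average on 512 MB:
console.log(estimateLambdaCost(10_000_000, 150, 512).toFixed(2)); // "14.50"
```

Note that the compute term dominates: doubling memory or duration roughly doubles the bill, while the per-request charge stays small.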

The Hub and Spoke Pattern

Also called the citadel pattern, this is where serverless and traditional infrastructure stop competing and start complementing each other. The idea is simple: keep a central hub (your main application running on containers or a VPS) and offload specific tasks to serverless “spokes” at the edge.

Your core API, database connections, and stateful logic stay on traditional infrastructure where they belong. But image resizing, auth token validation, A/B testing, geo-routing and rate limiting all move to edge functions that run close to the user.
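As a sketch of one such “spoke”, here’s a hypothetical edge function that rejects requests missing a bearer token before they ever reach the origin hub. The origin URL is a placeholder and the check is deliberately shallow — real validation would verify the token, not just its shape:

```typescript
// Hypothetical edge "spoke": screen the auth header at the edge and
// only forward plausible requests to the origin "hub".
const ORIGIN = "https://origin.example.com"; // placeholder hub URL

async function edgeAuthGate(request: Request): Promise<Response> {
  const auth = request.headers.get("authorization") ?? "";
  if (!auth.startsWith("Bearer ")) {
    // Rejected at the edge: the origin never sees this request.
    return new Response("unauthorized", { status: 401 });
  }
  // Forward to the hub, preserving method, headers, and body.
  const url = new URL(request.url);
  return fetch(new Request(ORIGIN + url.pathname, request));
}
```

The win is that junk traffic is absorbed in 200+ edge locations instead of consuming capacity on your central infrastructure.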

When to Use Serverless

  • Unpredictable or spiky traffic — APIs that go from 0 to 10,000 requests in minutes (webhooks, event-driven workflows)
  • Lightweight, stateless tasks — image processing, PDF generation, sending emails, data transformation
  • Low-traffic side projects — anything that sits idle most of the time and you don’t want to pay for an always-on server… and you don’t know how to set up a Coolify server.
  • Edge logic — geolocation routing, header manipulation, request validation before it hits your origin
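The “edge logic” bullet can be sketched as a tiny geolocation router. On Cloudflare Workers the visitor’s country is exposed as `request.cf.country`; since that property is platform-specific, this sketch takes the country code as a parameter, and the origin URLs are placeholders:

```typescript
// Sketch of edge geolocation routing: pick a regional origin per visitor.
// On Cloudflare Workers the two-letter code comes from `request.cf.country`;
// it is passed in here so the sketch stays platform-agnostic.
const REGIONAL_ORIGINS: Record<string, string> = {
  DE: "https://eu.example.com", // placeholder origins
  FR: "https://eu.example.com",
  JP: "https://ap.example.com",
};
const DEFAULT_ORIGIN = "https://us.example.com";

function pickOrigin(countryCode: string): string {
  return REGIONAL_ORIGINS[countryCode] ?? DEFAULT_ORIGIN;
}

console.log(pickOrigin("DE")); // "https://eu.example.com"
console.log(pickOrigin("BR")); // "https://us.example.com"
```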

When to Use Containers / VPS

  • Sustained high traffic — once you’re consistently above ~100M requests/month, a VPS is cheaper (see the table above)
  • Stateful workloads — WebSocket connections, long-running processes, anything that needs to hold state between requests
  • Database-heavy applications — connection pooling and persistent connections don’t play well with serverless cold starts
  • Complex applications — monoliths or microservices that need shared memory, background workers, or cron jobs
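The database point is worth making concrete. A long-lived server creates one connection pool at startup and every request reuses it; per-invocation serverless functions can’t do this reliably, so each cold start opens fresh connections and can exhaust the database’s limit under load. A sketch of the long-lived pattern (the pool here is a stand-in, not a real driver):

```typescript
// Sketch: a long-lived process creates one shared pool at startup and
// reuses it for every request. (Stand-in pool, not a real DB driver.)
class ConnectionPool {
  private available: number;
  constructor(public readonly size: number) {
    this.available = size; // connections opened once, at startup
  }
  acquire(): boolean {
    if (this.available === 0) return false;
    this.available -= 1;
    return true;
  }
  release(): void {
    this.available += 1;
  }
}

// Created once for the lifetime of the process...
const pool = new ConnectionPool(10);

// ...and reused by every request handler. In serverless, each cold start
// would build a fresh pool, multiplying open connections under load.
function handleQuery(): string {
  if (!pool.acquire()) return "busy";
  try {
    return "ok"; // stand-in for running a query
  } finally {
    pool.release();
  }
}
```

This is also why services like RDS Proxy and serverless-friendly drivers exist — they move the pooling outside the function.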

The Hybrid Approach

The best architectures often use both. Which parts go where depends on your specific use case and requirements — the team, the budget, and the complexity of your application.

Knowing these tradeoffs is the difference between a seasoned developer and a junior. Make your decisions based on your needs and constraints.

Good luck and godspeed!

/ DevOps / Development / Serverless / Cloud