
December 07, 2025
By Kokil Thapa | Last reviewed: April 2026
Managing servers is the part of Laravel deployment that nobody enjoys — scaling EC2 instances during traffic spikes, configuring load balancers, patching operating systems, and monitoring resource usage at 2 AM when something breaks. Serverless architecture eliminates all of this. As a Laravel developer in Nepal who has deployed production APIs on both traditional VPS and serverless infrastructure, I can confirm that AWS Lambda with Laravel Vapor changes how you think about deployment entirely. This guide covers the two main paths to serverless Laravel — Vapor and Bref — along with cold-start optimization, database strategies, queue handling, and the trade-offs you need to understand before going serverless in 2026.
Quick Answer — What Is Serverless Laravel?
Serverless Laravel runs your application on AWS Lambda instead of traditional servers. Each HTTP request triggers a Lambda function that executes your Laravel code and returns a response. You pay only for actual compute time (no idle server costs), and Lambda auto-scales from zero to thousands of concurrent requests. Laravel Vapor ($39/month) is the official deployment platform; Bref is an open-source alternative for manual AWS Lambda deployment.
Why Should Laravel Developers Consider Serverless in 2026?
Traditional VPS-based Laravel deployments work well for predictable traffic, but they struggle with:
- Traffic spikes — a viral blog post or marketing campaign overwhelms a fixed-capacity server
- Idle cost — you pay for server resources 24/7 even when traffic is near zero at night
- Scaling complexity — horizontal scaling requires load balancers, auto-scaling groups, and health checks
- Server maintenance — OS patches, security updates, PHP version upgrades, and disk space management
Serverless eliminates these problems by running your code on-demand, scaling automatically, and billing per-millisecond of execution time.
When Serverless Makes Sense
- API-driven applications with variable or unpredictable traffic (proper API rate limiting becomes critical at scale)
- Multi-tenant SaaS in Laravel platforms that need auto-scaling without infrastructure management
- Microservices and webhook handlers with bursty request patterns
- Startups that want to minimize infrastructure cost during early growth
- Applications that need zero-downtime deployments
When Serverless Does NOT Make Sense
- Applications with consistent, high-volume traffic (traditional servers are cheaper at scale)
- Long-running processes exceeding Lambda's 15-minute timeout
- Applications requiring persistent WebSocket connections
- Workloads with heavy file system operations (Lambda has limited ephemeral storage)
What Is Laravel Vapor and How Does It Work?
Laravel Vapor is the official serverless deployment platform built by the Laravel team. It deploys your Laravel application to AWS Lambda with zero server configuration.
How Vapor Works
- You write standard Laravel code — no framework changes required
- Vapor packages your application and deploys it to AWS Lambda
- API Gateway routes HTTP requests to your Lambda function
- Lambda executes your Laravel application and returns the response
- SQS handles queued jobs, S3 handles file storage, ElastiCache provides Redis
Vapor Pricing
Vapor itself costs $39/month (team plan). AWS costs depend on usage — a typical Laravel API serving 100,000 requests/day costs approximately $15–$30/month in Lambda compute, plus database and caching costs. Compared to a $50–$100/month VPS, the total cost is similar for moderate traffic but scales automatically for spikes.
Getting Started with Vapor
```shell
# Install Vapor CLI
composer require laravel/vapor-cli --dev

# Initialize Vapor in your project
php artisan vapor:install

# Deploy to staging
vapor deploy staging

# Deploy to production
vapor deploy production
```

Vapor Configuration (vapor.yml)

```yaml
id: 12345
name: my-saas-api
environments:
  production:
    memory: 1024
    cli-memory: 512
    runtime: php-8.3:al2
    build:
      - 'composer install --no-dev'
    database: my-aurora-cluster
    cache: my-redis-cluster
    queues:
      - default
      - notifications
```

How Do You Deploy Laravel on AWS Lambda with Bref?
Bref is an open-source alternative to Vapor that gives you full control over your AWS Lambda deployment using the Serverless Framework.
When to Choose Bref Over Vapor
- You want full control over your AWS infrastructure
- You do not want to pay Vapor's monthly subscription
- You need custom Lambda layers or runtimes
- You are deploying non-Laravel PHP applications alongside Laravel
Basic Bref Setup
```shell
# Install Bref and the Laravel bridge
composer require bref/bref bref/laravel-bridge
```

```yaml
# serverless.yml configuration
service: my-laravel-api

provider:
  name: aws
  region: ap-southeast-1
  runtime: provided.al2
  environment:
    APP_ENV: production

functions:
  web:
    handler: Bref\LaravelBridge\Http\OctaneHandler
    timeout: 28
    layers:
      - ${bref:layer.php-83}
    events:
      - httpApi: '*'
  artisan:
    handler: artisan
    timeout: 120
    layers:
      - ${bref:layer.php-83}
      - ${bref:layer.console}
```

```shell
# Deploy
npx serverless deploy
```

How Do You Handle Cold Starts in Serverless Laravel?
Cold starts are the biggest performance concern with serverless Laravel. A cold start occurs when Lambda creates a new execution environment — loading PHP, bootstrapping Laravel, and establishing database connections — before handling the first request.
Typical Cold Start Times
| Configuration | Cold Start Time |
|---|---|
| Laravel + Vapor (1024MB) | 200–500ms |
| Laravel + Bref (1024MB) | 300–800ms |
| Laravel + Octane on Lambda | 150–400ms (subsequent: <5ms) |
Cold Start Optimization Strategies
- Increase memory allocation — Lambda allocates CPU proportionally to memory. 1024MB or higher reduces cold start time significantly
- Use provisioned concurrency — keeps Lambda instances warm for latency-sensitive endpoints ($0.015/GB-hour)
- Optimize autoloader — run `composer install --optimize-autoloader --no-dev` in your build step
- Minimize dependencies — every Composer package adds to bootstrap time
- Use Laravel Octane — keeps the application bootstrapped in memory between requests, effectively eliminating cold starts after the first request
- Cache configuration and routes — `php artisan config:cache && php artisan route:cache`
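For the provisioned-concurrency option, a single AWS CLI call reserves warm instances on a published alias (the function name and alias below are hypothetical; provisioned concurrency attaches to a version or alias, not to `$LATEST`):

```shell
# Keep 5 execution environments initialized for the "production" alias
# (function name and alias are placeholders for your own deployment)
aws lambda put-provisioned-concurrency-config \
  --function-name my-laravel-web \
  --qualifier production \
  --provisioned-concurrent-executions 5
```

Note that this command requires AWS credentials and a published version, and the $0.015/GB-hour charge accrues whether or not traffic arrives.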
What Database Strategy Works Best with Serverless Laravel?
Traditional database connections do not work well with Lambda because each Lambda instance opens its own connection — and Lambda can scale to hundreds of concurrent instances, exhausting your database connection limit.
RDS Proxy — The Standard Solution
RDS Proxy sits between Lambda and your MySQL/PostgreSQL database, pooling and reusing connections. This is essential for any serverless Laravel application connecting to a relational database.
```shell
# .env for Vapor/Lambda
DB_HOST=my-rds-proxy.proxy-xxxx.ap-southeast-1.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=my_saas_db
```

Aurora Serverless
Aurora Serverless scales database capacity automatically based on load. Combined with RDS Proxy, it provides a fully serverless database layer that matches Lambda's auto-scaling behavior — no capacity planning required. If you are running a multi-tenant application, choosing the right multi-tenant database architecture before going serverless is critical to avoid connection pooling issues at scale.
DynamoDB for High-Throughput Workloads
For specific use cases — session storage, activity logs, real-time counters — DynamoDB provides single-digit millisecond reads at any scale without connection limits.
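If DynamoDB is only handling sessions and cache, Laravel's built-in `dynamodb` drivers avoid relational connections entirely. A minimal `.env` sketch, assuming a pre-created table (the table name and region are placeholders):

```shell
# .env — route sessions and cache through DynamoDB (no connection pool needed)
SESSION_DRIVER=dynamodb
CACHE_DRIVER=dynamodb
DYNAMODB_CACHE_TABLE=laravel-cache
AWS_DEFAULT_REGION=ap-southeast-1
```

The table itself must exist before deployment; neither Vapor nor Bref creates it for you.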
How Do Queues and Caching Work in Serverless Laravel?
Queues — SQS Integration
Both Vapor and Bref integrate with Amazon SQS for queue processing. Vapor handles SQS configuration automatically; with Bref, you configure SQS as a Lambda event source.
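With Bref, a queue worker is just another function with an SQS event source in serverless.yml. A sketch assuming the php-83 layer and a placeholder queue ARN; the `QueueHandler` class ships with bref/laravel-bridge:

```yaml
functions:
  worker:
    handler: Bref\LaravelBridge\Queue\QueueHandler
    timeout: 60
    layers:
      - ${bref:layer.php-83}
    events:
      - sqs:
          # ARN is a placeholder — substitute your account ID and queue name
          arn: arn:aws:sqs:ap-southeast-1:123456789012:my-laravel-queue
          batchSize: 1
```

Lambda polls the queue on your behalf, so you never run `php artisan queue:work` anywhere.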
```shell
# Queue configuration for serverless
QUEUE_CONNECTION=sqs
SQS_QUEUE=my-laravel-queue
SQS_REGION=ap-southeast-1
```

Caching — ElastiCache Redis
Redis via ElastiCache is the standard caching layer for serverless Laravel. Place your ElastiCache cluster in the same VPC as your Lambda functions for lowest latency.
```shell
CACHE_DRIVER=redis
REDIS_HOST=my-elasticache-cluster.xxxx.cache.amazonaws.com
SESSION_DRIVER=redis
```

File Storage — S3
Lambda has limited ephemeral storage (512MB–10GB). All persistent file storage must use S3. Laravel's filesystem abstraction makes this seamless:
```shell
FILESYSTEM_DISK=s3
AWS_BUCKET=my-laravel-uploads
```

Serverless vs Traditional Deployment — Cost and Performance Comparison
| Factor | Serverless (Lambda) | Traditional (VPS/EC2) |
|---|---|---|
| Scaling | Automatic, instant | Manual or auto-scaling groups |
| Idle Cost | Zero (pay per request) | Full server cost 24/7 |
| Cold Starts | 200–800ms initial latency | None |
| Max Request Time | 15 minutes (Lambda limit) | Unlimited |
| Server Maintenance | None | OS patches, security, PHP upgrades |
| Deployment | Zero-downtime, atomic | Requires rolling deploy setup |
| Cost at 100K req/day | ~$20–$40/month (compute only) | ~$50–$100/month (VPS) |
| Cost at 1M req/day | ~$150–$300/month | ~$100–$200/month |
Serverless is cheaper for variable and low-to-moderate traffic. Traditional servers become more cost-effective at consistently high traffic volumes. For more on Laravel performance optimization, see our dedicated speed optimization guide.
If you are considering migrating your Laravel application to serverless or need help with API development on AWS, get in touch for a consultation on the right deployment strategy for your use case.

