Kokil Thapa - Professional Web Developer in Nepal
Freelance Web Developer in Nepal with 15+ Years of Experience

Kokil Thapa is an experienced full-stack web developer focused on building fast, secure, and scalable web applications. He helps businesses and individuals create SEO-friendly, user-focused digital platforms designed for long-term growth.

Serverless Laravel 2026 — Deploy Scalable APIs on AWS Lambda and Vapor

By Kokil Thapa | Last reviewed: April 2026

Managing servers is the part of Laravel deployment that nobody enjoys — scaling EC2 instances during traffic spikes, configuring load balancers, patching operating systems, and monitoring resource usage at 2 AM when something breaks. Serverless architecture eliminates all of this. As a Laravel developer in Nepal who has deployed production APIs on both traditional VPS and serverless infrastructure, I can confirm that AWS Lambda with Laravel Vapor changes how you think about deployment entirely. This guide covers the two main paths to serverless Laravel — Vapor and Bref — along with cold-start optimization, database strategies, queue handling, and the trade-offs you need to understand before going serverless in 2026.

Quick Answer — What Is Serverless Laravel?

Serverless Laravel runs your application on AWS Lambda instead of traditional servers. Each HTTP request triggers a Lambda function that executes your Laravel code and returns a response. You pay only for actual compute time (no idle server costs), and Lambda auto-scales from zero to thousands of concurrent requests. Laravel Vapor ($39/month) is the official deployment platform; Bref is an open-source alternative for manual AWS Lambda deployment.

Why Should Laravel Developers Consider Serverless in 2026?

Traditional VPS-based Laravel deployments work well for predictable traffic, but they struggle with:

  • Traffic spikes — a viral blog post or marketing campaign overwhelms a fixed-capacity server
  • Idle cost — you pay for server resources 24/7 even when traffic is near zero at night
  • Scaling complexity — horizontal scaling requires load balancers, auto-scaling groups, and health checks
  • Server maintenance — OS patches, security updates, PHP version upgrades, and disk space management

Serverless eliminates these problems by running your code on-demand, scaling automatically, and billing per millisecond of execution time.

When Serverless Makes Sense

  • API-driven applications with variable or unpredictable traffic (proper API rate limiting becomes critical at scale)
  • Multi-tenant SaaS platforms in Laravel that need auto-scaling without infrastructure management
  • Microservices and webhook handlers with bursty request patterns
  • Startups that want to minimize infrastructure cost during early growth
  • Applications that need zero-downtime deployments

When Serverless Does NOT Make Sense

  • Applications with consistent, high-volume traffic (traditional servers are cheaper at scale)
  • Long-running processes exceeding Lambda's 15-minute timeout
  • Applications requiring persistent WebSocket connections
  • Workloads with heavy file system operations (Lambda has limited ephemeral storage)

What Is Laravel Vapor and How Does It Work?

Laravel Vapor is the official serverless deployment platform built by the Laravel team. It deploys your Laravel application to AWS Lambda with zero server configuration.

How Vapor Works

  1. You write standard Laravel code — no framework changes required
  2. Vapor packages your application and deploys it to AWS Lambda
  3. API Gateway routes HTTP requests to your Lambda function
  4. Lambda executes your Laravel application and returns the response
  5. SQS handles queued jobs, S3 handles file storage, ElastiCache provides Redis

Vapor Pricing

Vapor itself costs $39/month (team plan). AWS costs depend on usage — a typical Laravel API serving 100,000 requests/day costs approximately $15–$30/month in Lambda compute, plus database and caching costs. Compared to a $50–$100/month VPS, the total cost is similar for moderate traffic but scales automatically for spikes.
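To sanity-check those numbers, here is a back-of-the-envelope Lambda cost estimate. All figures are assumptions — 1024MB memory, 300ms average execution time, and indicative Lambda pricing of $0.0000166667 per GB-second plus $0.20 per million requests — so verify against the AWS pricing page for your region:

```python
# Back-of-the-envelope Lambda cost estimate (assumed figures, not a quote)
requests_per_month = 100_000 * 30          # 100K requests/day
memory_gb = 1.0                            # 1024MB allocation
avg_duration_s = 0.3                       # 300ms average execution time
price_per_gb_second = 0.0000166667         # indicative Lambda compute price
price_per_million_requests = 0.20          # indicative request price

compute_cost = requests_per_month * memory_gb * avg_duration_s * price_per_gb_second
request_cost = (requests_per_month / 1_000_000) * price_per_million_requests

print(f"${compute_cost + request_cost:.2f}/month")  # roughly $15.60/month before free tier
```

Database, caching, API Gateway, and data-transfer charges come on top of this, which is why real-world totals land in the $15–$30 range quoted above.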

Getting Started with Vapor

# Install the Vapor CLI
composer require laravel/vapor-cli --dev

# Initialize Vapor in your project (creates vapor.yml)
vapor init

# Deploy to staging
vapor deploy staging

# Deploy to production
vapor deploy production

Vapor Configuration (vapor.yml)

id: 12345
name: my-saas-api
environments:
    production:
        memory: 1024
        cli-memory: 512
        runtime: php-8.3:al2
        build:
            - 'composer install --no-dev'
        database: my-aurora-cluster
        cache: my-redis-cluster
        queues:
            - default
            - notifications

How Do You Deploy Laravel on AWS Lambda with Bref?

Bref is an open-source alternative to Vapor that gives you full control over your AWS Lambda deployment using the Serverless Framework.

When to Choose Bref Over Vapor

  • You want full control over your AWS infrastructure
  • You do not want to pay Vapor's monthly subscription
  • You need custom Lambda layers or runtimes
  • You are deploying non-Laravel PHP applications alongside Laravel

Basic Bref Setup

# Install Bref
composer require bref/bref bref/laravel-bridge

# serverless.yml configuration
service: my-laravel-api

provider:
    name: aws
    region: ap-southeast-1
    runtime: provided.al2
    environment:
        APP_ENV: production

functions:
    web:
        handler: Bref\LaravelBridge\Http\OctaneHandler
        timeout: 28
        layers:
            - ${bref-extra:php-83}
        events:
            - httpApi: '*'
    artisan:
        handler: artisan
        timeout: 120
        layers:
            - ${bref-extra:php-83}
            - ${bref-extra:console}

# Deploy
npx serverless deploy

How Do You Handle Cold Starts in Serverless Laravel?

Cold starts are the biggest performance concern with serverless Laravel. A cold start occurs when Lambda creates a new execution environment — loading PHP, bootstrapping Laravel, and establishing database connections — before handling the first request.

Typical Cold Start Times

Configuration              | Cold Start Time
Laravel + Vapor (1024MB)   | 200–500ms
Laravel + Bref (1024MB)    | 300–800ms
Laravel + Octane on Lambda | 150–400ms (subsequent: <5ms)

Cold Start Optimization Strategies

  • Increase memory allocation — Lambda allocates CPU proportionally to memory. 1024MB or higher reduces cold start time significantly
  • Use provisioned concurrency — keeps Lambda instances warm for latency-sensitive endpoints ($0.015/GB-hour)
  • Optimize autoloader — run composer install --optimize-autoloader --no-dev in your build step
  • Minimize dependencies — every Composer package adds to bootstrap time
  • Use Laravel Octane — keeps the application bootstrapped in memory between requests, effectively eliminating cold starts after the first request
  • Cache configuration and routes — run php artisan config:cache && php artisan route:cache as part of your build
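In Vapor, several of these knobs live in vapor.yml. The memory and warm keys are real Vapor options (warm pre-provisions containers so requests skip the cold start); the specific values below are illustrative, so tune them against your own latency and cost measurements:

```yaml
# Illustrative vapor.yml excerpt for cold-start tuning
environments:
    production:
        memory: 1536        # more memory also means more CPU, so faster cold starts
        warm: 10            # keep 10 containers pre-warmed (provisioned concurrency)
        build:
            - 'composer install --no-dev --optimize-autoloader'
```

With Bref, the equivalent is configuring provisioned concurrency on the Lambda function through the Serverless Framework or the AWS console.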

What Database Strategy Works Best with Serverless Laravel?

Traditional database connections do not work well with Lambda because each Lambda instance opens its own connection — and Lambda can scale to hundreds of concurrent instances, exhausting your database connection limit.

RDS Proxy — The Standard Solution

RDS Proxy sits between Lambda and your MySQL/PostgreSQL database, pooling and reusing connections. This is essential for any serverless Laravel application connecting to a relational database.

# .env for Vapor/Lambda
DB_HOST=my-rds-proxy.proxy-xxxx.ap-southeast-1.rds.amazonaws.com
DB_PORT=3306
DB_DATABASE=my_saas_db

Aurora Serverless

Aurora Serverless scales database capacity automatically based on load. Combined with RDS Proxy, it provides a fully serverless database layer that matches Lambda's auto-scaling behavior — no capacity planning required. If you are running a multi-tenant application, choosing the right multi-tenant database architecture before going serverless is critical to avoid connection pooling issues at scale.

DynamoDB for High-Throughput Workloads

For specific use cases — session storage, activity logs, real-time counters — DynamoDB provides single-digit millisecond reads at any scale without connection limits.
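Laravel ships with DynamoDB drivers for both cache and sessions, so adopting it is mostly configuration. A sketch of the relevant .env entries (table name and region are illustrative, and the DynamoDB table must already exist with the key schema Laravel expects):

```ini
; Illustrative .env excerpt: route cache and sessions to DynamoDB
CACHE_DRIVER=dynamodb
SESSION_DRIVER=dynamodb
DYNAMODB_CACHE_TABLE=laravel_cache
AWS_DEFAULT_REGION=ap-southeast-1
```

This removes the VPC and connection-limit concerns entirely, since DynamoDB is accessed over HTTPS rather than pooled connections.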

How Do Queues and Caching Work in Serverless Laravel?

Queues — SQS Integration

Both Vapor and Bref integrate with Amazon SQS for queue processing. Vapor handles SQS configuration automatically; with Bref, you configure SQS as a Lambda event source.

# Queue configuration for serverless
QUEUE_CONNECTION=sqs
SQS_QUEUE=my-laravel-queue
SQS_REGION=ap-southeast-1
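With Bref, the queue worker is itself a Lambda function triggered by SQS. A minimal sketch for serverless.yml — the handler class comes from bref/laravel-bridge, so verify the exact class name for your version, and substitute your real queue ARN (the one below is a placeholder):

```yaml
# Hypothetical serverless.yml addition: SQS-triggered queue worker
functions:
    worker:
        handler: Bref\LaravelBridge\Queue\QueueHandler  # verify against your bref/laravel-bridge version
        timeout: 120
        layers:
            - ${bref-extra:php-83}
        events:
            - sqs:
                arn: arn:aws:sqs:ap-southeast-1:123456789012:my-laravel-queue
                batchSize: 1   # one job per invocation keeps retry semantics simple
```

Keep the function timeout above your longest job's runtime, and below SQS's visibility timeout, so jobs are not redelivered while still processing.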

Caching — ElastiCache Redis

Redis via ElastiCache is the standard caching layer for serverless Laravel. Place your ElastiCache cluster in the same VPC as your Lambda functions for lowest latency.

CACHE_DRIVER=redis
REDIS_HOST=my-elasticache-cluster.xxxx.cache.amazonaws.com
SESSION_DRIVER=redis

File Storage — S3

Lambda has limited ephemeral storage (512MB–10GB). All persistent file storage must use S3. Laravel's filesystem abstraction makes this seamless:

FILESYSTEM_DISK=s3
AWS_BUCKET=my-laravel-uploads

Serverless vs Traditional Deployment — Cost and Performance Comparison

Factor               | Serverless (Lambda)           | Traditional (VPS/EC2)
Scaling              | Automatic, instant            | Manual or auto-scaling groups
Idle Cost            | Zero (pay per request)        | Full server cost 24/7
Cold Starts          | 200–800ms initial latency     | None
Max Request Time     | 15 minutes (Lambda limit)     | Unlimited
Server Maintenance   | None                          | OS patches, security, PHP upgrades
Deployment           | Zero-downtime, atomic         | Requires rolling deploy setup
Cost at 100K req/day | ~$20–$40/month (compute only) | ~$50–$100/month (VPS)
Cost at 1M req/day   | ~$150–$300/month              | ~$100–$200/month

Serverless is cheaper for variable and low-to-moderate traffic. Traditional servers become more cost-effective at consistently high traffic volumes. For more on Laravel performance optimization, see our dedicated speed optimization guide.

If you are considering migrating your Laravel application to serverless or need help with API development on AWS, get in touch for a consultation on the right deployment strategy for your use case.

Frequently Asked Questions

What is serverless Laravel?

Serverless Laravel runs your application on AWS Lambda instead of traditional servers, scaling automatically and billing per-request.

How much does Laravel Vapor cost?

Laravel Vapor costs $39/month for the team plan, plus AWS infrastructure costs based on usage.

What is a cold start?

A cold start occurs when Lambda creates a new execution environment, adding 200–800ms latency to the first request.

Should I use Vapor or Bref?

Use Vapor if you want managed deployment with minimal configuration — it handles AWS setup automatically and integrates seamlessly with Laravel's ecosystem. Use Bref if you want full control over your AWS infrastructure, need custom Lambda layers, or want to avoid Vapor's monthly subscription. Vapor is faster to set up; Bref gives more flexibility.

How do I handle database connections in serverless Laravel?

Use RDS Proxy between Lambda and your MySQL or PostgreSQL database. Lambda can scale to hundreds of concurrent instances, each opening its own database connection. Without connection pooling, this quickly exhausts your database's connection limit. RDS Proxy pools and reuses connections, solving this problem. Aurora Serverless combined with RDS Proxy provides a fully serverless database layer.

Do Laravel queues work on Lambda?

Yes. Both Vapor and Bref integrate with Amazon SQS for queue processing. When a job is pushed to SQS, it triggers a Lambda function that processes the job. This provides auto-scaling queue workers that scale to zero when idle. Set QUEUE_CONNECTION=sqs in your environment configuration and configure the SQS queue ARN in your deployment configuration.

How do I reduce cold start times?

Increase Lambda memory to 1024MB or higher (CPU scales with memory), use provisioned concurrency for latency-sensitive endpoints, optimize the Composer autoloader, minimize dependencies, cache configuration and routes, and consider Laravel Octane to keep the application bootstrapped in memory between requests. These optimizations can reduce cold starts from 800ms to under 200ms.

Is serverless cheaper than a traditional server?

For variable or low-to-moderate traffic (under 500K requests/day), serverless is typically cheaper because you pay zero for idle time. For consistently high traffic (1M+ requests/day), traditional servers become more cost-effective. The break-even point depends on your request patterns, compute requirements, and database costs. Calculate both scenarios before deciding.

Can I use Redis for caching and sessions on Lambda?

Yes. Use Amazon ElastiCache for Redis as your caching and session layer. Place the ElastiCache cluster in the same VPC as your Lambda functions for lowest latency. Configure CACHE_DRIVER=redis and SESSION_DRIVER=redis in your environment. Vapor configures this automatically; with Bref, you set up ElastiCache and VPC networking in your serverless.yml.

What are the limitations of serverless Laravel?

Key limitations include the Lambda 15-minute execution timeout (no long-running processes), limited ephemeral storage (512MB–10GB), cold start latency on first requests, no persistent WebSocket connections, higher complexity for debugging and monitoring, and potential vendor lock-in to AWS services. These limitations matter less for API-driven applications but are significant for applications with long-running processes or heavy file operations.

How do I handle file uploads and storage?

All persistent file storage must use Amazon S3 because Lambda's ephemeral storage is temporary and limited. Laravel's filesystem abstraction makes this seamless — set FILESYSTEM_DISK=s3 in your environment and use Storage::put() as normal. For large file uploads, use S3 presigned URLs to upload directly from the client to S3, bypassing Lambda entirely to avoid timeout issues.

Is serverless suitable for multi-tenant SaaS?

Yes. Serverless works well for multi-tenant SaaS, especially with the hybrid database model. Lambda auto-scales to handle traffic from all tenants without capacity planning. Use Aurora Serverless for database scaling, SQS for tenant-specific queue processing, and Redis for tenant-scoped caching. Vapor's built-in environment management makes it straightforward to deploy and manage multiple environments.

How do zero-downtime deployments work on Vapor?

Vapor deploys new versions of your application as new Lambda function versions. When deployment completes, the API Gateway alias switches to the new version atomically. No requests are dropped because the old version continues serving until all in-flight requests complete. This provides truly zero-downtime deployments without the rolling deploy complexity of traditional server setups.

How do I monitor a serverless Laravel application?

AWS CloudWatch provides basic Lambda metrics (invocations, duration, errors, throttles). For application-level monitoring, use Laravel Telescope (development), Sentry or Bugsnag for error tracking, and Datadog or New Relic for full APM. Vapor includes a monitoring dashboard showing Lambda performance, queue throughput, and deployment history out of the box.

How do I migrate an existing Laravel application to serverless?

Start by ensuring your application uses S3 for file storage, Redis for sessions and cache, and SQS for queues — not local filesystem or database-driven sessions. Remove any code that depends on persistent filesystem state. Set up RDS Proxy for your database. Deploy to a staging environment first and run your full test suite. Gradually shift production traffic to the serverless deployment using weighted routing or DNS switching.

