Cost Optimization
Light Stack Pricing Overview
Free Tier Comparison
┌─────────────────────────────────────────────────────────────┐
│ FREE TIER LIMITS │
├─────────────────────────────────────────────────────────────┤
│ │
│ SUPABASE FREE: │
│ ├── Database: 500 MB │
│ ├── Bandwidth: 5 GB │
│ ├── Storage: 1 GB │
│ ├── Edge Functions: 500K invocations │
│ ├── Realtime: 200 concurrent connections │
│ └── Auth: 50,000 MAU │
│ │
│ CLOUDFLARE FREE: │
│ ├── Workers: 100K requests/day │
│ ├── KV: 100K reads, 1K writes/day │
│ ├── R2: 10 GB storage, 10M reads │
│ ├── Queues: 100K operations/month │
│ └── Cron: Included with Workers │
│ │
│ VERCEL FREE (alternative): │
│ ├── Bandwidth: 100 GB │
│ ├── Builds: 6000 minutes │
│ └── Serverless: 100 GB-hours │
│ │
└─────────────────────────────────────────────────────────────┘
Cost Drivers
Where Costs Come From
┌─────────────────────────────────────────────────────────────┐
│ COST BREAKDOWN │
├─────────────────────────────────────────────────────────────┤
│ │
│ SUPABASE: │
│ │ │
│ ├── Database Size │
│ │ ├── Data storage (rows, indexes) │
│ │ └── Logs, backups │
│ │ │
│ ├── Bandwidth (Egress) │
│ │ ├── API responses │
│ │ └── Realtime messages │
│ │ │
│ ├── Edge Functions │
│ │ ├── Invocations │
│ │ └── Compute time │
│ │ │
│ └── Storage │
│ ├── File storage │
│ └── Image transformations │
│ │
│ CLOUDFLARE: │
│ │ │
│ ├── Workers (minimal - CPU time based) │
│ │ │
│ ├── KV (read/write operations) │
│ │ │
│ ├── R2 (storage + operations) │
│ │ │
│ └── Queues (messages processed) │
│ │
└─────────────────────────────────────────────────────────────┘
Paid Tier Pricing
SUPABASE PRO: $25/month
├── 8 GB database
├── 250 GB bandwidth
├── 100 GB storage
├── 2M Edge Function invocations
├── 500 concurrent Realtime connections
├── Daily backups (7-day retention)
└── Point-in-time recovery (available as a paid add-on)
CLOUDFLARE WORKERS PAID: $5/month
├── 10M requests/month included
├── KV: 10M reads/month included, then $0.50/M (writes: $5/M beyond 1M)
├── Queues: 1M operations included
└── CPU time: 30M CPU-ms/month included (Free caps at 10 ms per request)
TOTAL STARTING POINT: ~$30/month
Optimization Strategies
1. Database Optimization
-- ❌ BAD: Selecting all columns
SELECT * FROM tasks;
-- ✅ GOOD: Select only needed columns
SELECT id, title, status FROM tasks;
-- ❌ BAD: N+1 queries (one query per project)
-- for each project:
--   SELECT * FROM tasks WHERE project_id = ?
-- ✅ GOOD: Join in single query
SELECT
p.id,
p.name,
COUNT(t.id) as task_count
FROM projects p
LEFT JOIN tasks t ON t.project_id = p.id
GROUP BY p.id;
-- ✅ Add indexes for common queries
CREATE INDEX idx_tasks_project_status
ON tasks(project_id, status);
CREATE INDEX idx_tasks_user_created
ON tasks(user_id, created_at DESC);
// ✅ Supabase: Use embedded resources (nested selects) efficiently
const { data } = await supabase
.from('projects')
.select(`
id,
name,
tasks:tasks(id, title, status)
`)
.eq('owner_id', userId);
// Instead of multiple separate queries
2. Caching Strategy
// Cache expensive queries in KV
async function getProjectStats(env: Env, projectId: string) {
const cacheKey = `stats:${projectId}`;
// Check cache first
const cached = await env.KV.get(cacheKey, 'json');
if (cached) {
return cached; // Free KV read vs Supabase query
}
// Compute stats
const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_ANON_KEY);
const { data } = await supabase.rpc('get_project_stats', {
p_project_id: projectId
});
// Cache for 5 minutes
await env.KV.put(cacheKey, JSON.stringify(data), {
expirationTtl: 300
});
return data;
}
// Invalidate on updates
async function invalidateStatsCache(env: Env, projectId: string) {
await env.KV.delete(`stats:${projectId}`);
}
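A usage sketch of the helpers above, assuming the Hono-style Worker (app) and Env used elsewhere in this guide; the routes and update logic are illustrative:
// Read path: served from KV when the cache is warm, from Supabase otherwise
app.get('/projects/:id/stats', async (c) => {
  const stats = await getProjectStats(c.env, c.req.param('id'));
  return c.json(stats);
});
// Write path: drop the cached stats after a successful update
app.patch('/projects/:id', async (c) => {
  const projectId = c.req.param('id');
  // ... apply the project update against Supabase here (omitted) ...
  await invalidateStatsCache(c.env, projectId);
  return c.json({ ok: true });
});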
3. Bandwidth Reduction
// ✅ Pagination for large lists
const { data, count } = await supabase
.from('tasks')
.select('id, title, status', { count: 'exact', head: false })
.range(0, 19) // First 20 items
.order('created_at', { ascending: false });
// ✅ Filter early (in database, not client)
const { data } = await supabase
.from('tasks')
.select('id, title')
.eq('status', 'pending')
.gte('created_at', '2024-01-01');
// ✅ Use cursor pagination for realtime feeds
const { data } = await supabase
.from('tasks')
.select('*')
.lt('created_at', lastSeenTimestamp)
.order('created_at', { ascending: false })
.limit(20);
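A small sketch of threading that cursor through repeated calls, using the same client-side supabase instance as above (the function name and page size are illustrative):
// Cursor-pagination helper: each call fetches the next page and
// returns the cursor to pass into the following call.
async function fetchOlderTasks(lastSeenTimestamp: string, pageSize = 20) {
  const { data, error } = await supabase
    .from('tasks')
    .select('id, title, status, created_at') // only the columns the list needs
    .lt('created_at', lastSeenTimestamp)
    .order('created_at', { ascending: false })
    .limit(pageSize);
  if (error) throw error;
  const nextCursor = data.length ? data[data.length - 1].created_at : null;
  return { tasks: data, nextCursor };
}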
Storage Optimization
Supabase Storage Best Practices
// ✅ Compress images before upload
import imageCompression from 'browser-image-compression';
async function uploadImage(file: File) {
const options = {
maxSizeMB: 1,
maxWidthOrHeight: 1920,
useWebWorker: true
};
const compressedFile = await imageCompression(file, options);
const { data, error } = await supabase.storage
.from('uploads')
.upload(`images/${Date.now()}.jpg`, compressedFile);
return data;
}
// ✅ Use image transformations instead of storing multiple sizes
const { data } = supabase.storage
.from('avatars')
.getPublicUrl('user-123.jpg', {
transform: {
width: 100,
height: 100,
resize: 'cover'
}
});
// ✅ Clean up unused files
// Set up a cron job to remove orphaned files
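One possible shape for that cleanup job, as a cron-triggered Cloudflare Worker. The 'uploads' bucket and an 'attachments' table with a file_path column are assumptions; adapt them to your schema:
import { createClient } from '@supabase/supabase-js';

export default {
  async scheduled(_controller: ScheduledController, env: Env): Promise<void> {
    const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_SERVICE_KEY);

    // List a page of stored files (a real job would paginate through all of them)
    const { data: files } = await supabase.storage
      .from('uploads')
      .list('images', { limit: 100 });
    if (!files?.length) return;

    const paths = files.map((f) => `images/${f.name}`);

    // Keep only the paths that some row still references (hypothetical attachments table)
    const { data: referenced } = await supabase
      .from('attachments')
      .select('file_path')
      .in('file_path', paths);
    const keep = new Set((referenced ?? []).map((r) => r.file_path));

    const orphans = paths.filter((p) => !keep.has(p));
    if (orphans.length > 0) {
      await supabase.storage.from('uploads').remove(orphans);
    }
  },
};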
R2 for High-Traffic Assets
// ✅ Move high-traffic static assets to R2 (zero egress fees)
// Instead of serving from Supabase Storage:
// https://xxx.supabase.co/storage/v1/object/public/assets/logo.png
// Serve from R2:
// https://assets.example.com/logo.png
// wrangler.toml
[[r2_buckets]]
binding = "ASSETS"
bucket_name = "my-assets"
// Worker to serve R2 with caching
app.get('/assets/*', async (c) => {
const key = c.req.path.replace('/assets/', '');
const object = await c.env.ASSETS.get(key);
if (!object) {
return c.notFound();
}
return new Response(object.body, {
headers: {
'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
'Cache-Control': 'public, max-age=31536000', // 1 year
},
});
});
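If repeat requests for the same object are common, a variant of the handler above can also consult the Workers Cache API before touching R2 at all (a sketch; behavior is otherwise identical):
app.get('/assets/*', async (c) => {
  const cache = caches.default;

  // Serve straight from the edge cache when possible
  const cached = await cache.match(c.req.raw);
  if (cached) return cached;

  const key = c.req.path.replace('/assets/', '');
  const object = await c.env.ASSETS.get(key);
  if (!object) return c.notFound();

  const response = new Response(object.body, {
    headers: {
      'Content-Type': object.httpMetadata?.contentType || 'application/octet-stream',
      'Cache-Control': 'public, max-age=31536000', // 1 year
    },
  });

  // Cache a copy for later requests without blocking this response
  c.executionCtx.waitUntil(cache.put(c.req.raw, response.clone()));
  return response;
});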
Edge Function Optimization
Supabase Edge Functions
// ❌ BAD: Edge Function for simple queries
// Use direct Supabase client instead!
// ✅ GOOD: Edge Function only when needed
// - External API calls (Stripe, SendGrid)
// - Complex processing
// - Webhooks
// ✅ Keep functions fast (< 1 second)
// - Avoid heavy computation
// - Cache external API responses
// - Use batch operations
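As an example of the "external API call" case, a minimal Edge Function might look like the sketch below (the provider endpoint, secret name, and payload are illustrative):
// supabase/functions/send-welcome-email/index.ts
// Exists only to keep the provider's API key server-side; no simple CRUD here.
Deno.serve(async (req) => {
  const { email } = await req.json();

  const res = await fetch('https://api.example.com/v1/send', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${Deno.env.get('EMAIL_API_KEY')}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ to: email, template: 'welcome' }),
  });

  return new Response(JSON.stringify({ sent: res.ok }), {
    headers: { 'Content-Type': 'application/json' },
  });
});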
Cloudflare Workers
// ✅ Use compute-efficient patterns
// Cloudflare bills CPU time, not wall-clock time: waiting on I/O is cheap, heavy computation is not
// ❌ BAD: CPU-heavy in hot path
app.get('/heavy', async (c) => {
const result = someHeavyComputation(data); // Blocks CPU
return c.json(result);
});
// ✅ GOOD: Offload to queue
app.post('/heavy', async (c) => {
const { data } = await c.req.json();
await c.env.COMPUTE_QUEUE.send({
type: 'heavy_compute',
data
});
return c.json({ status: 'processing' });
});
Queue Cost Optimization
Choose the Right Queue
┌─────────────────────────────────────────────────────────────┐
│ QUEUE COST COMPARISON │
├─────────────────────────────────────────────────────────────┤
│ │
│ SUPABASE pgmq (within DB): │
│ ├── Included in database cost │
│ ├── No extra charges │
│ └── Best for: Transactional workflows │
│ │
│ CLOUDFLARE QUEUES: │
│ ├── Free: 100K operations/month │
│ ├── Paid: $0.40/million operations │
│ └── Best for: High throughput, edge processing │
│ │
│ RECOMMENDATION: │
│ ├── Low volume: pgmq (free with Supabase) │
│ ├── High volume: Cloudflare Queues │
│ └── Transactional: pgmq │
│ │
└─────────────────────────────────────────────────────────────┘
Batch Processing
// ✅ Process messages in batches
export default {
async queue(
batch: MessageBatch<QueueMessage>,
env: Env
): Promise<void> {
// Process up to 100 messages at once
const messages = batch.messages;
// Batch database operations
const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_SERVICE_KEY);
const updates = messages.map(msg => ({
id: msg.body.taskId,
processed_at: new Date().toISOString()
}));
// Single batch update instead of N updates
await supabase.from('tasks').upsert(updates);
// Acknowledge all at once
batch.ackAll();
}
};
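When messages can fail independently, acknowledging and retrying per message keeps you from paying to re-process work that already succeeded (a variant of the consumer above; processMessage is a placeholder for your handler):
export default {
  async queue(batch: MessageBatch<QueueMessage>, env: Env): Promise<void> {
    for (const msg of batch.messages) {
      try {
        await processMessage(msg.body, env); // placeholder for your per-message logic
        msg.ack();                           // done: remove this message from the queue
      } catch (err) {
        console.error('processing failed', err);
        msg.retry();                         // redeliver only this message
      }
    }
  },
};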
Monitoring Costs
Supabase Dashboard
Dashboard → Project → Usage
Monitor:
├── Database size
├── Bandwidth
├── Storage
├── Edge Function invocations
└── Realtime connections
Set alerts for:
├── 80% of database limit
├── Bandwidth spikes
└── Unusual function invocations
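To push these alerts somewhere you actually watch, a small cron-triggered check is one option. A sketch, assuming a hypothetical get_db_size_bytes SQL function (a thin wrapper around pg_database_size(current_database())) and an ALERT_WEBHOOK_URL secret in the Worker environment:
import { createClient } from '@supabase/supabase-js';

const DB_LIMIT_BYTES = 8 * 1024 ** 3; // Supabase Pro: 8 GB database
const ALERT_THRESHOLD = 0.8;          // alert at 80% of the limit

export default {
  async scheduled(_controller: ScheduledController, env: Env): Promise<void> {
    const supabase = createClient(env.SUPABASE_URL, env.SUPABASE_SERVICE_KEY);

    // Hypothetical RPC wrapping: select pg_database_size(current_database())
    const { data: sizeBytes } = await supabase.rpc('get_db_size_bytes');

    if (typeof sizeBytes === 'number' && sizeBytes > DB_LIMIT_BYTES * ALERT_THRESHOLD) {
      await fetch(env.ALERT_WEBHOOK_URL, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          text: `Database at ${((sizeBytes / DB_LIMIT_BYTES) * 100).toFixed(1)}% of the plan limit`,
        }),
      });
    }
  },
};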
Cloudflare Analytics
Dashboard → Workers → Analytics
Monitor:
├── Request count
├── CPU time
├── Error rate
└── Geographic distribution
KV/R2/Queues:
├── Operations count
├── Storage usage
└── Error rates
Cost Estimation Template
Monthly Cost Calculator
PROJECT: [Your Project Name]
USERS: [Expected MAU]
SUPABASE:
├── Plan: Free / Pro ($25)
├── Database overage: $__ (if > limit)
├── Bandwidth overage: $__ (if > limit)
└── Subtotal: $__
CLOUDFLARE:
├── Plan: Free / Paid ($5)
├── Workers (extra): $0.50/M requests
├── KV writes: $5/M (beyond 1M included)
├── R2 storage: $0.015/GB
└── Subtotal: $__
TOTAL MONTHLY: $__
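If you prefer to keep this template executable, a rough estimator can mirror it; the rates below are placeholders taken from this page, so check them against the current pricing pages before relying on the output:
// Rough monthly-cost estimator; rates mirror the template above and are placeholders.
interface UsageEstimate {
  supabasePro: boolean;         // false = Supabase Free
  cloudflarePaid: boolean;      // false = Cloudflare Free
  extraWorkerRequestsM: number; // millions of requests beyond the included amount
  kvWritesM: number;            // millions of KV writes beyond the included amount
  r2StorageGB: number;          // average GB stored in R2
}

function estimateMonthlyCost(u: UsageEstimate): number {
  const supabase = u.supabasePro ? 25 : 0;
  const cloudflare = u.cloudflarePaid ? 5 : 0;
  const workers = u.extraWorkerRequestsM * 0.5; // $0.50/M extra requests
  const kv = u.kvWritesM * 5;                   // $5/M KV writes
  const r2 = u.r2StorageGB * 0.015;             // $0.015/GB-month
  return supabase + cloudflare + workers + kv + r2;
}

// Example: the TaskFlow numbers below stay inside every included limit
console.log(estimateMonthlyCost({
  supabasePro: true,
  cloudflarePaid: false,
  extraWorkerRequestsM: 0,
  kvWritesM: 0,
  r2StorageGB: 0,
})); // → 25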
Example: TaskFlow App (10K MAU)
SUPABASE PRO: $25/month
├── Database: ~500 MB (within limit)
├── Bandwidth: ~50 GB (within limit)
├── Storage: ~5 GB (within limit)
├── Edge Functions: ~100K (within limit)
└── Overage: $0
CLOUDFLARE FREE:
├── Workers: ~50K requests/day (within limit)
├── KV: ~10K reads/day (within limit)
├── Queues: ~10K/month (within limit)
└── Overage: $0
TOTAL: $25/month
Summary
Cost Optimization Checklist
DATABASE:
☐ Select only needed columns
☐ Use proper indexes
☐ Avoid N+1 queries
☐ Clean up old data
CACHING:
☐ Cache expensive queries
☐ Set appropriate TTL
☐ Invalidate on updates
BANDWIDTH:
☐ Paginate large lists
☐ Filter in database
☐ Compress responses
STORAGE:
☐ Compress images
☐ Use transformations
☐ Clean up unused files
☐ Consider R2 for high traffic
COMPUTE:
☐ Edge Functions only when needed
☐ Keep functions fast
☐ Batch operations
Cost-Effective Stack
| Traffic   | Supabase  | Cloudflare | Total/month |
|-----------|-----------|------------|-------------|
| PoC       | Free      | Free       | $0          |
| < 10K MAU | Pro ($25) | Free       | $25         |
| < 50K MAU | Pro ($25) | Paid ($5)  | $30         |
| > 50K MAU | Team+     | Paid       | Custom      |
Q&A
- What does current spending look like?
- Are there any cost bottlenecks?
- What are the application's traffic patterns?