# Free Tier Deployment Options

Deploy your Text Classification API on free tier platforms with optimized configurations.

## Platform Comparison
| Platform | Free Tier Limits | Build Time | Cold Start | Scaling |
|---|---|---|---|---|
| Render | 750 hours/month | Fast | Medium | Manual |
| Railway | $5/month credit | Fast | Fast | Auto |
| Fly.io | 3 shared CPUs | Medium | Fast | Manual |
| Heroku | 550 hours/month | Slow | Slow | Auto |
## Render (Recommended)

### Setup
1. **Connect Repository**: push your project to GitHub so Render can access it.
2. **Create Render Service**:
    - Go to the Render Dashboard
    - Click "New" → "Web Service"
    - Connect your GitHub repository
    - Configure the build settings below
### Build Settings

```yaml
# render.yaml (optional)
services:
  - type: web
    name: text-classifier-api
    env: python
    buildCommand: "pip install -r api/api_requirements.txt"
    startCommand: "cd api && python main.py"
    envVars:
      - key: PORT
        value: 10000
      - key: MODEL_PATH
        value: final_best_model.pkl
      - key: VECTORIZER_PATH
        value: tfidf_vectorizer.pkl
```
### Environment Variables

```
PORT=10000
MODEL_PATH=final_best_model.pkl
VECTORIZER_PATH=tfidf_vectorizer.pkl
MAX_BATCH_SIZE=25
ENABLE_METRICS=false
```
### File Structure for Render

```
your-repo/
├── api/
│   ├── main.py
│   ├── api_requirements.txt
│   └── Dockerfile (optional)
├── final_best_model.pkl
├── tfidf_vectorizer.pkl
└── render.yaml
```
## Railway

### Setup

1. **Install Railway CLI** (commands for all three steps are sketched after this list)
2. **Deploy**
3. **Environment Variables**
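The per-step commands are not shown above; here is a minimal sketch using the Railway CLI, assuming the same environment variables as the Render configuration:

```bash
# 1. Install the Railway CLI (requires Node.js)
npm install -g @railway/cli

# 2. Deploy: authenticate, create/link a project, and push the working directory
railway login
railway init
railway up

# 3. Environment variables (mirroring the Render config)
railway variables set MODEL_PATH=final_best_model.pkl
railway variables set VECTORIZER_PATH=tfidf_vectorizer.pkl
railway variables set MAX_BATCH_SIZE=25
railway variables set ENABLE_METRICS=false
```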
## Fly.io

### Setup

1. **Install Fly CLI** (see the command sketch after the fly.toml below)
2. **Create App**
3. **Configure fly.toml**
```toml
app = "text-classifier-api"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []

[env]
  PORT = "8080"

[experimental]
  allowed_public_ports = []
  auto_rollback = true

[[services]]
  http_checks = []
  internal_port = 8080
  processes = ["app"]
  protocol = "tcp"
  script_checks = []

  [services.concurrency]
    hard_limit = 25
    soft_limit = 20
    type = "connections"

  [[services.ports]]
    force_https = true
    handlers = ["http"]
    port = 80

  [[services.ports]]
    handlers = ["tls", "http"]
    port = 443

  [[services.tcp_checks]]
    grace_period = "1s"
    interval = "15s"
    restart_limit = 0
    timeout = "2s"
```
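The CLI commands for steps 1 and 2 are a minimal sketch; flyctl picks up the fly.toml above when deploying:

```bash
# 1. Install the Fly CLI
curl -L https://fly.io/install.sh | sh

# 2. Create the app (the name must match "app" in fly.toml), then deploy
fly auth login
fly apps create text-classifier-api
fly deploy
```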
## Heroku

Note that Heroku discontinued its free dyno tier in November 2022, so the hours in the comparison table apply only to legacy plans.

### Setup

1. **Install Heroku CLI** (all four steps are sketched after this list)
2. **Create App**
3. **Configure Environment**
4. **Deploy**
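A minimal command sketch for these steps, assuming a Procfile at the repo root that mirrors the Render start command (`web: cd api && python main.py`):

```bash
# 1. Install the Heroku CLI
curl https://cli-assets.heroku.com/install.sh | sh

# 2. Create the app
heroku login
heroku create text-classifier-api

# 3. Configure environment (Heroku assigns PORT itself, so don't set it)
heroku config:set MODEL_PATH=final_best_model.pkl \
  VECTORIZER_PATH=tfidf_vectorizer.pkl \
  MAX_BATCH_SIZE=25 ENABLE_METRICS=false

# 4. Deploy from the main branch
git push heroku main
```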
## Optimization Tips

### Memory Optimization

1. **Reduce Model Size**: compress the pickled artifacts so they fit comfortably within free tier memory and disk limits (see the sketch after this list)
2. **Limit Batch Size**: keep MAX_BATCH_SIZE small (25 in the configs above) so a single request cannot exhaust memory
3. **Disable Metrics in Production**: set ENABLE_METRICS=false to avoid the overhead of metrics collection
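A minimal sketch of shrinking the pickled artifacts with joblib compression, assuming they were saved with joblib/scikit-learn in the first place:

```python
import joblib

# Load the existing artifacts
model = joblib.load("final_best_model.pkl")
vectorizer = joblib.load("tfidf_vectorizer.pkl")

# Re-save with zlib compression (level 3 is a reasonable size/speed trade-off);
# this shrinks the files on disk, which matters for slug/image size limits
joblib.dump(model, "final_best_model.pkl", compress=3)
joblib.dump(vectorizer, "tfidf_vectorizer.pkl", compress=3)
```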
### Cold Start Optimization

1. **Pre-load Models**: models are loaded on startup to reduce cold start time (see the sketch after this list)
2. **Use Connection Pooling**: built-in async processing reduces connection overhead
3. **Optimize Dependencies**: keep api_requirements.txt minimal for faster installs
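A minimal sketch of startup pre-loading. The framework behind api/main.py is not shown here, so this assumes FastAPI and the MODEL_PATH/VECTORIZER_PATH variables from the configs above:

```python
import os
from contextlib import asynccontextmanager

import joblib
from fastapi import FastAPI

MODEL_PATH = os.getenv("MODEL_PATH", "final_best_model.pkl")
VECTORIZER_PATH = os.getenv("VECTORIZER_PATH", "tfidf_vectorizer.pkl")

state = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load both artifacts once at startup so the first request
    # after a cold start doesn't pay the deserialization cost
    state["model"] = joblib.load(MODEL_PATH)
    state["vectorizer"] = joblib.load(VECTORIZER_PATH)
    yield
    state.clear()

app = FastAPI(lifespan=lifespan)

@app.get("/health")
def health():
    # Report ready only once the model is actually in memory
    return {"status": "ok", "model_loaded": "model" in state}
```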
### Cost Optimization

1. **Monitor Usage**: watch hours and credit consumption in each platform's dashboard
2. **Scale Down When Possible**: free tiers often auto-scale based on traffic, so let idle instances spin down
3. **Use CDN for Static Assets**: not applicable for the API itself, but consider it for a frontend
## Troubleshooting

### Common Issues

1. **Memory Limit Exceeded**: lower MAX_BATCH_SIZE and apply the memory optimizations above
2. **Timeout Errors**: free tier instances sleep when idle, so the first request after a cold start may exceed client timeouts
3. **Model Loading Failures**: confirm MODEL_PATH and VECTORIZER_PATH point to files that are actually committed to the repository
### Monitoring

- Check the /health endpoint for service status
- Monitor response times and error rates
- Set up alerts for high memory usage
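For example, polling the health endpoint with curl (the URL is a hypothetical Render hostname):

```bash
# A non-200 status or a slow total time signals trouble
curl -s -w "\nHTTP %{http_code} in %{time_total}s\n" \
  https://text-classifier-api.onrender.com/health
```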
## Migration Between Platforms

### From Heroku to Railway

```bash
# Export environment variables from Heroku and import them into Railway
railway variables set $(heroku config --json | jq -r 'to_entries[] | "\(.key)=\(.value)"')
```