Free Tier Deployment Options

Deploy your Text Classification API on free tier platforms with optimized configurations.

Platform Comparison

Platform   Free Tier Limits   Build Time   Cold Start   Scaling
--------   ----------------   ----------   ----------   -------
Render     750 hours/month    Fast         Medium       Manual
Railway    $5/month credit    Fast         Fast         Auto
Fly.io     3 shared CPUs      Medium       Fast         Manual
Heroku     550 hours/month    Slow         Slow         Auto

Render

Setup

  1. Connect Repository

    # Push your code to GitHub
    git add .
    git commit -m "Initial commit"
    git push origin main
    

  2. Create Render Service

     • Go to the Render Dashboard
     • Click "New" → "Web Service"
     • Connect your GitHub repository
     • Configure build settings:

Build Settings

# render.yaml (optional)
services:
  - type: web
    name: text-classifier-api
    env: python
    buildCommand: "pip install -r api/api_requirements.txt"
    startCommand: "cd api && python main.py"
    envVars:
      - key: PORT
        value: 10000
      - key: MODEL_PATH
        value: final_best_model.pkl
      - key: VECTORIZER_PATH
        value: tfidf_vectorizer.pkl

Environment Variables

PORT=10000
MODEL_PATH=final_best_model.pkl
VECTORIZER_PATH=tfidf_vectorizer.pkl
MAX_BATCH_SIZE=25
ENABLE_METRICS=false
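
A minimal sketch of how main.py might read these variables (the names match the list above; the defaults and parsing are assumptions):

```python
import os

# Read deployment settings with safe defaults so the API also runs locally.
PORT = int(os.environ.get("PORT", "10000"))
MODEL_PATH = os.environ.get("MODEL_PATH", "final_best_model.pkl")
VECTORIZER_PATH = os.environ.get("VECTORIZER_PATH", "tfidf_vectorizer.pkl")
MAX_BATCH_SIZE = int(os.environ.get("MAX_BATCH_SIZE", "25"))
ENABLE_METRICS = os.environ.get("ENABLE_METRICS", "false").lower() == "true"
```

Parsing booleans explicitly ("false" is a truthy string in Python) avoids a common misconfiguration where metrics stay enabled despite the variable being set.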

File Structure for Render

your-repo/
├── api/
│   ├── main.py
│   ├── api_requirements.txt
│   └── Dockerfile (optional)
├── final_best_model.pkl
├── tfidf_vectorizer.pkl
└── render.yaml
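
The render.yaml above installs from api/api_requirements.txt; a plausible minimal version, assuming a FastAPI/uvicorn + scikit-learn stack (pin versions to match your training environment):

```text
fastapi
uvicorn[standard]
scikit-learn
joblib
```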

Railway

Setup

  1. Install Railway CLI

    npm install -g @railway/cli
    railway login
    

  2. Deploy

    cd api
    railway init
    railway up
    

  3. Environment Variables

    railway variables set PORT=8000
    railway variables set MODEL_PATH=../final_best_model.pkl
    railway variables set VECTORIZER_PATH=../tfidf_vectorizer.pkl
    

Fly.io

Setup

  1. Install Fly CLI

    # Download from https://fly.io/docs/getting-started/installing-flyctl/
    flyctl auth login
    

  2. Create App

    cd api
    flyctl launch
    

  3. Configure fly.toml

    app = "text-classifier-api"
    kill_signal = "SIGINT"
    kill_timeout = 5
    processes = []
    
    [env]
      PORT = "8080"
    
    [experimental]
      allowed_public_ports = []
      auto_rollback = true
    
    [[services]]
      http_checks = []
      internal_port = 8080
      processes = ["app"]
      protocol = "tcp"
      script_checks = []
    
      [services.concurrency]
        hard_limit = 25
        soft_limit = 20
        type = "connections"
    
      [[services.ports]]
        force_https = true
        handlers = ["http"]
        port = 80
    
      [[services.ports]]
        handlers = ["tls", "http"]
        port = 443
    
      [[services.tcp_checks]]
        grace_period = "1s"
        interval = "15s"
        restart_limit = 0
        timeout = "2s"
    

Heroku

Setup

  1. Install Heroku CLI

    # Download from https://devcenter.heroku.com/articles/heroku-cli
    heroku login
    

  2. Create App

    cd api
    heroku create your-app-name
    

  3. Configure Environment

    heroku config:set MODEL_PATH=../final_best_model.pkl
    heroku config:set VECTORIZER_PATH=../tfidf_vectorizer.pkl
    

  4. Deploy

    git push heroku main
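
Heroku reads its start command from a Procfile at the repository root; a sketch that mirrors the startCommand used for Render above (adjust if your entrypoint differs):

```text
web: cd api && python main.py
```

Heroku's Python buildpack typically detects the app via a requirements.txt at the repository root, so you may need one there in addition to api/api_requirements.txt.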
    

Optimization Tips

Memory Optimization

  1. Reduce Model Size

    # Use joblib compression (higher compress levels trade speed for size)
    import joblib

    joblib.dump(model, 'model.pkl', compress=3)
    

  2. Limit Batch Size

    export MAX_BATCH_SIZE=25
    

  3. Disable Metrics in Production

    export ENABLE_METRICS=false
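
The joblib compression in step 1 is the usual route; if you want a dependency-free equivalent, the same idea works with pickle plus gzip from the standard library (the model dict below is a stand-in for a real trained model):

```python
import gzip
import os
import pickle

model = {"weights": list(range(10000))}  # stand-in for a trained model

# Plain pickle
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Gzip-compressed pickle: smaller file on disk, slightly slower dump/load
with gzip.open("model.pkl.gz", "wb", compresslevel=3) as f:
    pickle.dump(model, f)

# Loading is symmetric
with gzip.open("model.pkl.gz", "rb") as f:
    restored = pickle.load(f)

print(os.path.getsize("model.pkl") > os.path.getsize("model.pkl.gz"))
```

On free tiers the win is twofold: a smaller slug/image and less memory pressure during deploys.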
    

Cold Start Optimization

  1. Pre-load Models

     Models are loaded on startup to reduce cold start time.

  2. Use Connection Pooling

     Built-in async processing reduces connection overhead.

  3. Optimize Dependencies

     Minimal dependencies make for faster installs.
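
Tip 1 is the difference between eager and lazy loading; a sketch of warming the model once per process so the first request never pays the load cost (load_model here is a stand-in for joblib.load):

```python
import functools
import time


def load_model():
    """Stand-in for joblib.load(MODEL_PATH); simulate a slow load."""
    time.sleep(0.1)
    return {"name": "classifier"}


@functools.lru_cache(maxsize=1)
def get_model():
    # Cached: the expensive load runs at most once per process.
    return load_model()


# Eagerly warm the cache at startup (module import) instead of on first request.
get_model()

start = time.perf_counter()
model = get_model()  # cache hit, no reload
elapsed = time.perf_counter() - start
print(elapsed < 0.05)
```

In a web framework you would trigger the warm-up call from the app's startup hook rather than at module import, but the caching pattern is the same.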

Cost Optimization

  1. Monitor Usage

    # Check memory usage
    curl https://your-api.com/health
    

  2. Scale Down When Possible

     Free tiers often auto-scale based on traffic.

  3. Use a CDN for Static Assets

     Not applicable to the API itself, but worth considering for a frontend.

Troubleshooting

Common Issues

  1. Memory Limit Exceeded

    Solution: Reduce MAX_BATCH_SIZE, disable metrics
    

  2. Timeout Errors

    Solution: Increase timeout limits, optimize model loading
    

  3. Model Loading Failures

    Solution: Ensure model files are in correct paths
    

Monitoring

  • Check /health endpoint for service status
  • Monitor response times and error rates
  • Set up alerts for high memory usage
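
A health probe is easy to automate with only the standard library; a sketch, assuming /health returns JSON with a status field (the exact response shape is an assumption):

```python
import json
import urllib.error
import urllib.request


def is_healthy(payload: dict) -> bool:
    # Assumed response shape: {"status": "healthy", ...}
    return payload.get("status") == "healthy"


def check(url: str, timeout: float = 5.0) -> bool:
    """Fetch a /health endpoint and evaluate the JSON it returns."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_healthy(json.load(resp))
    except (urllib.error.URLError, ValueError, OSError):
        return False


print(is_healthy({"status": "healthy", "model_loaded": True}))  # True
```

Run check("https://your-api.com/health") from a cron job or uptime monitor to catch crashed instances before users do.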

Migration Between Platforms

From Heroku to Railway

# Copy config vars from Heroku into Railway
railway variables set $(heroku config --json | jq -r 'to_entries[] | "\(.key)=\(.value)"')

Backup Strategy

# Download model files before migration
# (assumes your deployment exposes them for download; otherwise copy them
# from version control or your original training environment)
curl -O https://your-api.com/final_best_model.pkl
curl -O https://your-api.com/tfidf_vectorizer.pkl