Deploying Your MCP Server

This guide explains how to deploy your Model Context Protocol (MCP) server to various platforms.

Deploying to Production

In this guide, you'll learn how to:

  • Prepare for deployment - Configure your MCP server for a production environment
  • Choose a deployment platform - Explore different hosting options for your server
  • Set up continuous deployment - Automate your deployment process
  • Monitor and maintain - Keep your MCP server running smoothly
  • Troubleshoot issues - Solve common deployment problems

Introduction

After developing your MCP server locally, the next step is to deploy it to a production environment where it can be accessed by your applications and users. This guide covers several deployment options, from simple to advanced.

Preparing for Deployment

Before You Deploy

Before deploying your MCP server to production, ensure you have:

  • Production Configuration - Environment-specific settings for production
  • Security Measures - Authentication, HTTPS, input validation
  • Error Handling - Robust error handling for production
  • Logging - Appropriate logging level for production
  • Performance Optimizations - Caching, efficient resource usage
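For the error-handling item, a minimal sketch of a production error handler (this assumes Express-style middleware; the `errorHandler` name and response shape are illustrative, not part of any MCP SDK):

```javascript
// Minimal production error handler sketch (assumes Express-style
// middleware; handler name and response shape are illustrative).
function errorHandler(err, req, res, next) {
  // Log full details server-side, but never leak stack traces to clients.
  console.error(err.stack || err.message);
  const status = err.statusCode || 500;
  res.status(status).json({
    error: status === 500 ? 'Internal Server Error' : err.message,
  });
}

module.exports = { errorHandler };
```

In an Express app this would be registered last, after all routes, so it catches errors from every handler.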

Production Configuration Checklist

Make sure to complete these steps before deploying:

  • Set NODE_ENV to "production"
  • Configure secure environment variables
  • Enable authentication
  • Configure HTTPS or TLS
  • Set appropriate logging levels
  • Implement rate limiting
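Several of these items can be enforced at startup. A minimal sketch of environment validation in Node.js (the `validateEnv` helper and the variable names are illustrative, not part of any MCP SDK):

```javascript
// Fail fast at startup if required configuration is missing.
// REQUIRED_VARS and validateEnv are illustrative names, not an MCP API.
const REQUIRED_VARS = ['NODE_ENV', 'PORT', 'LOG_LEVEL'];

function validateEnv(env = process.env) {
  const missing = REQUIRED_VARS.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
  return env;
}

module.exports = { validateEnv };
```

Calling `validateEnv()` at the top of your entry point makes a misconfigured deployment fail immediately at startup rather than on the first request.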

Deployment Options

1. TRMX AI Platform Deployment

The simplest way to deploy your MCP server is using the TRMX AI platform, which provides:

  • Optimized hosting - Environment specially configured for MCP servers
  • Automatic scaling - Handle varying loads without manual intervention
  • Built-in monitoring - Track performance and usage metrics
  • Easy updates - Simple deployment workflow for new versions
  • Managed security - Automatic security updates and HTTPS

Prerequisites

  • TRMX AI account (sign up at trmx.ai)
  • TRMX CLI installed

Deployment Steps

# Login to your TRMX AI account
trmx login

# Navigate to your MCP server project
cd your-mcp-server

# Configure your application (first time only)
trmx init

# Deploy your application
trmx deploy

The deployment process typically takes a few minutes. Once completed, you'll receive a unique URL for your MCP server.

Configuration Options

You can configure your deployment using a trmx.config.js file:

// trmx.config.js
module.exports = {
  name: 'my-mcp-server',
  region: 'us-east-1',
  environment: {
    NODE_ENV: 'production',
    LOG_LEVEL: 'info',
  },
  scaling: {
    minInstances: 1,
    maxInstances: 5,
    targetCpuUtilization: 70,
  },
  database: {
    type: 'mongodb',
    version: '5.0',
  },
};

2. Docker Deployment

Deploying with Docker provides consistency across environments and is suitable for various hosting platforms.

Prerequisites

  • Docker installed
  • Docker registry access (Docker Hub, GitHub Container Registry, etc.)

Dockerfile

Create a Dockerfile in your project root:

FROM node:18-alpine

# Create app directory
WORKDIR /app

# Install app dependencies
COPY package*.json ./
RUN npm ci --omit=dev

# Bundle app source
COPY dist/ ./dist/

# Set environment variables
ENV NODE_ENV=production
ENV PORT=3000

# Expose port
EXPOSE 3000

# Start the application
CMD ["node", "dist/index.js"]

Build and Deploy

# Build Docker image
docker build -t your-registry/my-mcp-server:latest .

# Push to registry
docker push your-registry/my-mcp-server:latest

# Run container
docker run -p 3000:3000 --env-file .env.production your-registry/my-mcp-server:latest

3. Cloud Provider Deployment

Cloud Deployment Options

MCP servers can be deployed to major cloud providers using various services:

  • AWS - Elastic Beanstalk, Lambda + API Gateway, ECS
  • Google Cloud - App Engine, Cloud Run, GKE
  • Microsoft Azure - App Service, Container Instances, AKS
  • Digital Ocean - App Platform, Droplets with Docker

AWS Deployment

AWS Elastic Beanstalk

Elastic Beanstalk is a simple way to deploy and manage applications on AWS.

# Install EB CLI
pip install awsebcli

# Initialize EB application
eb init

# Create environment
eb create production-environment

# Deploy application
eb deploy

AWS Lambda with API Gateway

For serverless deployment:

  1. Create a Lambda function wrapper:

// lambda.js
const { createServer, proxy } = require('aws-serverless-express');
const { app } = require('./dist/app');

const server = createServer(app);

exports.handler = (event, context) => {
  return proxy(server, event, context);
};

  2. Package your application:

zip -r function.zip dist/ node_modules/ lambda.js package.json

  3. Create the Lambda function and API Gateway using the AWS CLI or console

Google Cloud Platform

Google App Engine

  1. Create app.yaml:

runtime: nodejs16

env_variables:
  NODE_ENV: "production"
  LOG_LEVEL: "info"

  2. Deploy:

gcloud app deploy

Google Cloud Run

  1. Build a Docker image:

gcloud builds submit --tag gcr.io/your-project/my-mcp-server

  2. Deploy to Cloud Run:

gcloud run deploy my-mcp-server \
  --image gcr.io/your-project/my-mcp-server \
  --platform managed \
  --allow-unauthenticated

Microsoft Azure

Azure App Service

  1. Package your application:

zip -r package.zip dist/ package.json

  2. Create and deploy to App Service:

az webapp up --runtime "NODE|16-lts" --name my-mcp-server --resource-group my-resource-group

4. Traditional VPS Deployment

For VPS providers like DigitalOcean, Linode, or Vultr:

Setup Process

  1. Provision a VPS with Ubuntu 20.04 or newer

  2. Install Node.js:

curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs

  3. Install PM2 process manager:
npm install -g pm2

  4. Clone your repository:
git clone https://github.com/yourusername/your-mcp-server.git
cd your-mcp-server

  5. Install dependencies and build:
npm ci
npm run build

  6. Create environment file:
nano .env.production
# Add your environment variables

  7. Start with PM2:
pm2 start dist/index.js --name my-mcp-server
pm2 save
pm2 startup

  8. Configure Nginx as a reverse proxy:
sudo apt-get install -y nginx
sudo nano /etc/nginx/sites-available/my-mcp-server

Nginx configuration:

server {
  listen 80;
  server_name your-domain.com;

  location / {
    proxy_pass http://localhost:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
  }
}

Enable the configuration:

sudo ln -s /etc/nginx/sites-available/my-mcp-server /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx

  9. Set up SSL with Let's Encrypt:
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d your-domain.com
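The `pm2 start` flags used in step 7 can also live in a PM2 ecosystem file, which keeps the process configuration under version control. A sketch (the app name, script path, and memory limit are examples):

```javascript
// ecosystem.config.js -- declarative equivalent of the `pm2 start`
// flags used above; name, script path, and limits are examples.
const config = {
  apps: [
    {
      name: 'my-mcp-server',
      script: 'dist/index.js',
      instances: 1,
      autorestart: true,
      max_memory_restart: '512M', // restart the process if it exceeds 512 MB
      env_production: {
        NODE_ENV: 'production',
        PORT: 3000,
      },
    },
  ],
};

module.exports = config;
```

Start it with `pm2 start ecosystem.config.js --env production`, then `pm2 save` as before.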

Continuous Deployment

Automating Deployment

Continuous deployment provides these benefits for your MCP server:

  • Faster releases - Automatically deploy code changes after tests pass
  • Consistency - Eliminate manual deployment steps and human error
  • Version control - Keep track of what's deployed and when
  • Easy rollbacks - Quickly revert to previous versions if needed

GitHub Actions

Create a workflow file at .github/workflows/deploy.yml:

name: Deploy MCP Server

on:
  push:
    branches: [ main ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Install TRMX CLI
        run: npm install -g @trmx/cli

      - name: Deploy to TRMX AI
        run: trmx deploy
        env:
          TRMX_TOKEN: ${{ secrets.TRMX_TOKEN }}

GitLab CI/CD

Create a .gitlab-ci.yml file:

stages:
  - build
  - test
  - deploy

build:
  stage: build
  image: node:18
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - dist/
      - node_modules/

test:
  stage: test
  image: node:18
  script:
    - npm test

deploy:
  stage: deploy
  image: node:18
  script:
    - npm install -g @trmx/cli
    - trmx login --token $TRMX_TOKEN
    - trmx deploy
  only:
    - main

Post-Deployment

Monitoring

Monitoring Solutions

Monitor your deployed MCP server using these tools:

  • TRMX Dashboard - For servers deployed on TRMX AI
  • CloudWatch - For AWS deployments
  • Prometheus & Grafana - For custom monitoring
  • Datadog or New Relic - For comprehensive application monitoring
  • Sentry - For error tracking and performance monitoring
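Most of these tools probe a health endpoint. A minimal health-check sketch (the `/healthz` path, `healthCheck` helper, and response shape are conventions, not an MCP requirement; the Express wiring is an assumption):

```javascript
// Health-check payload for monitoring probes. The function name and
// response shape are illustrative conventions, not an MCP API.
function healthCheck() {
  return {
    status: 'ok',
    uptimeSeconds: Math.round(process.uptime()),
    timestamp: new Date().toISOString(),
  };
}

// Wiring it into an Express app (assumption):
//   const app = require('express')();
//   app.get('/healthz', (req, res) => res.json(healthCheck()));

module.exports = { healthCheck };
```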

Scaling

Horizontal Scaling

Increase the number of instances to handle more traffic:

// TRMX AI scaling configuration
module.exports = {
  scaling: {
    minInstances: 2,
    maxInstances: 10,
    targetCpuUtilization: 70,
  },
};

Vertical Scaling

Increase resources (CPU, memory) for your instances:

// TRMX AI resource configuration
module.exports = {
  resources: {
    cpu: '1',
    memory: '2Gi',
  },
};

Maintenance

Database Backups

For MongoDB:

# Create backup
mongodump --uri="mongodb://username:password@host:port/database" --out=backup

# Restore backup
mongorestore --uri="mongodb://username:password@host:port/database" backup

Log Rotation

Configure log rotation to prevent disk space issues:

# PM2 log rotation
pm2 install pm2-logrotate
pm2 set pm2-logrotate:max_size 10M
pm2 set pm2-logrotate:retain 7

Troubleshooting Deployment Issues

Common Deployment Issues

When troubleshooting deployment problems:

  1. Check environment variables - Missing or incorrect values often cause startup failures
  2. Verify network configuration - Ensure firewalls and security groups allow necessary traffic
  3. Examine resource constraints - Memory limits or CPU throttling can cause unexpected behavior
  4. Review logs thoroughly - Application logs, server logs, and deployment logs
  5. Test incrementally - Deploy minimal changes to isolate the source of problems

Common Issues and Solutions

| Issue | Possible Cause | Solution |
| --- | --- | --- |
| Application won't start | Missing environment variables | Check .env files and server environment configuration |
| Connection refused | Firewall blocking access | Configure the firewall to allow traffic on your application port |
| Database connection errors | Wrong connection string or credentials | Verify database connection parameters |
| High memory usage | Memory leaks or insufficient resources | Increase memory allocation or fix memory leaks in code |
| Slow response times | Insufficient resources or inefficient code | Profile the application and optimize resource-intensive operations |

Deployment Logs

Always check logs for issues:

# TRMX AI logs
trmx logs my-mcp-server

# PM2 logs
pm2 logs my-mcp-server

# Docker logs
docker logs container_id

# AWS Elastic Beanstalk logs
eb logs

Best Practices

Deployment Best Practices

Follow these guidelines for reliable MCP server deployments:

  • Use Environment Variables - Never hardcode sensitive information
  • Implement Health Checks - Add monitoring endpoints to detect issues
  • Version Control Configuration - Track all deployment configurations
  • Blue-Green Deployments - Use zero-downtime deployment strategies
  • Automate Testing - Run comprehensive tests before each deployment
  • Set Up Alerting - Configure notifications for critical issues
  • Document Everything - Maintain detailed deployment documentation
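Zero-downtime strategies like blue-green deployments also rely on the old instance shutting down cleanly. A graceful-shutdown sketch (the `setupGracefulShutdown` helper is illustrative, not part of any MCP SDK; it assumes an http.Server-like object with `close(callback)`):

```javascript
// Graceful shutdown sketch: stop accepting new connections, let
// in-flight requests finish, then exit. setupGracefulShutdown is an
// illustrative helper name, not an MCP API.
function setupGracefulShutdown(server, { timeoutMs = 10000, exit = process.exit } = {}) {
  const shutdown = () => {
    server.close(() => exit(0)); // exit cleanly once connections drain
    // Force exit if draining hangs; unref() lets the process end sooner.
    setTimeout(() => exit(1), timeoutMs).unref();
  };
  process.on('SIGTERM', shutdown);
  process.on('SIGINT', shutdown);
  return shutdown;
}

module.exports = { setupGracefulShutdown };
```

Orchestrators such as Kubernetes, and process managers like PM2, send SIGTERM (or SIGINT) before killing a process, so handling these signals is what makes rolling restarts invisible to clients.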

Next Steps

Continue Learning

Now that you've deployed your MCP server, you might want to: