Overview

This guide walks you through migrating your Dgraph database from managed cloud services to a self-hosted environment. It covers step-by-step instructions for deployment using various cloud providers and methods, supporting goals like cost savings, increased control, and compliance.

What You’ll Learn

Migration Planning

Assessment and planning strategies—including downtime minimization, resource calculations, and risk mitigation

Data Export & Import

Instructions for exporting data, backing up schema and ACLs, loading via bulk or live methods, and verifying data integrity

Infrastructure Deployment

Deployment guidance for Kubernetes clusters, Docker Compose on Linux, and VPS configurations with load balancing and storage setup

Operations & Monitoring

Monitoring and alerting, backup and disaster recovery, performance optimization, and security hardening practices.

Deployment Options

When migrating to self-hosted Dgraph, your deployment choice depends on several key factors: data size, team expertise, budget constraints, and control requirements. Here's how these factors influence your decision:

Data Size Considerations:
  • Under 100GB: Docker Compose or Linux VPS are suitable options
  • 100GB to 1TB: Kubernetes or Linux VPS can handle the load
  • Over 1TB: Kubernetes is required for proper scaling and management
Team Expertise Factors:
  • High Kubernetes Experience: Kubernetes deployment is recommended
  • Limited Kubernetes Experience: Docker Compose or Linux VPS are more approachable
Budget Constraints:
  • Cost Optimized: Linux VPS provides the most economical option
  • Balanced: Docker Compose offers a good middle ground
  • Enterprise: Kubernetes provides enterprise-grade features
Control Requirements:
  • Maximum Control: Linux VPS gives you full control over the environment
  • Managed Infrastructure: Kubernetes provides managed infrastructure capabilities
Available Deployment Methods:
  • Kubernetes: Best for large-scale deployments, enterprise environments, and teams with K8s expertise
  • Docker Compose: Ideal for development, testing, and smaller production workloads
  • Linux VPS: Perfect for cost-conscious deployments and teams wanting maximum control
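The data-size rule of thumb above is easy to encode when scripting environment setup; this is a sketch only — the thresholds mirror the bullets, and `suggest_deployment` is an illustrative helper, not a Dgraph tool.

```shell
#!/usr/bin/env bash
# Suggest a deployment method from the data-size guidance above.
suggest_deployment() {
  local data_gb=$1
  if [ "$data_gb" -lt 100 ]; then
    echo "Docker Compose or Linux VPS"
  elif [ "$data_gb" -le 1024 ]; then
    echo "Kubernetes or Linux VPS"
  else
    echo "Kubernetes"
  fi
}

suggest_deployment 50     # under 100GB
suggest_deployment 500    # 100GB to 1TB
suggest_deployment 2048   # over 1TB
```

Team expertise, budget, and control requirements still apply on top of this; treat the output as a starting point, not a verdict.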

Prerequisites

Before starting your migration, ensure you have the necessary tools, access, and resources.

Required Tools

Command Line Tools

  • kubectl (v1.24+)
  • helm (v3.8+)
  • dgraph CLI tool
  • curl or similar HTTP client
  • Cloud provider CLI tools

Development Tools

  • Docker (for local testing)
  • Git (for configuration management)
  • Text editor or IDE
  • SSH client

Access Requirements

  • Dgraph Cloud: Admin access to export data
  • Hypermode Graph: Database access credentials
  • Network: Ability to download large datasets

Minimum resource requirements

  • Nodes: 3 x 4 vCPU, 8 GB RAM
  • Storage: 200 GB solid-state drive per node
  • Network: 1 Gbps bandwidth
  • Estimated Cost: $150-300/month

Data export from source systems

The first step in migration is safely exporting your data from your current managed service. This section covers export procedures for both Dgraph Cloud and Hypermode Graphs.

Exporting from Dgraph Cloud

Dgraph Cloud provides several methods for exporting your data, including admin API endpoints and the web interface.

Method 1: Using the Web Interface

1. Access Export Function

Log into your Dgraph Cloud dashboard and navigate to your cluster.
Screenshot placeholder: Show the main dashboard with cluster overview

2. Navigate to Export

Click on the "Export" tab in your cluster management interface.
Screenshot placeholder: Highlight the export tab in the navigation

3. Configure Export Settings

Select your export format and destination. Dgraph Cloud supports:
  • Format: JSON or RDF
  • Destination: Download link or cloud storage
  • Compression: Optional gzip compression
Screenshot placeholder: Show export configuration options

4. Start Export Process

Click "Start Export" and monitor the progress. Large datasets may take several hours.
Screenshot placeholder: Show export progress indicator

5. Download Exported Data

Once complete, download your exported data files.
Screenshot placeholder: Show download links for completed export

Method 2: Using Admin API

Query the /admin endpoint to inspect cluster state (groups, members, and leaders) before triggering an export:

curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
  -H "Content-Type: application/json" \
  -d '{"query": "{ state { groups { id members { id addr leader lastUpdate } } } }"}'

Method 3: Bulk Export for Large Datasets

For datasets larger than 10GB, use the bulk export feature:
# Double quotes let $(date +%Y-%m-%d) expand; inner quotes are escaped for JSON
curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
  -H "Content-Type: application/json" \
  -d "{\"query\": \"mutation { export(input: { destination: \\\"s3://your-backup-bucket/$(date +%Y-%m-%d)\\\", format: \\\"rdf\\\", namespace: 0 }) { response { message code } } }\"}"

Exporting from Hypermode Graphs

Hypermode Graphs provides native export capabilities through both API and interface methods.

Using admin endpoint

curl --location 'https://<YOUR_CLUSTER_NAME>.hypermode.host/dgraph/admin' \
--header 'Content-Type: application/json' \
--header 'Dg-Auth: ••••••' \
--data '{"query":"mutation {\n  export(input: { format: \"rdf\" }) {\n    response {\n      message\n      code\n    }\n  }\n}","variables":{}}'

Export Validation and Preparation

Always validate your exported data before proceeding with the migration.

Data Integrity Checks

# Check file sizes and contents
ls -lah exported_data/
file exported_data/*

# For RDF exports, count triples
if [[ -f "exported_data/g01.rdf.gz" ]]; then
  zcat exported_data/g01.rdf.gz | wc -l
fi

# For JSON exports, validate structure
if [[ -f "exported_data/g01.json.gz" ]]; then
  zcat exported_data/g01.json.gz | jq '.[] | keys' | head -10
fi

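Dgraph exports are often split into per-group shard files (g01.rdf.gz, g02.rdf.gz, and so on); this sketch totals triples across every shard, assuming the one-triple-per-line layout of RDF exports — `count_triples` is an illustrative helper, not part of the Dgraph CLI.

```shell
#!/usr/bin/env bash
# Sum line counts (one triple per line) across all exported RDF shards.
count_triples() {
  local dir=$1 total=0 n
  for f in "$dir"/*.rdf.gz; do
    [ -e "$f" ] || continue          # skip when no shards match
    n=$(zcat "$f" | wc -l)
    echo "$f: $n triples"
    total=$((total + n))
  done
  echo "total: $total"
}

count_triples exported_data
```

Compare the total against the count reported by your source cluster before moving on.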
Prepare for Transfer

1. Organize Export Files

# Create organized directory structure
mkdir -p migration_data/{data,schema,acl,scripts}

# Move files to appropriate directories
mv exported_data/*.rdf.gz migration_data/data/
mv schema* migration_data/schema/
mv acl* migration_data/acl/

2. Create Checksums

# Generate checksums for integrity verification
cd migration_data
find . -type f -name "*.gz" -exec sha256sum {} \; > checksums.txt
find . -type f -name "*.json" -exec sha256sum {} \; >> checksums.txt

3. Compress for Transfer

# Create final migration package
cd ..
tar -czf migration_package_$(date +%Y%m%d).tar.gz migration_data/

# Verify package
tar -tzf migration_package_*.tar.gz | head -10
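After transferring the package, the checksums generated above can be re-verified on the target host; `verify_checksums` is an illustrative wrapper around `sha256sum -c`, which prints "<file>: OK" per entry and exits non-zero on any mismatch.

```shell
#!/usr/bin/env bash
# Re-check every file in the migration directory against checksums.txt.
# The paths in checksums.txt are relative, so run from inside the directory.
verify_checksums() {
  (cd "$1" && sha256sum -c checksums.txt)
}

# usage (after extracting the package on the target host):
#   verify_checksums migration_data
```

Run this before starting the bulk or live load so a corrupted transfer is caught early.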

Pre-Migration Planning

Proper planning is crucial for a successful migration. This section helps you assess your current environment and plan the migration strategy.

1. Assess Current Environment

# For Dgraph Cloud
curl -X POST https://your-cluster.grpc.cloud.dgraph.io/admin \
  -H "Content-Type: application/json" \
  -d '{"query": "{ state { groups { id checksum tablets { predicate space } } } }"}'

# For Hypermode Graph
hypermode graph stats --detailed

2. Infrastructure Sizing

CPU Requirements

  • Alpha Nodes: 2-4 cores per 1M edges
  • Zero Nodes: 1-2 cores (coordination only)
  • Load Balancer: 2-4 cores
  • Monitoring: 1-2 cores

Memory Requirements

  • Alpha Nodes: 4-8 GB base + 1 GB per 10M edges
  • Zero Nodes: 2-4 GB (metadata storage)
  • Load Balancer: 2-4 GB
  • Monitoring: 4-8 GB

Storage Requirements

  • Data Volume: 3-5x compressed export size
  • WAL Logs: 20-50 GB per node
  • Backup Space: 2x data volume
  • Monitoring: 50-100 GB

Network Requirements

  • Internal: 1 Gbps minimum between nodes
  • External: 100 Mbps minimum for clients
  • Bandwidth: Plan for 3x normal traffic during migration
  • Latency: <10 ms between data nodes
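The sizing rules above reduce to quick back-of-envelope arithmetic; this sketch applies the 8 GB base + 1 GB per 10M edges memory rule and the middle of the 3-5x storage multiplier (the function names are illustrative, not Dgraph tooling).

```shell
#!/usr/bin/env bash
# Back-of-envelope sizing from the tables above.
estimate_alpha_memory_gb() {   # $1 = edge count, in millions
  echo $((8 + $1 / 10))        # 8 GB base + 1 GB per 10M edges
}
estimate_data_volume_gb() {    # $1 = compressed export size, in GB
  echo $(($1 * 4))             # 4x: middle of the 3-5x range
}

estimate_alpha_memory_gb 50    # for a 50M-edge graph
estimate_data_volume_gb 20     # for a 20GB compressed export
```

Treat the results as minimums per Alpha node and add headroom for growth during the migration window.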

AWS Deployment

Deploy your self-hosted Dgraph cluster on Amazon Web Services using Elastic Kubernetes Service (EKS).

1. Infrastructure Setup

EKS Cluster Creation

aws eks create-cluster \
  --name dgraph-cluster \
  --version 1.28 \
  --role-arn arn:aws:iam::ACCOUNT:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-12345,securityGroupIds=sg-12345

Storage Class Configuration

aws-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: dgraph-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true

2. Dgraph Deployment on AWS

1. Apply Storage Class

kubectl apply -f aws-storage-class.yaml

2. Add Helm Repository

helm repo add dgraph https://charts.dgraph.io
helm repo update

3. Create Namespace

kubectl create namespace dgraph

4. Deploy Dgraph

helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set image.tag="v23.1.0" \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.size="100Gi" \
  --set alpha.replicaCount=3 \
  --set zero.replicaCount=3 \
  --set alpha.resources.requests.memory="8Gi" \
  --set alpha.resources.requests.cpu="2000m"

3. Load Balancer Configuration

aws-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-ingress
  namespace: dgraph
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:REGION:ACCOUNT:certificate/CERT-ID
spec:
  rules:
    - host: dgraph.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dgraph-dgraph-alpha
                port:
                  number: 8080

Google Cloud Platform Deployment

Deploy your self-hosted Dgraph cluster on Google Cloud Platform using Google Kubernetes Engine (GKE).

1. GKE Cluster Setup

gcloud container clusters create dgraph-cluster \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --num-nodes=3 \
  --disk-size=100GB \
  --disk-type=pd-ssd \
  --enable-autoscaling \
  --min-nodes=3 \
  --max-nodes=9 \
  --enable-autorepair \
  --enable-autoupgrade

2. Deploy Dgraph on GKE

# Create namespace
kubectl create namespace dgraph

# Deploy with Helm
helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.size="100Gi" \
  --set alpha.replicaCount=3 \
  --set zero.replicaCount=3

3. Load Balancer Setup

gcp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dgraph-ingress
  namespace: dgraph
  annotations:
    kubernetes.io/ingress.global-static-ip-name: dgraph-ip
    networking.gke.io/managed-certificates: dgraph-ssl-cert
spec:
  rules:
    - host: dgraph.yourdomain.com
      http:
        paths:
          - path: /*
            pathType: ImplementationSpecific
            backend:
              service:
                name: dgraph-dgraph-alpha
                port:
                  number: 8080

Azure Deployment

Deploy your self-hosted Dgraph cluster on Microsoft Azure using Azure Kubernetes Service (AKS).

1. AKS Cluster Creation

# Create a resource group
az group create --name dgraph-rg --location eastus

# Create the AKS cluster (node count and VM size are illustrative; match the sizing guidance above)
az aks create \
  --resource-group dgraph-rg \
  --name dgraph-cluster \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --generate-ssh-keys

# Fetch kubectl credentials
az aks get-credentials --resource-group dgraph-rg --name dgraph-cluster

2. Deploy Dgraph on AKS

# Create namespace
kubectl create namespace dgraph

# Deploy with Helm
helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.size="100Gi"

Digital Ocean Deployment

Kubernetes Deployment (DOKS)

1. DOKS Cluster Setup

doctl kubernetes cluster create dgraph-cluster \
  --region nyc1 \
  --version 1.28.2-do.0 \
  --node-pool="name=worker-pool;size=s-4vcpu-8gb;count=3;auto-scale=true;min-nodes=3;max-nodes=9"

2. Deploy Dgraph on DOKS

# Create namespace
kubectl create namespace dgraph

# Deploy with Helm
helm install dgraph dgraph/dgraph \
  --namespace dgraph \
  --set alpha.persistence.storageClass="dgraph-storage" \
  --set zero.persistence.storageClass="dgraph-storage" \
  --set alpha.persistence.size="500Gi" \
  --set zero.persistence.size="100Gi"

Docker Compose Deployment

1. Prepare Docker Compose Environment

1. Create Directory Structure

mkdir -p dgraph-compose/{data,config,backups,nginx}
cd dgraph-compose

2. Create Docker Compose File

docker-compose.yml
version: '3.8'

services:
  # Dgraph Zero nodes
  dgraph-zero-1:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-zero-1
    ports:
      - "5080:5080"
      - "6080:6080"
    volumes:
      - ./data/zero1:/dgraph
    command: dgraph zero --my=dgraph-zero-1:5080 --replicas=3 --idx=1
    restart: unless-stopped
    networks:
      - dgraph-network

  dgraph-zero-2:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-zero-2
    ports:
      - "5081:5080"
      - "6081:6080"
    volumes:
      - ./data/zero2:/dgraph
    command: dgraph zero --my=dgraph-zero-2:5080 --replicas=3 --peer=dgraph-zero-1:5080 --idx=2
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-zero-1

  dgraph-zero-3:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-zero-3
    ports:
      - "5082:5080"
      - "6082:6080"
    volumes:
      - ./data/zero3:/dgraph
    command: dgraph zero --my=dgraph-zero-3:5080 --replicas=3 --peer=dgraph-zero-1:5080 --idx=3
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-zero-1

  # Dgraph Alpha nodes
  dgraph-alpha-1:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-alpha-1
    ports:
      - "8080:8080"
      - "9080:9080"
    volumes:
      - ./data/alpha1:/dgraph
    command: dgraph alpha --my=dgraph-alpha-1:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-zero-1
      - dgraph-zero-2
      - dgraph-zero-3

  dgraph-alpha-2:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-alpha-2
    ports:
      - "8081:8080"
      - "9081:9080"
    volumes:
      - ./data/alpha2:/dgraph
    command: dgraph alpha --my=dgraph-alpha-2:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-zero-1
      - dgraph-zero-2
      - dgraph-zero-3

  dgraph-alpha-3:
    image: dgraph/dgraph:v23.1.0
    container_name: dgraph-alpha-3
    ports:
      - "8082:8080"
      - "9082:9080"
    volumes:
      - ./data/alpha3:/dgraph
    command: dgraph alpha --my=dgraph-alpha-3:7080 --zero=dgraph-zero-1:5080,dgraph-zero-2:5080,dgraph-zero-3:5080 --security whitelist=0.0.0.0/0
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-zero-1
      - dgraph-zero-2
      - dgraph-zero-3

  # Load Balancer
  nginx:
    image: nginx:alpine
    container_name: dgraph-nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/ssl:/etc/nginx/ssl
    restart: unless-stopped
    networks:
      - dgraph-network
    depends_on:
      - dgraph-alpha-1
      - dgraph-alpha-2
      - dgraph-alpha-3

  # Monitoring
  prometheus:
    image: prom/prometheus:latest
    container_name: dgraph-prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml
    restart: unless-stopped
    networks:
      - dgraph-network

  grafana:
    image: grafana/grafana:latest
    container_name: dgraph-grafana
    ports:
      - "3000:3000"
    volumes:
      - ./data/grafana:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=admin
    restart: unless-stopped
    networks:
      - dgraph-network

networks:
  dgraph-network:
    driver: bridge

volumes:
  dgraph-data:
3. Create Nginx Configuration

nginx/nginx.conf
events {
    worker_connections 1024;
}

http {
    upstream dgraph_alpha {
        least_conn;
        server dgraph-alpha-1:8080;
        server dgraph-alpha-2:8080;
        server dgraph-alpha-3:8080;
    }
    
    upstream dgraph_grpc {
        least_conn;
        server dgraph-alpha-1:9080;
        server dgraph-alpha-2:9080;
        server dgraph-alpha-3:9080;
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://dgraph_alpha;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
    
    # HTTPS configuration (uncomment and configure as needed)
    # server {
    #     listen 443 ssl http2;
    #     server_name your-domain.com;
    #     
    #     ssl_certificate /etc/nginx/ssl/cert.pem;
    #     ssl_certificate_key /etc/nginx/ssl/key.pem;
    #     
    #     location / {
    #         proxy_pass http://dgraph_alpha;
    #         proxy_set_header Host $host;
    #         proxy_set_header X-Real-IP $remote_addr;
    #         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #         proxy_set_header X-Forwarded-Proto $scheme;
    #     }
    # }
}
4. Create Prometheus Configuration

config/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'dgraph-alpha'
    static_configs:
      - targets: 
        - 'dgraph-alpha-1:8080'
        - 'dgraph-alpha-2:8080'
        - 'dgraph-alpha-3:8080'
    metrics_path: '/debug/prometheus_metrics'
    
  - job_name: 'dgraph-zero'
    static_configs:
      - targets: 
        - 'dgraph-zero-1:6080'
        - 'dgraph-zero-2:6080'
        - 'dgraph-zero-3:6080'
    metrics_path: '/debug/prometheus_metrics'

2. Deploy and Manage Docker Compose Cluster

# Start the entire cluster
docker-compose up -d

# Check status
docker-compose ps

# View logs
docker-compose logs -f dgraph-alpha-1
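Because Compose keeps all cluster state under the local ./data tree created in step 1, a cold backup is just an archive of that directory. A minimal sketch — stop the cluster first (docker-compose stop) so Badger's on-disk files are not mid-write, and note that `backup_compose_data` is an illustrative name, not a Dgraph command.

```shell
#!/usr/bin/env bash
# Cold backup of the Compose data directory into a timestamped archive.
backup_compose_data() {
  local src=$1 dest=$2
  tar -czf "$dest/dgraph-backup-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$src" .
}

# usage, from the dgraph-compose directory:
#   docker-compose stop
#   backup_compose_data ./data ./backups
#   docker-compose start
```

For online backups without downtime, use the export mutation against a running Alpha instead.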

Linux VPS Deployment

1. VPS Infrastructure Setup

1. Provision VPS Instances

Create 3-5 VPS instances with the following specifications:
  • CPU: 4-8 cores
  • RAM: 16-32 GB
  • Storage: 500 GB+ SSD
  • OS: Ubuntu 22.04 LTS
  • Network: Private networking enabled
2. Configure Base System

# Update system (run on all nodes)
sudo apt update && sudo apt upgrade -y

# Install required packages
sudo apt install -y curl wget unzip htop iotop

# Configure firewall
sudo ufw allow ssh
sudo ufw allow 8080   # Dgraph Alpha HTTP
sudo ufw allow 9080   # Dgraph Alpha gRPC
sudo ufw allow 5080   # Dgraph Zero
sudo ufw allow 6080   # Dgraph Zero HTTP
sudo ufw enable

# Set up swap (if needed)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

3. Install Dgraph

# Download and install Dgraph (run on all nodes)
curl -sSf https://get.dgraph.io | bash

# Move to system path
sudo mv dgraph /usr/local/bin/

# Create dgraph user
sudo useradd -r -s /bin/false dgraph

# Create directories
sudo mkdir -p /opt/dgraph/{data,logs}
sudo chown -R dgraph:dgraph /opt/dgraph
2. Configure Dgraph Services

# Create systemd service for Zero
sudo tee /etc/systemd/system/dgraph-zero.service << 'EOF'
[Unit]
Description=Dgraph Zero
After=network.target

[Service]
Type=simple
User=dgraph
Group=dgraph
ExecStart=/usr/local/bin/dgraph zero --my=10.0.1.10:5080 --replicas=3 --idx=1 --wal=/opt/dgraph/data/zw --bindall
WorkingDirectory=/opt/dgraph
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=dgraph-zero

[Install]
WantedBy=multi-user.target
EOF

# Create systemd service for Alpha
sudo tee /etc/systemd/system/dgraph-alpha.service << 'EOF'
[Unit]
Description=Dgraph Alpha
After=network.target dgraph-zero.service
Requires=dgraph-zero.service

[Service]
Type=simple
User=dgraph
Group=dgraph
ExecStart=/usr/local/bin/dgraph alpha --my=10.0.1.10:7080 --zero=10.0.1.10:5080,10.0.1.11:5080,10.0.1.12:5080 --postings=/opt/dgraph/data/p --wal=/opt/dgraph/data/w --bindall --security whitelist=0.0.0.0/0
WorkingDirectory=/opt/dgraph
Restart=always
RestartSec=5
StandardOutput=journal
StandardError=journal
SyslogIdentifier=dgraph-alpha

[Install]
WantedBy=multi-user.target
EOF

# Enable and start services
sudo systemctl daemon-reload
sudo systemctl enable dgraph-zero dgraph-alpha
sudo systemctl start dgraph-zero
sleep 10
sudo systemctl start dgraph-alpha

Data Migration and Import

1. Verify Cluster Status

kubectl get pods -n dgraph
kubectl port-forward -n dgraph svc/dgraph-dgraph-alpha 8080:8080 &
curl http://localhost:8080/health

2. Import Schema


kubectl port-forward -n dgraph svc/dgraph-dgraph-alpha 8080:8080 &
curl -X POST localhost:8080/admin/schema \
  -H "Content-Type: application/json" \
  -d @schema_backup.json

3. Import Data


kubectl run dgraph-live-loader \
  --image=dgraph/dgraph:v23.1.0 \
  --restart=Never \
  --namespace=dgraph \
  --command -- dgraph live \
  --files /data/export.rdf.gz \
  --alpha dgraph-dgraph-alpha:9080 \
  --zero dgraph-dgraph-zero:5080

4. Restore ACL Configuration

# Replace with your actual endpoint
DGRAPH_ENDPOINT="localhost:8080"  # Adjust for your deployment

curl -X POST $DGRAPH_ENDPOINT/admin \
  -H "Content-Type: application/json" \
  -d '{"query": "mutation { addUser(input: {name: \"admin\", password: \"password\"}) { user { name } } }"}'


Post-Migration Verification

Data Integrity Checklist

  • Count total nodes and compare with original
  • Verify specific data samples
  • Test query performance
  • Validate application connections

1. Data Integrity Check

curl -X POST localhost:8080/query \
-H "Content-Type: application/json" \
-d '{"query": "{ nodeCount(func: has(_predicate_)) { count(uid) } }"}'
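To turn the count check into a pass/fail step in a migration runbook, a small comparison sketch can help — both counts are supplied by you (one from the source export, one parsed from the query response above); nothing here talks to Dgraph itself, and `compare_counts` is an illustrative name.

```shell
#!/usr/bin/env bash
# Compare the node count from the source cluster with the migrated count;
# prints PASS/FAIL and exits non-zero on mismatch.
compare_counts() {
  local source=$1 target=$2
  if [ "$source" -eq "$target" ]; then
    echo "PASS: $target nodes match source"
  else
    echo "FAIL: source=$source target=$target"
    return 1
  fi
}

# usage: compare_counts "$SOURCE_COUNT" "$TARGET_COUNT"
```

Wiring this into CI or a post-migration script gives you a recorded, repeatable verification rather than an eyeball check.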

2. Performance Testing

time curl -X POST localhost:8080/query \
  -H "Content-Type: application/json" \
  -d '{"query": "{ users(func: allofterms(name, \"john\")) { name email } }"}'

3. Application Connection Testing

kubectl run test-connection \
  --image=appropriate/curl \
  --restart=Never \
  --namespace=dgraph \
  --command -- curl -X POST dgraph-dgraph-alpha:8080/health

Deployment Comparison and Best Practices

Deployment Method Comparison

Kubernetes

Best For: Production environments, auto-scaling, enterprise deployments

Pros:
  • Automatic scaling and healing
  • Rolling updates
  • Resource management
  • Service discovery
  • Built-in monitoring

Cons:
  • Complex setup
  • Higher resource overhead
  • Learning curve
  • Vendor lock-in potential

Docker Compose

Best For: Development, testing, small to medium production

Pros:
  • Simple deployment
  • Easy local development
  • Version control friendly
  • Quick setup and teardown
  • Cost-effective

Cons:
  • Limited scaling options
  • No automatic failover
  • Single host limitation
  • Manual monitoring setup

Linux VPS

Best For: Full control, cost optimization, legacy environments

Pros:
  • Maximum control
  • Cost-effective
  • No abstraction layers
  • Predictable performance
  • Easy debugging

Cons:
  • Manual scaling
  • More maintenance overhead
  • No built-in redundancy
  • Manual backup management

Cost Analysis

AWS EKS

Small: $300-500/month
  • EKS cluster: $72
  • 3 x t3.large: $150
  • EBS storage: $50
  • Load balancer: $25

Medium: $800-1200/month
  • EKS cluster: $72
  • 6 x t3.xlarge: $600
  • EBS storage: $150
  • Load balancer: $25

Large: $2000-3000/month
  • EKS cluster: $72
  • 9+ x m5.2xlarge: $1800
  • EBS storage: $300
  • Load balancer: $50

Docker Compose VPS

Small: $50-100/month
  • 1 x 8vCPU/32GB: $80

Medium: $150-250/month
  • 1 x 16vCPU/64GB: $160
  • Or 2 x smaller: $120

Large: $300-500/month
  • 3 x 12vCPU/48GB: $360

Linux VPS

Small: $100-150/month
  • 3 x 4vCPU/16GB: $120
  • 1 x load balancer: $20

Medium: $250-400/month
  • 3 x 8vCPU/32GB: $300
  • 1 x load balancer: $40

Large: $500-800/month
  • 6 x 8vCPU/32GB: $600
  • 2 x load balancers: $80
  • 1 x monitoring: $40

Monitoring and Maintenance

1. Setup Monitoring Stack

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace

2. Backup Strategy

Set up automated daily backups to ensure data protection.
# Note: $(date +%Y-%m-%d) expands once, when the CronJob is created; use a
# fixed prefix or a small wrapper script if each run needs its own date stamp
kubectl create cronjob dgraph-backup \
  --image=dgraph/dgraph:v23.1.0 \
  --schedule="0 2 * * *" \
  --restart=OnFailure \
  --namespace=dgraph \
  -- dgraph export \
  --alpha dgraph-dgraph-alpha:9080 \
  --destination s3://your-backup-bucket/$(date +%Y-%m-%d)
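Daily exports accumulate quickly, so pair the backup job with retention. A sketch that deletes local archives older than N days — for S3 destinations, a bucket lifecycle rule is the equivalent; the 14-day window and `prune_backups` name are assumptions for illustration.

```shell
#!/usr/bin/env bash
# Delete backup archives older than the given retention window (in days).
prune_backups() {
  local dir=$1 retention_days=$2
  find "$dir" -name '*.tar.gz' -type f -mtime +"$retention_days" -delete
}

# usage: prune_backups /opt/dgraph/backups 14
```

Always test a restore from a retained backup before trusting the pruning schedule in production.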

Troubleshooting


Additional Resources

Dgraph Operational Runbooks

The following runbooks provide operational guidance for various scenarios you may encounter during and after migration.

Migration Validation Checklist

Post-Migration Validation

Use this checklist to ensure your migration was successful:

Data Integrity
  • Total node count matches source
  • Random data samples verified
  • Schema imported correctly
  • Indexes functioning properly
Performance
  • Query response times acceptable
  • Throughput meets requirements
  • Resource utilization within limits
  • No memory leaks detected
Operations
  • Monitoring and alerting active
  • Backup procedures tested
  • Scaling mechanisms verified
  • Security policies enforced
Application Integration
  • All clients connecting successfully
  • Authentication working
  • API endpoints responding
  • Load balancing functional

Conclusion

Migration Success Factors

  • Thorough planning and testing
  • Proper resource provisioning
  • Systematic data migration
  • Comprehensive monitoring setup
  • Regular backup procedures
This migration guide provides comprehensive steps for moving from Dgraph Cloud and Hypermode Graph to self-hosted clusters across major cloud providers. The combination of detailed export procedures, multiple deployment options, and extensive operational guidance ensures a successful migration regardless of your technical requirements or constraints.
This migration guide is a living document. Please contribute improvements, report issues, or share your experiences to help the community. For additional support, join the Dgraph community or consult the operational runbooks in the Hypermode ops-runbooks repository.