Deployment Guide

Overview

This guide covers the deployment of the AFA Systems Presence Detection system in various environments, from development setups to production deployments.

System Requirements

Minimum Requirements

  • CPU: 2 cores
  • Memory: 4GB RAM
  • Storage: 20GB available space
  • Network: Reliable MQTT broker connection
  • Operating System: Linux (Ubuntu 20.04+, CentOS 8+, RHEL 8+), macOS, or Windows 10+
Recommended Requirements

  • CPU: 4 cores
  • Memory: 8GB RAM
  • Storage: 50GB SSD
  • Network: Gigabit ethernet, low-latency MQTT connection
  • Operating System: Ubuntu 22.04 LTS or RHEL 9+

Software Dependencies

  • Docker: 20.10+ with Docker Compose v2.0+
  • Git: For source code management
  • Go: 1.24+ (for local development)
  • MQTT Broker: Compatible with BLE gateway devices (Mosquitto, EMQX, HiveMQ)
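Before installing, it can help to verify these versions with a small pre-flight script. This sketch is illustrative, not project tooling: the `version_ge` helper and the version-extraction pipelines are assumptions, and any missing tool is simply reported rather than installed.

```shell
#!/bin/sh
# Pre-flight check for the dependency versions listed above.
# version_ge INSTALLED REQUIRED: returns 0 when INSTALLED >= REQUIRED,
# comparing dotted version strings with sort -V.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# check NAME REQUIRED INSTALLED: report OK / OLD / MISSING for one tool.
check() {
    if [ -z "$3" ]; then
        echo "MISSING: $1 (need $2+)"
    elif version_ge "$3" "$2"; then
        echo "OK: $1 $3"
    else
        echo "OLD: $1 $3 (need $2+)"
    fi
}

check docker 20.10 "$(docker version --format '{{.Server.Version}}' 2>/dev/null)"
check go 1.24 "$(go version 2>/dev/null | sed 's/.*go\([0-9.]*\).*/\1/')"
check git 2.0 "$(git --version 2>/dev/null | awk '{print $3}')"
```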

Deployment Options

1. Docker Compose Deployment

The Docker Compose deployment provides a complete, production-ready environment with all services included.

Quick Start

  1. Clone the repository:

    git clone https://github.com/AFASystems/presence.git
    cd presence
    
  2. Create environment configuration:

    cp .env.example .env
    # Edit .env with your configuration
    
  3. Start all services:

    cd build
    docker-compose up -d
    
  4. Verify deployment:

    docker-compose ps
    docker-compose logs -f
    
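Once the containers are up, the deployment can be smoke-tested by polling the API health endpoint used later in the Troubleshooting section. The `wait_for_http` helper below is an illustrative sketch, not project tooling:

```shell
#!/bin/sh
# wait_for_http URL [ATTEMPTS]: poll URL until it returns HTTP 200,
# retrying every 2 seconds; fails after ATTEMPTS tries (default 30).
wait_for_http() {
    attempts=${2:-30}
    i=0
    while [ "$i" -lt "$attempts" ]; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "$1" 2>/dev/null || true)
        if [ "$code" = "200" ]; then
            echo "up: $1"
            return 0
        fi
        i=$((i + 1))
        sleep 2
    done
    echo "still down after $attempts attempts: $1" >&2
    return 1
}

# Example: wait_for_http http://localhost:8080/api/health
```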

Environment Configuration

Create a .env file in the project root:

# Web Server Configuration
HTTP_HOST_PATH=:8080
HTTP_WS_HOST_PATH=:8081

# MQTT Configuration
MQTT_HOST=tcp://your-mqtt-broker:1883
MQTT_USERNAME=your_mqtt_username
MQTT_PASSWORD=your_mqtt_password
MQTT_CLIENT_ID=presence_system

# Kafka Configuration
KAFKA_URL=kafka:29092
KAFKA_BOOTSTRAP_SERVERS=kafka:29092
KAFKA_GROUP_ID=presence_group

# Database Configuration
DB_PATH=./volumes/presence.db

# Redis Configuration
REDIS_URL=redis:6379
REDIS_PASSWORD=

# Security
CORS_ORIGINS=http://localhost:3000,http://localhost:8080
API_SECRET_KEY=your-secret-key-here

# Logging
LOG_LEVEL=info
LOG_FORMAT=json
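Since a missing key usually surfaces only as a runtime failure inside a container, a pre-start check over the required keys can catch typos early. `check_env_file` is a hypothetical helper, and the key list shown is a subset of the variables above:

```shell
#!/bin/sh
# check_env_file PATH: verify that the .env file at PATH defines every
# key in the list; prints each missing key and returns non-zero if any.
check_env_file() {
    missing=0
    for key in MQTT_HOST KAFKA_URL REDIS_URL DB_PATH API_SECRET_KEY; do
        if ! grep -q "^${key}=" "$1"; then
            echo "missing: $key" >&2
            missing=1
        fi
    done
    return $missing
}

# Example: check_env_file .env || exit 1
```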

Docker Compose Services

The default docker-compose.yaml includes:

| Service | Description | Ports | Resources |
|---------|-------------|-------|-----------|
| kafka | Apache Kafka message broker | 9092, 9093 | 2GB RAM |
| kafdrop | Kafka monitoring UI | 9000 | 512MB RAM |
| presence-bridge | MQTT to Kafka bridge | - | 256MB RAM |
| presence-decoder | BLE beacon decoder | - | 512MB RAM |
| presence-location | Location calculation service | - | 512MB RAM |
| presence-server | HTTP API & WebSocket server | 8080, 8081 | 1GB RAM |
| redis | Caching and session storage | 6379 | 512MB RAM |

Production Docker Compose

For production deployment, create docker-compose.prod.yaml (started together with the base file via docker compose -f docker-compose.yaml -f docker-compose.prod.yaml up -d):

version: "3.8"

services:
  kafka:
    image: apache/kafka:3.9.0
    restart: always
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: INTERNAL://:29092,EXTERNAL://:9092,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://kafka.example.com:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      # Single broker: replication-sensitive settings must stay at 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_NUM_PARTITIONS: 6
      KAFKA_DEFAULT_REPLICATION_FACTOR: 1
      KAFKA_MIN_INSYNC_REPLICAS: 1
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true"
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_LOG_RETENTION_HOURS: 168
      KAFKA_LOG_RETENTION_BYTES: 1073741824
    volumes:
      - kafka-data:/var/lib/kafka/data
    healthcheck:
      # Required: dependent services use `condition: service_healthy`
      test: ["CMD-SHELL", "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:29092 > /dev/null 2>&1"]
      interval: 15s
      timeout: 10s
      retries: 10
    deploy:
      resources:
        limits:
          memory: 4G
        reservations:
          memory: 2G

  presence-bridge:
    build:
      context: ..
      dockerfile: build/package/Dockerfile.bridge
    restart: always
    environment:
      - HTTP_HOST_PATH=${HTTP_HOST_PATH}
      - MQTT_HOST=${MQTT_HOST}
      - MQTT_USERNAME=${MQTT_USERNAME}
      - MQTT_PASSWORD=${MQTT_PASSWORD}
      - KAFKA_URL=${KAFKA_URL}
      - LOG_LEVEL=${LOG_LEVEL}
    volumes:
      - ../volumes:/app/volumes
    depends_on:
      kafka:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

  presence-decoder:
    build:
      context: ..
      dockerfile: build/package/Dockerfile.decoder
    restart: always
    environment:
      - HTTP_HOST_PATH=${HTTP_HOST_PATH}
      - KAFKA_URL=${KAFKA_URL}
      - REDIS_URL=${REDIS_URL}
      - LOG_LEVEL=${LOG_LEVEL}
    volumes:
      - ../volumes:/app/volumes
    depends_on:
      kafka:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

  presence-location:
    build:
      context: ..
      dockerfile: build/package/Dockerfile.location
    restart: always
    environment:
      - HTTP_HOST_PATH=${HTTP_HOST_PATH}
      - KAFKA_URL=${KAFKA_URL}
      - REDIS_URL=${REDIS_URL}
      - LOG_LEVEL=${LOG_LEVEL}
    volumes:
      - ../volumes:/app/volumes
    depends_on:
      kafka:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

  presence-server:
    build:
      context: ..
      dockerfile: build/package/Dockerfile.server
    restart: always
    environment:
      - HTTP_HOST_PATH=${HTTP_HOST_PATH}
      - HTTP_WS_HOST_PATH=${HTTP_WS_HOST_PATH}
      - KAFKA_URL=${KAFKA_URL}
      - REDIS_URL=${REDIS_URL}
      - DB_PATH=/app/volumes/presence.db
      - CORS_ORIGINS=${CORS_ORIGINS}
      - API_SECRET_KEY=${API_SECRET_KEY}
      - LOG_LEVEL=${LOG_LEVEL}
    ports:
      - "8080:8080"
      - "8081:8081"
    volumes:
      - ../volumes:/app/volumes
      - ../web:/app/web
    depends_on:
      kafka:
        condition: service_healthy
      redis:
        condition: service_healthy
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G

  redis:
    image: redis:7-alpine
    restart: always
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis-data:/data
    healthcheck:
      # Required: dependent services use `condition: service_healthy`
      test: ["CMD-SHELL", "redis-cli --no-auth-warning -a \"${REDIS_PASSWORD}\" ping | grep -q PONG"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 1G
        reservations:
          memory: 512M

  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: always
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
      JVM_OPTS: "-Xms256M -Xmx512M"
    ports:
      - "9000:9000"
    depends_on:
      kafka:
        condition: service_healthy

  nginx:
    image: nginx:alpine
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - presence-server
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M

volumes:
  kafka-data:
  redis-data:

2. Kubernetes Deployment

For larger-scale deployments, use Kubernetes with Helm charts.

Prerequisites

  • Kubernetes cluster (v1.24+)
  • Helm 3.0+
  • kubectl configured
  • Persistent storage provisioner

Installation

  1. Create namespace:

    kubectl create namespace presence-system
    
  2. Add Helm repository:

    helm repo add presence-system https://charts.afasystems.com/presence
    helm repo update
    
  3. Install with custom values:

    helm install presence presence-system/presence-system \
      --namespace presence-system \
      --values values.prod.yaml
    

Custom Values (values.prod.yaml)

global:
  imageRegistry: your-registry.com
  imagePullSecrets: [your-registry-secret]

  mqtt:
    host: "tcp://mqtt-broker:1883"
    username: "your_username"
    password: "your_password"

kafka:
  enabled: true
  replicaCount: 3
  persistence:
    enabled: true
    size: 100Gi
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"

redis:
  enabled: true
  auth:
    enabled: true
    password: "your-redis-password"
  master:
    persistence:
      enabled: true
      size: 20Gi
  resources:
    requests:
      memory: "512Mi"
      cpu: "0.5"
    limits:
      memory: "1Gi"
      cpu: "1"

bridge:
  replicaCount: 2
  resources:
    requests:
      memory: "256Mi"
      cpu: "0.25"
    limits:
      memory: "512Mi"
      cpu: "0.5"

decoder:
  replicaCount: 3
  resources:
    requests:
      memory: "512Mi"
      cpu: "0.5"
    limits:
      memory: "1Gi"
      cpu: "1"

location:
  replicaCount: 3
  resources:
    requests:
      memory: "512Mi"
      cpu: "0.5"
    limits:
      memory: "1Gi"
      cpu: "1"

server:
  replicaCount: 2
  service:
    type: LoadBalancer
    ports:
      http: 80
      https: 443
      websocket: 8081
  ingress:
    enabled: true
    className: "nginx"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /
    hosts:
      - host: presence.example.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: presence-tls
        hosts:
          - presence.example.com
  resources:
    requests:
      memory: "1Gi"
      cpu: "0.5"
    limits:
      memory: "2Gi"
      cpu: "1"

monitoring:
  enabled: true
  prometheus:
    enabled: true
  grafana:
    enabled: true

autoscaling:
  enabled: true
  bridge:
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
  decoder:
    minReplicas: 3
    maxReplicas: 20
    targetCPUUtilizationPercentage: 70
  location:
    minReplicas: 3
    maxReplicas: 20
    targetCPUUtilizationPercentage: 70
  server:
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70

3. Manual Deployment

For custom environments or specific requirements.

Building from Source

  1. Prerequisites:

    # Install Go 1.24+
    wget https://go.dev/dl/go1.24.0.linux-amd64.tar.gz
    sudo tar -C /usr/local -xzf go1.24.0.linux-amd64.tar.gz
    export PATH=$PATH:/usr/local/go/bin
    
    # Install Node.js (for web assets)
    curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
    sudo apt-get install -y nodejs
    
  2. Clone and build:

    git clone https://github.com/AFASystems/presence.git
    cd presence
    
    # Build web assets
    cd web
    npm install
    npm run build
    
    # Build Go applications
    cd ..
    go mod download
    go build -o bin/bridge ./cmd/bridge
    go build -o bin/decoder ./cmd/decoder
    go build -o bin/location ./cmd/location
    go build -o bin/server ./cmd/server
    
  3. System service files (/etc/systemd/system/presence-bridge.service):

    [Unit]
    Description=Presence Bridge Service
    After=network.target
    
    [Service]
    Type=simple
    User=presence
    Group=presence
    WorkingDirectory=/opt/presence
    ExecStart=/opt/presence/bin/bridge
    Restart=always
    RestartSec=10
    Environment=HTTP_HOST_PATH=:8080
    Environment=MQTT_HOST=tcp://mqtt-broker:1883
    Environment=MQTT_USERNAME=your_username
    Environment=MQTT_PASSWORD=your_password
    Environment=KAFKA_URL=kafka:29092
    Environment=LOG_LEVEL=info
    
    [Install]
    WantedBy=multi-user.target
    

    Create similar service files for decoder, location, and server services.

  4. Enable and start services:

    sudo systemctl daemon-reload
    sudo systemctl enable presence-bridge
    sudo systemctl enable presence-decoder
    sudo systemctl enable presence-location
    sudo systemctl enable presence-server
    
    sudo systemctl start presence-bridge
    sudo systemctl start presence-decoder
    sudo systemctl start presence-location
    sudo systemctl start presence-server
    

Network Configuration

Firewall Rules

Ensure these ports are open:

| Port | Service | Description |
|------|---------|-------------|
| 80 | HTTP | Web interface (if using nginx) |
| 443 | HTTPS | Secure web interface |
| 8080 | HTTP API | REST API server |
| 8081 | WebSocket | Real-time updates |
| 9000 | Kafdrop | Kafka monitoring UI |
| 1883 | MQTT | MQTT broker (if external) |
| 9092 | Kafka | Kafka broker |

Load Balancer Configuration

HAProxy Example

frontend presence_frontend
    bind *:80
    default_backend presence_backend

backend presence_backend
    balance roundrobin
    server presence1 10.0.1.10:8080 check
    server presence2 10.0.1.11:8080 check

frontend presence_ws_frontend
    bind *:8081
    default_backend presence_ws_backend

backend presence_ws_backend
    balance roundrobin
    server presence1 10.0.1.10:8081 check
    server presence2 10.0.1.11:8081 check
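One caveat with the example above: WebSocket connections on 8081 are long-lived, and HAProxy's default client/server timeouts will cut them off. A defaults section along these lines, placed before the frontends (values are illustrative), keeps upgraded tunnels open:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # WebSocket connections are upgraded to tunnels; give them a long timeout
    timeout tunnel  1h
```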

Nginx Example

upstream presence_backend {
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}

upstream presence_ws_backend {
    server 10.0.1.10:8081;
    server 10.0.1.11:8081;
}

server {
    listen 80;
    server_name presence.example.com;

    location / {
        proxy_pass http://presence_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /ws {
        proxy_pass http://presence_ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

SSL/TLS Configuration

Let’s Encrypt with Certbot

  1. Install Certbot:

    sudo apt-get update
    sudo apt-get install certbot python3-certbot-nginx
    
  2. Generate certificates:

    sudo certbot --nginx -d presence.example.com
    
  3. Auto-renewal:

    sudo certbot renew --dry-run   # verify renewal works first
    sudo crontab -e
    # Add: 0 12 * * * /usr/bin/certbot renew --quiet
    

Nginx SSL Configuration

server {
    listen 443 ssl http2;
    server_name presence.example.com;

    ssl_certificate /etc/letsencrypt/live/presence.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/presence.example.com/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_session_cache shared:SSL:10m;

    location / {
        proxy_pass http://presence_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /ws {
        proxy_pass http://presence_ws_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
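With TLS terminating on 443, the plain-HTTP server block is commonly reduced to a redirect; a minimal companion block (same server_name as above) might be:

```nginx
server {
    listen 80;
    server_name presence.example.com;
    # Redirect all plain-HTTP traffic to the TLS listener above
    return 301 https://$host$request_uri;
}
```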

Monitoring and Logging

Prometheus Configuration

Create prometheus.yml:

global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'presence-server'
    static_configs:
      - targets: ['presence-server:8080']
    metrics_path: /metrics
    scrape_interval: 5s

  # Kafka and Redis do not serve Prometheus metrics on their broker ports;
  # deploy exporters (e.g. kafka_exporter, redis_exporter) and scrape those.
  - job_name: 'kafka'
    static_configs:
      - targets: ['kafka-exporter:9308']

  - job_name: 'redis'
    static_configs:
      - targets: ['redis-exporter:9121']

Grafana Dashboard

Import the provided Grafana dashboard or create custom visualizations:

  1. System Metrics: CPU, Memory, Disk usage
  2. Kafka Metrics: Throughput, lag, topic sizes
  3. Beacon Metrics: Active beacons, location updates, battery levels
  4. API Metrics: Request rates, response times, error rates
  5. WebSocket Metrics: Active connections, message rates

Log Aggregation with ELK Stack

Elasticsearch Configuration

version: '3.8'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
    ports:
      - "9200:9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data

  logstash:
    image: docker.elastic.co/logstash/logstash:8.5.0
    ports:
      - "5044:5044"
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.5.0
    ports:
      - "5601:5601"
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data:
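The compose file above mounts a ./logstash.conf that is not shown. A minimal pipeline sketch, assuming the services ship their JSON logs via Beats on port 5044 (an assumption, since the log shippers are not specified) and that LOG_FORMAT=json as configured earlier, could be:

```
input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "presence-%{+YYYY.MM.dd}"
  }
}
```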

Backup and Recovery

Database Backup

#!/bin/bash
# backup.sh

BACKUP_DIR="/backups/presence"
DATE=$(date +%Y%m%d_%H%M%S)
DB_FILE="/opt/presence/volumes/presence.db"

mkdir -p $BACKUP_DIR

# Backup BoltDB
cp $DB_FILE $BACKUP_DIR/presence_$DATE.db

# Compress old backups
find $BACKUP_DIR -name "presence_*.db" -mtime +7 -exec gzip {} \;

# Keep only last 30 days
find $BACKUP_DIR -name "presence_*.db.gz" -mtime +30 -delete
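backup.sh is typically run on a schedule; a nightly crontab entry might look like the following (the script path is illustrative and should match where the repository's scripts live):

```
0 2 * * * /opt/presence/scripts/backup.sh >> /var/log/presence-backup.log 2>&1
```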

Restore from Backup

#!/bin/bash
# restore.sh

BACKUP_FILE=$1
DB_FILE="/opt/presence/volumes/presence.db"

if [ -z "$BACKUP_FILE" ]; then
    echo "Usage: $0 <backup_file>"
    exit 1
fi

# Stop services
sudo systemctl stop presence-server presence-decoder presence-location

# Restore database
sudo cp $BACKUP_FILE $DB_FILE
sudo chown presence:presence $DB_FILE

# Start services
sudo systemctl start presence-server presence-decoder presence-location

Kafka Backup

#!/bin/bash
# kafka_backup.sh

BACKUP_DIR="/backups/kafka"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR

# Backup Kafka topics
kafka-topics.sh --bootstrap-server localhost:9092 --list > $BACKUP_DIR/topics_$DATE.txt

# Backup topic data (example)
for topic in rawbeacons alertbeacons locevents; do
    kafka-console-consumer.sh --bootstrap-server localhost:9092 \
      --topic $topic --from-beginning --timeout-ms 10000 \
      > $BACKUP_DIR/${topic}_$DATE.json
done

Performance Tuning

Kafka Optimization

# Kafka server.properties
num.network.threads=8
num.io.threads=16
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

Redis Optimization

# redis.conf
maxmemory 2gb
maxmemory-policy allkeys-lru
save 900 1
save 300 10
save 60 10000

Application Tuning

Environment variables for performance:

# Go runtime
GOMAXPROCS=4
GOGC=100

# Application settings
BEACON_METRICS_SIZE=50
LOCATION_CONFIDENCE=90
KAFKA_CONSUMER_FETCH_MAX_BYTES=1048576
REDIS_POOL_SIZE=50

Security Hardening

Network Security

  1. Network Segmentation:

    # Create dedicated network for presence system
    docker network create --driver bridge presence-network
    docker network connect presence-network presence-server
    
  2. Port Security:

    # Only expose necessary ports
    ufw allow 80/tcp
    ufw allow 443/tcp
    ufw allow 8080/tcp
    ufw deny 9092/tcp  # Kafka internal only
    

Application Security

  1. Environment Variable Security:

    # Use secrets management
    export API_SECRET_KEY=$(openssl rand -base64 32)
    export MQTT_PASSWORD_FILE=/run/secrets/mqtt_password
    
  2. Container Security:

    # Docker Compose security settings
    services:
      presence-server:
        security_opt:
          - no-new-privileges:true
        read_only: true
        tmpfs:
          - /tmp
        user: "1000:1000"
        cap_drop:
          - ALL
        cap_add:
          - NET_BIND_SERVICE
    

Troubleshooting

Common Issues

Service Won’t Start

# Check logs
docker-compose logs presence-server
sudo journalctl -u presence-bridge -f

# Check configuration
docker-compose config

Kafka Connection Issues

# Check Kafka health
docker-compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 --list

# Check network connectivity
docker network ls
docker network inspect presence_build

High Memory Usage

# Monitor memory usage
docker stats

# Inspect the Go heap profile (assumes net/http/pprof is enabled on :6060
# inside the container and the Go toolchain is available)
docker-compose exec presence-server go tool pprof http://localhost:6060/debug/pprof/heap

WebSocket Connection Issues

# Test WebSocket connection (the WebSocket server listens on 8081)
wscat -c ws://localhost:8081/ws/broadcast

# Check logs for connection issues
docker-compose logs presence-server | grep websocket

Performance Diagnostics

System Monitoring

# System resources
htop
iostat -x 1
netstat -i

# Application performance
curl http://localhost:8080/api/health
docker-compose exec presence-server curl http://localhost:8080/api/health

Database Performance

# BoltDB inspection (bbolt CLI: go install go.etcd.io/bbolt/cmd/bbolt@latest)
bbolt stats ./volumes/presence.db

# Redis monitoring
redis-cli info memory
redis-cli info stats
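When scripting against this output, note that redis-cli info emits CRLF-terminated key:value lines; a small helper makes individual fields easy to extract (`info_field` is a hypothetical name):

```shell
#!/bin/sh
# info_field FIELD: read "key:value" lines (as emitted by `redis-cli info`)
# on stdin and print the value for FIELD, stripping the trailing CR.
info_field() {
    grep "^$1:" | cut -d: -f2 | tr -d '\r'
}

# Example: redis-cli info memory | info_field used_memory_human
```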

Migration and Upgrades

Version Upgrade Procedure

  1. Backup current system:

    ./scripts/backup.sh
    
  2. Update source code:

    git fetch origin
    git checkout v2.0.0
    
  3. Build new images:

    docker-compose build --no-cache
    
  4. Run database migrations:

    docker-compose run --rm presence-server ./migrate up
    
  5. Restart services with zero downtime:

    # Note: scaling past one replica requires removing the fixed host
    # port mapping (8080:8080) and fronting with a load balancer
    docker-compose up -d --scale presence-server=2
    # Wait for health checks
    docker-compose up -d --scale presence-server=1
    

Configuration Migration

Create migration scripts for configuration changes:

#!/bin/bash
# migrate_v1_to_v2.sh

# Backup old config
cp .env .env.backup

# Add new configuration variables
echo "NEW_FEATURE_ENABLED=true" >> .env
echo "API_RATE_LIMIT=100" >> .env

# Update deprecated settings
sed -i 's/OLD_SETTING/NEW_SETTING/g' .env

echo "Migration completed. Please review .env file."
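Because the script above appends variables unconditionally, running it twice duplicates entries. A guarded append keeps migrations idempotent (`set_default` is a hypothetical helper, not part of the project):

```shell
#!/bin/sh
# set_default KEY VALUE FILE: append KEY=VALUE only if KEY is absent,
# so re-running a migration script does not duplicate entries.
set_default() {
    grep -q "^$1=" "$3" || echo "$1=$2" >> "$3"
}

# Example: set_default NEW_FEATURE_ENABLED true .env
```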

This comprehensive deployment guide should help you successfully deploy and manage the AFA Systems Presence Detection system in various environments.