ble-ai-localizer Docker container project
Environment: MN reslevis 192.168.1.3, m1.MajorNet-x64.6.6.0-60.bin: 03 October 2025, Gentoo Linux server, kernel 6.6.74-gentoo-x86_64, default Python version 3.10.16
cd /data/service/
########################################################################################### Step 1: Create the project structure (on the server)
mkdir -p ble-ai-localizer/{app,config,data,models}
cd /data/service/ble-ai-localizer
touch Dockerfile requirements.txt entrypoint.sh docker-compose.yml
touch app/{__init__.py,main.py,settings.py,auth.py,mqtt_client.py,train.py,infer.py}
touch config/config.yaml config/secrets.yaml
What you now have:
app/     Python code (training + inference)
config/  configuration (config.yaml, versioned + secrets.yaml, not versioned)
data/    fingerprint dataset (volume)
models/  saved model (volume)
########################################################################################### Step 2: .gitignore (recommended right away)
If you use git:
cat > .gitignore << 'EOF'
config/secrets.yaml
data/
models/
__pycache__/
*.pyc
.venv/
.vscode/
.idea/
EOF
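Optionally, verify that the ignore rules actually match (assumes the repo has already been initialized with git init):
git check-ignore -v config/secrets.yaml data/ models/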
########################################################################################### Step 3: Check Docker prerequisites (Gentoo)
Run:
docker --version
docker compose version
If docker compose does not exist, on Gentoo you usually:
either have the docker-cli-compose plugin,
or use docker-compose (legacy). In that case let me know and I'll adapt the commands.
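A quick check to tell which variant you have (a sketch, assuming a standard PATH):
docker compose version >/dev/null 2>&1 && echo "compose v2 plugin" \
  || { command -v docker-compose >/dev/null 2>&1 && echo "legacy docker-compose" || echo "no compose found"; }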
Once you have run the commands from Steps 1-3, in the next step I'll have you create:
requirements.txt, minimal + pinned
a compatible Dockerfile (python:3.10-slim)
docker-compose.yml with volumes (config/, data/, models/) and a MODE=train|infer variable
If you want to move ahead right away without feedback, paste here the output of:
docker --version
docker compose version
Docker version 27.4.1, build b9d17eaebb
Docker Compose version v2.28.1
Perfect, you're all set (Docker 27.4.1 + Compose v2.28.1).
########################################################################################### Step 4: Populate the base files (requirements, Dockerfile, entrypoint, compose)
4.1 requirements.txt
Put this inside (pinned and minimal for your use case):
cat > requirements.txt << 'EOF'
pandas==2.2.2
numpy==1.26.4
scikit-learn==1.5.1
joblib==1.4.2
paho-mqtt==2.1.0
requests==2.32.3
PyYAML==6.0.2
fastapi==0.115.0
uvicorn[standard]==0.30.6
matplotlib==3.9.2
seaborn==0.13.2
EOF
(If we later decide the plots are not needed in production, we'll drop them.)
4.2 Dockerfile (pinned Python, non-root, robust)
cat > Dockerfile << 'EOF'
FROM python:3.10-slim

RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential gcc g++ \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

COPY app/ /app/app/
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh

RUN useradd -m appuser
USER appuser

RUN mkdir -p /home/appuser/data /home/appuser/models
ENV DATA_DIR=/home/appuser/data
ENV MODELS_DIR=/home/appuser/models

ENTRYPOINT ["/app/entrypoint.sh"]
EOF
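Optionally, a quick sanity check of the freshly built image before wiring up compose (a sketch; the tag matches the one used later in docker-compose.yml, and the entrypoint is bypassed on purpose):
docker build -t ble-ai-localizer:0.1.0 .
docker run --rm --entrypoint python ble-ai-localizer:0.1.0 \
  -c "import sklearn, pandas, fastapi; print('deps ok')"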
4.3 entrypoint.sh (MODE=train|infer|api)
cat > entrypoint.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail

MODE="${MODE:-infer}"

case "$MODE" in
  train) exec python -m app.main train ;;
  infer) exec python -m app.main infer ;;
  api)   exec uvicorn app.main:api --host 0.0.0.0 --port "${PORT:-8080}" ;;
  *)
    echo "Invalid MODE: $MODE (use train|infer|api)"
    exit 1
    ;;
esac
EOF
Then make it executable:
chmod +x entrypoint.sh
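You can exercise the MODE dispatch locally without Python, since the invalid branch exits before any exec (a sketch):
MODE=bogus ./entrypoint.sh; echo "exit code: $?"
# expected: Invalid MODE: bogus (use train|infer|api), then exit code: 1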
4.4 docker-compose.yml (volumes + external config)
cat > docker-compose.yml << 'EOF'
services:
  ble-ai-localizer:
    build: .
    image: ble-ai-localizer:0.1.0
    environment:
      MODE: "infer"
      CONFIG_FILE: "/app/config/config.yaml"
      SECRETS_FILE: "/app/config/secrets.yaml"
    volumes:
      - ./config:/app/config:ro
      - ./data:/data
      - ./models:/models
    restart: unless-stopped
EOF
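Before building, it is worth letting Compose validate and expand the file; this catches YAML indentation slips early:
docker compose config --quiet && echo "compose file OK"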
4.5 config/config.yaml (skeleton)
cat > config/config.yaml << 'EOF'
mqtt:
  host: "mosquitto"
  port: 1883
  topic: "ble/raw"

api:
  get_gateways_url: "https://APIHOST:5050/reslevis/getGateways"
  verify_tls: false
  refresh_seconds: 300

oidc:
  token_url: "https://KEYCLOAK/realms/REALM/protocol/openid-connect/token"
  client_id: "Fastapi"
  audience: "Fastapi"

paths:
  dataset: "/data/fingerprint.parquet"
  model: "/models/model.joblib"

ml:
  method: "knn"
  k: 7
  weights: "distance"
  metric: "euclidean"
EOF
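A quick parse check of the skeleton, using the same loader the app will use (assumes PyYAML is available on the host; otherwise run it inside the container after the build):
python3 -c "import yaml; print(yaml.safe_load(open('config/config.yaml')))"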
4.6 config/secrets.yaml (placeholder, do not version)
cat > config/secrets.yaml << 'EOF'
oidc:
  client_secret: "CHANGE_ME"
  username: "CHANGE_ME"
  password: "CHANGE_ME"
EOF
########################################################################################### Step 5: Add a minimal main to verify the container starts
app/main.py
cat > app/main.py << 'EOF'
from fastapi import FastAPI

from .settings import load_settings

api = FastAPI()


@api.get("/health")
def health():
    return {"status": "ok"}


def main():
    import sys

    settings = load_settings()
    print("Settings loaded. Keys:", list(settings.keys()))

    if len(sys.argv) < 2:
        raise SystemExit("Usage: python -m app.main [train|infer]")

    cmd = sys.argv[1].lower()
    if cmd == "train":
        print("TRAIN mode (placeholder)")
    elif cmd == "infer":
        print("INFER mode (placeholder)")
    else:
        raise SystemExit("Unknown command")


if __name__ == "__main__":
    main()
EOF
app/settings.py
cat > app/settings.py << 'EOF'
import os
from pathlib import Path

import yaml


def _read_yaml(path: str) -> dict:
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f) or {}


def deep_merge(a: dict, b: dict) -> dict:
    out = dict(a or {})
    for k, v in (b or {}).items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = deep_merge(out[k], v)
        else:
            out[k] = v
    return out


def load_settings() -> dict:
    cfg_path = os.getenv("CONFIG_FILE", "/app/config/config.yaml")
    settings = _read_yaml(cfg_path)

    secrets_path = os.getenv("SECRETS_FILE", "")
    if secrets_path and Path(secrets_path).exists():
        secrets = _read_yaml(secrets_path)
        settings = deep_merge(settings, secrets)

    # fallback paths (consistent with compose)
    settings.setdefault("paths", {})
    settings["paths"].setdefault("dataset", os.getenv("DATASET_PATH", "/data/fingerprint.parquet"))
    settings["paths"].setdefault("model", os.getenv("MODEL_PATH", "/models/model.joblib"))
    return settings
EOF
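To see how the config/secrets merge behaves, a standalone check of deep_merge (run from the project root so app is importable; assumes PyYAML is installed locally, since app.settings imports it):
python3 - <<'PY'
from app.settings import deep_merge

base = {"oidc": {"client_id": "Fastapi"}, "mqtt": {"port": 1883}}
over = {"oidc": {"client_secret": "s3cret"}}

# nested dicts are merged key by key; values from the second argument win
print(deep_merge(base, over))
# {'oidc': {'client_id': 'Fastapi', 'client_secret': 's3cret'}, 'mqtt': {'port': 1883}}
PY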
(The other files can remain empty for now.)
########################################################################################### Step 6: Build & smoke test
docker compose build
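After the build, a smoke test of the two placeholder modes (a sketch; service name as defined in docker-compose.yml):
docker compose run --rm -e MODE=train ble-ai-localizer   # expect: TRAIN mode (placeholder)
docker compose run --rm -e MODE=infer ble-ai-localizer   # expect: INFER mode (placeholder)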
###########################################################################################
How to start ONLY ble-ai-localizer (without impacting the others)
Go to the project directory: cd /data/service/ble-ai-localizer
Start in the background: docker compose -p ble-ai-localizer up -d
Why -p ble-ai-localizer? It sets the project name explicitly, so you are 100% sure you don't accidentally attach to another compose project.
Check status (this project only): docker compose -p ble-ai-localizer ps
Logs (this project only): docker compose -p ble-ai-localizer logs -f
How to stop/restart ONLY ble-ai-localizer
Stop (does not remove containers): docker compose -p ble-ai-localizer stop
Restart: docker compose -p ble-ai-localizer restart
Stop + remove the project's containers/network (does NOT touch the bind volumes ./data, ./models): docker compose -p ble-ai-localizer down
Updating only your container (code or Dockerfile changed)
Rebuild the image: docker compose -p ble-ai-localizer build
Restart with the new image: docker compose -p ble-ai-localizer up -d
To force rebuild+restart in one shot: docker compose -p ble-ai-localizer up -d --build
Exporting the image (backup or deploy to another server)
Example export to tar (better gzipped):
docker save ble-ai-localizer:0.1.0 | gzip > ble-ai-localizer_0.1.0.tar.gz
On another server:
gzip -dc ble-ai-localizer_0.1.0.tar.gz | docker load
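On the destination server, a quick check that the image arrived with the expected tag (a sketch):
docker images ble-ai-localizer
docker image inspect ble-ai-localizer:0.1.0 --format '{{.Id}} {{.Created}}'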
# Debug
mosquitto_sub -v -h 192.168.1.101 -p 1883 -t '#' -V mqttv311 | grep publish_out
docker compose -p ble-ai-localizer exec -T ble-ai-localizer ls -l /data/config/
# Case of a gateway that detects no beacons
publish_out/ac233fc1dcd3 [{"timestamp":"2026-01-30T13:40:08.885Z","type":"Gateway","mac":"AC233FC1DCD3","nums":0}]
# Checking the model in use:
# Ver 1
docker compose -p ble-ai-localizer exec -T ble-ai-localizer python - <<'PY'
import hashlib, joblib, os
p = "/data/model/model.joblib"
b = open(p, "rb").read()
print(f"FILE: {p}")
print(f"sha256={hashlib.sha256(b).hexdigest()[:12]} size={len(b)} bytes")
m = joblib.load(p)
print("TYPE:", type(m))
for k in ["version", "nan_fill", "k_floor", "k_xy", "weights", "metric", "floors"]:
    print(f"{k}:", getattr(m, k, None))
gws = getattr(m, "feature_gateways", [])
print("gateways:", len(gws), "first:", gws[:5])
regs = getattr(m, "xy_regs", {})
print("xy_regs floors:", sorted(list(regs.keys())))
PY
# If you get: service "ble-ai-localizer" is not running, start the container first.
# Ver 2
docker compose -p ble-ai-localizer exec -T ble-ai-localizer python - <<'PY'
import joblib, pprint
m = joblib.load("/data/model/model.joblib")
keys = [
    "created_at_utc", "sklearn_version", "numpy_version",
    "gateways_order", "nan_fill", "k_floor", "k_xy", "weights", "metric", "floors",
]
pprint.pprint({k: m.get(k) for k in keys})
PY
# Example API server usage:
# Obtain the token:
TOKEN=$(
  curl -k -s -X POST "https://10.251.0.30:10002/realms/API.Server.local/protocol/openid-connect/token" \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "grant_type=password" \
    -d "client_id=Fastapi" \
    -d "client_secret=wojuoB7Z5xhlPFrF2lIxJSSdVHCApEgC" \
    -d "username=core" \
    -d "password=C0r3_us3r_Cr3d3nt14ls" \
    -d "audience=Fastapi" \
  | jq -r '.access_token'
)
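Optionally, decode the token payload to check expiry and audience before using it (a sketch; it restores the base64url padding that JWTs strip):
python3 - "$TOKEN" <<'PY'
import base64, json, sys

payload = sys.argv[1].split(".")[1]
payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
claims = json.loads(base64.urlsafe_b64decode(payload))
print("exp:", claims.get("exp"), "aud:", claims.get("aud"))
PY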
Use the token in an API call (note the double quotes around the Authorization header, so $TOKEN expands):
curl -k -X 'GET' \
  'https://10.251.0.30:5050/reslevis/getTrackers' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $TOKEN"
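The same call piped through jq for readable output (assumes jq is installed):
curl -k -s 'https://10.251.0.30:5050/reslevis/getTrackers' \
  -H 'accept: application/json' \
  -H "Authorization: Bearer $TOKEN" | jq .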
# Software update
cd /data/service/ble-ai-localizer
docker compose -p ble-ai-localizer build
docker compose -p ble-ai-localizer up -d --build
docker system prune
Container start/stop management
cd /data/service/ble-ai-localizer
docker compose -p ble-ai-localizer up -d
docker compose -p ble-ai-localizer logs -f --tail=200 --timestamps
docker compose -p ble-ai-localizer stop
docker compose -p ble-ai-localizer restart
docker compose -p ble-ai-localizer down
MajorNET ResLevis web access: https://10.251.0.30/frontend/app_reslevis/app.html#home
ble-ai-localizer container web access
URL: http://192.168.1.3:8501/ (the service binds on 0.0.0.0:8501)
Username and password come from the compose file docker-compose.yml:
UI_USER: "Admin"
UI_PASSWORD: "pwdadmin1"  <-- kept simple for mobile access
NOTES:
The k value represents the line (k=2) or the plane (k>2) spanned among the training beacons. During training it must at minimum match the number of beacons per room, but it is a global parameter that has to hold for every room; so, for example, if you decide to capture 5 measurements per room, k is best set to 3 (see the sketch after these notes).
Place beacons at least 1 m from the room walls (avoid mounting them flush against adjacent walls).
For symmetric floors, it is best to make the measurements of each floor coincide.
Add a timestamp to the fingerprint and record the raw MQTT traffic (given the same recording window, the captured values can later be re-evaluated).
Beacon TX interval: 200 ms
Power levels: 0, -4, -8, -12
Collection slot: 30 s
time 1400
Inference every 7 s
During collection you need at least one test beacon per power level, which is excluded from training but used for inference against the traffic recorded during the collection phase.
Test the gateways and MQTT first to verify they can sustain 200 ms.
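To make the k note above concrete, here is a minimal sketch of a KNN room classifier along the lines of the ml section in config.yaml (weights "distance", metric "euclidean"), run inside the container like the debug snippets above. The RSSI matrix, gateway count and room labels are made-up placeholders, not real calibration data:
docker compose -p ble-ai-localizer exec -T ble-ai-localizer python - <<'PY'
# Hypothetical fingerprint data: rows = calibration samples, columns = RSSI per gateway (dBm)
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[-60, -75, -80], [-62, -73, -82], [-61, -74, -79],   # room A samples
              [-80, -58, -70], [-82, -60, -71], [-79, -59, -72]])  # room B samples
y = ["A", "A", "A", "B", "B", "B"]

# k=3 as in the note: no larger than the samples per room, and global across rooms
clf = KNeighborsClassifier(n_neighbors=3, weights="distance", metric="euclidean")
clf.fit(X, y)
print(clf.predict([[-63, -74, -81]]))  # expected: ['A']
PY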