This document provides an extended analysis of the four microservices (server, location, bridge, decoder), their current structure, and a concrete refactoring plan for better reusability, separation of concerns, and maintainability.
## cmd/server/main.go (~211 lines)

Role: HTTP API + Kafka consumers + event loop. Central API for gateways, zones, trackers, parser configs, settings, and tracks; consumes location events and alert beacons; runs a ticker that publishes the tracker list to an MQTT topic.
What lives in main() today:
| Section | Lines (approx) | Responsibility |
|---|---|---|
| Bootstrap | 36–46 | Load config, create AppState, init Kafka manager, create logger, signal context |
| DB + CORS | 48–55 | Connect DB, build CORS options |
| Kafka writers | 56–59 | Populate writers: apibeacons, alert, mqtt, settings, parser |
| Config load & DB seed | 60–89 | Open config file, unmarshal JSON, create configs in DB, find configs, send each to parser topic, call UpdateDB |
| Kafka readers | 96–105 | Populate readers: locevents, alertbeacons; create channels; start 2 consumer goroutines |
| Router setup | 107–136 | Mux router + ~25 route registrations (handlers get db, writer, ctx directly) |
| HTTP server | 138–150 | CORS wrapper, handler, http.Server, ListenAndServe in goroutine |
| Event loop | 154–191 | select: ctx.Done, chLoc → LocationToBeaconService, chEvents → update tracker battery/temp in DB, beaconTicker → marshal trackers, write to mqtt topic |
| Shutdown | 193–210 | Server shutdown, wait group, clean Kafka, cleanup logger |
Pain points:
- Event-loop and wiring logic sits in `main` instead of a dedicated component.
- `db`, `*kafka.Writer`, and `context.Context` are passed into every handler; no shared "server" or "app" struct.
- `wg`: package-level `sync.WaitGroup` instead of a scoped lifecycle object.

## cmd/location/main.go (~246 lines)

Role: Location algorithm service. Consumes raw beacons and settings from Kafka; on a ticker runs either "filter" (score-based) or "ai" (HTTP inference) and writes location events to Kafka.
What lives in main() and adjacent:
| Section | Lines (approx) | Responsibility |
|---|---|---|
| Bootstrap | 26–38 | AppState, config, Kafka manager, logger, signal context |
| Kafka | 39–54 | Readers: rawbeacons, settings; writer: locevents; channels; 2 consumer goroutines |
| Event loop | 56–90 | ctx.Done, locTicker (get settings, branch filter vs ai), chRaw → assignBeaconToList, chSettings → UpdateSettings |
| Shutdown | 92–100 | Break, wg.Wait, clean Kafka, cleanup |
Logic outside main but still in this package:
- `getAI` (102–122): HTTP client with TLS verification skipped; gets a token and infers a position (API calls).
- `getLikelyLocations` (124–203): the full "filter" algorithm: iterates beacons, scores by RSSI/last-seen, computes confidence, writes `HTTPLocation` to Kafka. ~80 lines.
- `assignBeaconToList` (205–244): appends a metric to a beacon in `AppState`; sliding window.

Pain points:
- `getLikelyLocations` and `assignBeaconToList` are core domain logic but live under `cmd/location`; not reusable or testable in isolation.
- The AI inference call lives in `main`; it should sit behind an interface (e.g. `LocationInference`) for testing and reuse.
- Magic numbers (`999`, `1.5`, `0.75`, etc.) in `getLikelyLocations`; these should be config values or named constants.

## cmd/bridge/main.go (~212 lines)

Role: MQTT ↔ Kafka bridge. Subscribes to MQTT and converts messages to Kafka (rawbeacons); consumes apibeacons, alert, and mqtt from Kafka and publishes to MQTT.
What lives in main() and adjacent:
| Section | Lines (approx) | Responsibility |
|---|---|---|
| Bootstrap | 99–118 | AppState, config, Kafka, logger, context |
| Kafka | 112–127 | Readers: apibeacons, alert, mqtt; writer: rawbeacons; channels; 3 consumer goroutines |
| MQTT client | 129–150 | Options, client ID, handlers, connect, sub(client) |
| Event loop | 152–188 | ctx.Done, chApi (POST/DELETE → lookup), chAlert → Publish /alerts, chMqtt → Publish /trackers |
| Shutdown | 190–203 | Break, wg.Wait, Kafka cleanup, MQTT disconnect, cleanup |
Logic in package:
- `mqtthandler` (27–84): parses a JSON array of `RawReading` or CSV; for JSON, maps MAC→ID via `AppState`, builds a `BeaconAdvertisement`, writes to Kafka. The CSV branch does nothing useful after the parse (dead code).
- `messagePubHandler`, `connectHandler`, `connectLostHandler`: MQTT callbacks.
- `sub(client)`: subscribes to `publish_out/#`.

Pain points:
- `mqtthandler` is central to the bridge but lives in `main.go`; hard to unit test and reuse.
- `mqtthandler(writer, topic, message, appState)` and the package-level `messagePubHandler` close over `writer` and `appState`; there is no injectable "BridgeHandler" or service.
- `strings.Split(topic, "/")[1]` can panic if the topic format changes.

## cmd/decoder/main.go (~139 lines)

Role: Decodes raw beacon payloads using a parser registry; consumes rawbeacons and parser config updates; produces alertbeacons.
What lives in main() and adjacent:
| Section | Lines (approx) | Responsibility |
|---|---|---|
| Bootstrap | 25–55 | AppState, config, parser registry, logger, context, Kafka readers/writers, channels, 2 consumers |
| Event loop | 57–76 | ctx.Done, chRaw → processIncoming (decodeBeacon), chParser → add/delete/update registry |
| Shutdown | 78–86 | Break, wg.Wait, Kafka cleanup, cleanup |
Logic in package:
- `processIncoming` (88–95): wraps `decodeBeacon`, logs errors.
- `decodeBeacon` (97–138): hex-decodes, removes flags, parses AD structures, runs the parser registry, dedupes by event hash, writes to alertbeacons. This is core decoder logic.

Pain points:
- `decodeBeacon` belongs in a decoder or parser service/package under `internal`, not in `cmd`.

## Shared patterns across every main.go

- Bootstrap: create `AppState` (or not; the server doesn't use it for the same purpose), init `KafkaManager`, create logger, `signal.NotifyContext`.
- Kafka wiring: `PopulateKafkaManager` for readers/writers, create channels, `wg.Add(N)`, `go Consume(...)`.
- Shutdown: `wg.Wait()`, `CleanKafkaReaders/Writers`, optional MQTT disconnect, logger cleanup.

This suggests a small runtime/bootstrap package that returns config, logger, Kafka manager, and context (and optionally an "App" struct that owns lifecycle).
## Where service logic lives today

- server: handlers live in `internal/pkg/controller` but take raw `*gorm.DB` and `*kafka.Writer`.
- location: the algorithm lives in `cmd/location`; there is no internal location or algorithm package.
- bridge: MQTT handling lives in `cmd/bridge`; no internal bridge or MQTT handler package.
- decoder: decoding lives in `cmd/decoder`; the parser registry is in `model`, but "process raw → alert" is in `main`.

So today, a lot of "service logic" is either in `main` or in `cmd/<service>` instead of in `internal` behind clear interfaces.
## Dependency picture

- `cmd/*` import from `internal/pkg/*` (config, logger, kafkaclient, model, service, controller, database, apiclient, appcontext).
- Dependencies are concrete types (`*gorm.DB`, `*kafka.Writer`). There are no interfaces for "store" or "message writer," so testing and swapping implementations require mocks at the concrete type level.
- `model` is a single large namespace (beacons, parser, trackers, gateways, zones, settings, etc.); no split by bounded context (e.g. beacon vs parser vs location).

## Proposed layout

Goal: keep `cmd/<service>/main.go` as a thin composition layer that only wires config, infra, and "app" components, and runs the process. All reusable logic and "where things live" should be clear from the directory structure.
```
internal/
├── pkg/
│   ├── config/        # Keep; optional: split LoadServer/LoadLocation into configs subpackage or env schema
│   ├── logger/        # Keep
│   │
│   ├── domain/        # NEW: shared domain types and interfaces (no infra)
│   │   ├── beacon.go    # Beacon, BeaconEvent, BeaconMetric, BeaconAdvertisement, BeaconsList, etc.
│   │   ├── parser.go    # Config, KafkaParser, BeaconParser, ParserRegistry (or keep registry in service)
│   │   ├── location.go  # HTTPLocation, location scoring constants
│   │   ├── trackers.go  # Tracker, ApiUpdate, Alert, etc.
│   │   └── types.go     # RawReading, Settings, and other shared DTOs
│   │
│   ├── store/         # NEW: in-memory / app state (optional rename of appcontext)
│   │   └── appstate.go  # AppState (move from common/appcontext), same API
│   │
│   ├── messaging/     # NEW: Kafka (and optionally MQTT) behind interfaces
│   │   ├── kafka.go       # Manager, Consume, Writer/Reader interfaces, implementation
│   │   └── interfaces.go  # MessageWriter, MessageReader for tests
│   │
│   ├── db/            # Rename from database; single place for GORM
│   │   ├── postgres.go  # Connect(cfg) (*gorm.DB, error)
│   │   └── models.go    # GORM model structs only (Tracker, Gateway, Zone, etc.)
│   │
│   ├── client/        # Rename from apiclient; external HTTP (auth, infer, etc.)
│   │   ├── auth.go
│   │   ├── data.go
│   │   └── updatedb.go
│   │
│   ├── api/           # NEW: HTTP surface for server only
│   │   ├── handler/     # Handlers (move from controller); receive a Server or deps struct
│   │   │   ├── gateways.go
│   │   │   ├── zones.go
│   │   │   ├── trackers.go
│   │   │   ├── trackerzones.go
│   │   │   ├── parser.go
│   │   │   ├── settings.go
│   │   │   ├── tracks.go
│   │   │   └── health.go  # /health, /ready
│   │   ├── middleware/  # CORS, logging, recovery, request ID
│   │   └── response/    # JSON success/error helpers
│   │
│   ├── service/       # Keep; make depend on interfaces
│   │   ├── beacon.go    # LocationToBeaconService (depends on DB + Writer interfaces)
│   │   ├── parser.go
│   │   └── location.go  # NEW: filter algorithm, AssignBeaconToList (from cmd/location)
│   │
│   ├── location/      # NEW: location service internals
│   │   ├── filter.go    # getLikelyLocations logic (score, confidence, write)
│   │   ├── assign.go    # assignBeaconToList
│   │   └── inference.go # Interface for "get AI position"; adapter over client
│   │
│   ├── bridge/        # NEW: bridge-specific processing
│   │   ├── mqtt.go      # MQTT client options, connect, subscribe (thin wrapper)
│   │   └── handler.go   # MQTT message → Kafka (mqtthandler logic)
│   │
│   └── decoder/       # NEW: decoder-specific processing (or under service/)
│       ├── process.go   # ProcessIncoming, DecodeBeacon (from cmd/decoder)
│       └── registry.go  # Optional: wrap ParserRegistry with add/delete/update
│
└── app/               # NEW (optional): per-service composition / "application" layer
    ├── server/
    │   ├── app.go     # ServerApp: config, db, kafka, router, event loop, shutdown
    │   ├── routes.go  # Register all routes with deps
    │   └── events.go  # RunEventLoop(ctx): location, alertbeacons, ticker
    ├── location/
    │   ├── app.go     # LocationApp: config, kafka, store, filter/inference, run loop
    │   └── loop.go    # Run(ctx): ticker + channels
    ├── bridge/
    │   ├── app.go     # BridgeApp: config, kafka, mqtt, store, run loop
    │   └── loop.go
    └── decoder/
        ├── app.go     # DecoderApp: config, kafka, registry, run loop
        └── loop.go
```
You can adopt this incrementally: e.g. first add internal/app/server and move event loop + route registration there, then do the same for location/bridge/decoder.
## What each cmd/&lt;service&gt;/main.go becomes

cmd/server/main.go:
Load config (or exit on error), create logger, call serverapp.New(cfg, logger) (or bootstrap), then app.Run(ctx) and app.Shutdown(). No DB seed or parser sync in main; move those into the ServerApp constructor or a ServerApp.Init(ctx).
cmd/location/main.go:
Load config, create logger, create AppState, call locationapp.New(cfg, logger, appState) (and optionally Kafka manager from bootstrap), then app.Run(ctx) and app.Shutdown().
cmd/bridge/main.go:
Same idea: bootstrap, then bridgeapp.New(...), Run(ctx), Shutdown().
cmd/decoder/main.go:
Bootstrap, then decoderapp.New(...) with parser registry, Run(ctx), Shutdown().
So each main.go is on the order of 20–40 lines: config + logger + optional bootstrap, build app, run, shutdown.
### Phase 1: internal/pkg/bootstrap (or runtime)
- `Bootstrap(ctx) (cfg *config.Config, log *slog.Logger, kafka *kafkaclient.KafkaManager, cleanup func())` for a given service type (or one function per service that returns what that service needs).
- Call it from every `main.go` so that "create logger + kafka + context" lives in one place.
- `Shutdown(ctx, kafkaManager, cleanup)` so each `main` just calls it after breaking the loop.

After this, each main is: load config → bootstrap → build Kafka/channels (or get them from the app) → create "App" (see Phase 2) → run loop → shutdown.
### Phase 2: internal/app

- `internal/app/server`:
  - `ServerApp` struct holding cfg, db, kafkaManager, channels, router, server, wg.
  - `NewServerApp(cfg, logger)` or `NewServerApp(...).Init(ctx)`.
  - `RegisterRoutes(app *ServerApp)` (or `app.Routes()` returning an `http.Handler`).
  - `ServerApp.RunEventLoop(ctx)`.
  - `main` becomes: config → bootstrap → NewServerApp → Init → go ListenAndServe → RunEventLoop(ctx) → Shutdown.
- `internal/app/location`:
  - `LocationApp` with kafkaManager, appState, channels, filter algorithm, inference client.
  - Move `getLikelyLocations` and `assignBeaconToList` into `internal/pkg/service/location.go`, or `internal/pkg/location/filter.go` and `assign.go`.
  - Put `getAI` behind a `LocationInferencer` interface in `internal/pkg/location`; implement it with `client` (auth + infer).
  - `LocationApp.Run(ctx)`.
- `internal/app/bridge`:
  - `BridgeApp` with kafkaManager, MQTT client, appState, channels.
  - Move `mqtthandler` and the MQTT subscribe into `internal/pkg/bridge/handler.go` and `mqtt.go`; call them from the app.
  - `BridgeApp.Run(ctx)`.
- `internal/app/decoder`:
  - `DecoderApp` with kafkaManager, parser registry, channels.
  - Move `processIncoming` and `decodeBeacon` into `internal/pkg/decoder/process.go`.
  - `DecoderApp.Run(ctx)`.

This removes "too much inside main" and gives a single place per service to add features (the app and the packages it uses).
### Phase 3: interfaces

- In `internal/pkg/messaging` (or keep `kafkaclient` and add interfaces there), define e.g. `MessageWriter` and `MessageReader` interfaces.
- Have `KafkaManager` (or a thin wrapper) implement them so handlers and services accept interfaces; tests can inject fakes.
- Keep `appcontext`; if you introduce `store`, have `AppState` implement e.g. `BeaconStore` / `SettingsStore` so location and bridge depend on interfaces.
- Instead of passing `db`, `writer`, and `ctx` to each handler, introduce a `Server` or `HandlerEnv` struct that holds the DB, writers, and optionally a logger; handlers become methods or receive this struct. Then you can add health checks and middleware in one place.

### Phase 4: domain types and API packages

- Create `internal/pkg/domain` and move shared types from `model` into domain subpackages (beacon, parser, location, trackers, etc.). Keep `model` as an alias or migrate imports gradually so that "core types" live under `domain` and "GORM models" stay under `db` if you split them.
- Move `controller` to `internal/pkg/api/handler`; add `api/response` for JSON and errors; add `api/middleware` for CORS, logging, recovery. Register routes in `app/server` using these handlers.

## Summary

| Service | Current main responsibilities | After refactor: main does | New home for logic |
|---|---|---|---|
| server | Bootstrap, DB seed, routes, loop, shutdown | Config, bootstrap, NewServerApp, Run, Shutdown | app/server (event loop, routes), api/handler, service |
| location | Bootstrap, Kafka, loop, shutdown | Config, bootstrap, NewLocationApp, Run, Shutdown | app/location, service/location or pkg/location (filter, assign, inference) |
| bridge | Bootstrap, Kafka, MQTT, loop, shutdown | Config, bootstrap, NewBridgeApp, Run, Shutdown | app/bridge, pkg/bridge (mqtt, handler) |
| decoder | Bootstrap, Kafka, loop, shutdown | Config, bootstrap, NewDecoderApp, Run, Shutdown | app/decoder, pkg/decoder (process, decode) |
After the refactor:

- Core logic lives under `internal/pkg` and can be tested and reused without running a full `main`.
- `cmd` only composes and runs; `internal/app` owns per-service lifecycle and event loops; `internal/pkg` holds domain, store, messaging, API, and services.
- Tests can construct a `*ServerApp` or `*DecoderApp` with test doubles and call `Run(ctx)`.

You can implement Phase 1 and Phase 2 first (bootstrap + app with event loops and moved logic), then Phase 3 (interfaces) and Phase 4 (domain + API) as follow-ups.