Microservices Refactoring Overview

This document provides an extended analysis of the four microservices (server, location, bridge, decoder), their current structure, and a concrete refactoring plan for better reusability, separation of concerns, and maintainability.


1. Extended Overview of Each Service

1.1 cmd/server/main.go (~211 lines)

Role: HTTP API + Kafka consumers + event loop. Central API for gateways, zones, trackers, parser configs, settings, and tracks; consumes location events and alert beacons; runs a ticker to publish tracker list to MQTT topic.

What lives in main() today:

  • Bootstrap (lines ~36–46): Load config, create AppState, init Kafka manager, create logger, signal context
  • DB + CORS (lines ~48–55): Connect DB, build CORS options
  • Kafka writers (lines ~56–59): Populate writers: apibeacons, alert, mqtt, settings, parser
  • Config load & DB seed (lines ~60–89): Open config file, unmarshal JSON, create configs in DB, find configs, send each to the parser topic, call UpdateDB
  • Kafka readers (lines ~96–105): Populate readers: locevents, alertbeacons; create channels; start 2 consumer goroutines
  • Router setup (lines ~107–136): Mux router + ~25 route registrations (handlers get db, writer, ctx directly)
  • HTTP server (lines ~138–150): CORS wrapper, handler, http.Server, ListenAndServe in a goroutine
  • Event loop (lines ~154–191): select: ctx.Done, chLoc → LocationToBeaconService, chEvents → update tracker battery/temp in DB, beaconTicker → marshal trackers, write to mqtt topic
  • Shutdown (lines ~193–210): Server shutdown, wait group, clean Kafka, cleanup logger

Pain points:

  • Heavy main: Config loading, DB seeding, and syncing parser configs to Kafka are one-off startup tasks mixed in with process wiring.
  • Event loop in main: Business logic (location→beacon, decoder event→DB update, ticker→mqtt) lives in main instead of a dedicated component.
  • Handlers take 3–4 args: db, *kafka.Writer, and context.Context are passed into every handler; there is no shared “server” or “app” struct.
  • Global wg: Package-level sync.WaitGroup instead of a scoped lifecycle object.

1.2 cmd/location/main.go (~246 lines)

Role: Location algorithm service. Consumes raw beacons and settings from Kafka; on a ticker runs either “filter” (score-based) or “ai” (HTTP inference) and writes location events to Kafka.

What lives in main() and adjacent:

  • Bootstrap (lines ~26–38): AppState, config, Kafka manager, logger, signal context
  • Kafka (lines ~39–54): Readers: rawbeacons, settings; writer: locevents; channels; 2 consumer goroutines
  • Event loop (lines ~56–90): ctx.Done, locTicker (get settings, branch filter vs ai), chRaw → assignBeaconToList, chSettings → UpdateSettings
  • Shutdown (lines ~92–100): Break, wg.Wait, clean Kafka, cleanup

Logic outside main but still in this package:

  • getAI (102–122): HTTP client, TLS skip verify, get token, infer position (API calls).
  • getLikelyLocations (124–203): Full “filter” algorithm: iterate beacons, score by RSSI/seen, confidence, write HTTPLocation to Kafka. ~80 lines.
  • assignBeaconToList (205–244): Append metric to beacon in AppState, sliding window.

Pain points:

  • Algorithm in cmd: getLikelyLocations and assignBeaconToList are core domain logic but live under cmd/location; not reusable or testable in isolation.
  • getAI in main pkg: HTTP and TLS setup and API calls are in main; should be behind an interface (e.g. “LocationInference”) for testing and reuse.
  • Magic numbers: 999, 1.5, 0.75, etc. in getLikelyLocations; should be config or named constants.
  • Duplicate bootstrap: Same Kafka/logger/context pattern as server and bridge.
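The magic-number and getAI pain points suggest two small changes: named constants and an inference interface. A sketch under stated assumptions (the constant meanings and all type shapes below are illustrative, not taken from the actual algorithm; the text only suggests the interface name “LocationInference”):

```go
package main

// Named constants for the literals cited in getLikelyLocations (999, 1.5,
// 0.75); the meanings attached here are assumptions.
const (
	NotSeenScore        = 999  // score assigned when a beacon was not seen
	RecentSeenBoost     = 1.5  // weight applied to recently seen beacons
	ConfidenceThreshold = 0.75 // minimum confidence to emit a location
)

// Minimal stand-in types for a beacon reading and an inferred position.
type Reading struct {
	MAC  string
	RSSI int
}
type Position struct{ Zone string }

// LocationInferencer hides the HTTP/TLS client behind getAI so it can be
// swapped with a fake in tests.
type LocationInferencer interface {
	Infer(readings []Reading) (Position, error)
}

// fakeInferencer shows how the "ai" branch becomes testable without HTTP.
type fakeInferencer struct{ zone string }

func (f fakeInferencer) Infer([]Reading) (Position, error) {
	return Position{Zone: f.zone}, nil
}

// locate is the branch point: any LocationInferencer will do.
func locate(inf LocationInferencer, rs []Reading) (string, error) {
	pos, err := inf.Infer(rs)
	if err != nil {
		return "", err
	}
	return pos.Zone, nil
}
```

The real implementation would wrap client (auth + infer) behind the same interface, which also isolates the TLS-skip-verify setup in one place.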

1.3 cmd/bridge/main.go (~212 lines)

Role: MQTT ↔ Kafka bridge. Subscribes to MQTT; converts messages to Kafka (rawbeacons); consumes apibeacons, alert, mqtt from Kafka and publishes to MQTT.

What lives in main() and adjacent:

  • Bootstrap (lines ~99–118): AppState, config, Kafka, logger, context
  • Kafka (lines ~112–127): Readers: apibeacons, alert, mqtt; writer: rawbeacons; channels; 3 consumer goroutines
  • MQTT client (lines ~129–150): Options, client ID, handlers, connect, sub(client)
  • Event loop (lines ~152–188): ctx.Done, chApi (POST/DELETE → lookup), chAlert → Publish /alerts, chMqtt → Publish /trackers
  • Shutdown (lines ~190–203): Break, wg.Wait, Kafka cleanup, MQTT disconnect, cleanup

Logic in package:

  • mqtthandler (27–84): Parse JSON array of RawReading or CSV; for JSON, map MAC→ID via AppState, build BeaconAdvertisement, write to Kafka. CSV branch does nothing useful after parse (dead code).
  • messagePubHandler, connectHandler, connectLostHandler: MQTT callbacks.
  • sub(client): Subscribe to publish_out/#.

Pain points:

  • MQTT and Kafka logic in cmd: mqtthandler is central to the bridge but lives in main.go; hard to unit test and reuse.
  • Handler signature: mqtthandler(writer, topic, message, appState) and package-level messagePubHandler close over writer and appState; no injectable “BridgeHandler” or service.
  • Topic parsing: strings.Split(topic, "/")[1] can panic if topic format changes.
  • Dead CSV branch: Parses CSV but never produces Kafka messages; either implement or remove.
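The topic-parsing panic is cheap to fix with a bounds-checked helper. A minimal sketch (the assumption that segment 1 holds a gateway ID is illustrative; adapt the name to whatever the topic scheme actually encodes):

```go
package main

import "strings"

// gatewayFromTopic replaces strings.Split(topic, "/")[1], which panics when
// the topic contains no "/". It reports ok=false on malformed topics so the
// caller can log and drop the message instead of crashing the bridge.
func gatewayFromTopic(topic string) (string, bool) {
	parts := strings.Split(topic, "/")
	if len(parts) < 2 || parts[1] == "" {
		return "", false
	}
	return parts[1], true
}
```

Usage: `if gw, ok := gatewayFromTopic(msg.Topic()); ok { … }` keeps the handler alive on unexpected topic formats.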

1.4 cmd/decoder/main.go (~139 lines)

Role: Decode raw beacon payloads using a parser registry; consume rawbeacons and parser config updates; produce alertbeacons.

What lives in main() and adjacent:

  • Bootstrap (lines ~25–55): AppState, config, parser registry, logger, context, Kafka readers/writers, channels, 2 consumers
  • Event loop (lines ~57–76): ctx.Done, chRaw → processIncoming (decodeBeacon), chParser → add/delete/update registry
  • Shutdown (lines ~78–86): Break, wg.Wait, Kafka cleanup, cleanup

Logic in package:

  • processIncoming (88–95): Wraps decodeBeacon, logs errors.
  • decodeBeacon (97–138): Hex decode, remove flags, parse AD structures, run parser registry, dedupe by event hash, write to alertbeacons. This is core decoder logic.

Pain points:

  • Decode logic in cmd: decodeBeacon belongs in a decoder or parser service/package under internal, not in cmd.
  • Parser registry in main: Registry is created and updated in main; could be a component that main wires and passes into a “DecoderService” or “EventProcessor”.
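The registry-in-main pain point can be sketched as a DecoderService that main merely wires up. All names and shapes below are assumptions for illustration; the real ParserRegistry lives in model today:

```go
package main

import "fmt"

// BeaconParser is a stand-in for the registry's parser interface.
type BeaconParser interface {
	Parse(payload []byte) (string, error)
}

// ParserRegistry owns add/delete, so the event loop just forwards updates.
type ParserRegistry struct {
	parsers map[string]BeaconParser
}

func NewParserRegistry() *ParserRegistry {
	return &ParserRegistry{parsers: map[string]BeaconParser{}}
}

func (r *ParserRegistry) Add(name string, p BeaconParser) { r.parsers[name] = p }
func (r *ParserRegistry) Delete(name string)              { delete(r.parsers, name) }

// DecoderService wraps decodeBeacon-style logic behind the registry; main
// constructs it once and passes it into the event loop.
type DecoderService struct{ reg *ParserRegistry }

func (d *DecoderService) Process(parser string, payload []byte) (string, error) {
	p, ok := d.reg.parsers[parser]
	if !ok {
		return "", fmt.Errorf("no parser registered for %q", parser)
	}
	return p.Parse(payload)
}

// lenParser is a toy parser for the usage example below.
type lenParser struct{}

func (lenParser) Parse(b []byte) (string, error) {
	return fmt.Sprintf("payload of %d bytes", len(b)), nil
}
```

With this split, chParser updates call reg.Add/Delete and chRaw calls svc.Process, and both are unit-testable without Kafka.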

2. Cross-Cutting Observations

2.1 Duplication Across All Four main.go

  • Bootstrap: Each service loads a service-specific config, creates AppState (the server uses it for a different purpose), inits KafkaManager, creates a logger, and calls signal.NotifyContext.
  • Kafka pattern: PopulateKafkaManager for readers/writers, create channels, wg.Add(N), go Consume(...).
  • Shutdown: Break loop, wg.Wait(), CleanKafkaReaders/Writers, optional MQTT disconnect, logger cleanup.

This suggests a small runtime/bootstrap package that returns config, logger, Kafka manager, and context (and optionally an “App” struct that owns lifecycle).

2.2 Where Business Logic Lives

  • Server: Event loop in main (location→beacon, decoder event→DB, ticker→mqtt); handlers in internal/pkg/controller but take raw *gorm.DB and *kafka.Writer.
  • Location: Filter algorithm and “assign beacon to list” in cmd/location; no internal location or algorithm package.
  • Bridge: MQTT message handling in cmd/bridge; no internal bridge or mqtt handler package.
  • Decoder: Decode and registry handling in cmd/decoder; parser registry is in model, but “process raw → alert” is in main.

So today, a lot of “service logic” is either in main or in cmd/<service> instead of in internal behind clear interfaces.

2.3 Dependency Direction Today

  • All cmd/* import from internal/pkg/* (config, logger, kafkaclient, model, service, controller, database, apiclient, appcontext).
  • Controllers and services take concrete types (*gorm.DB, *kafka.Writer). No interfaces for “store” or “message writer,” so testing and swapping implementations require mocks at the concrete type level.
  • model is a single large namespace (beacons, parser, trackers, gateways, zones, settings, etc.); no split by bounded context (e.g. beacon vs parser vs location).

3. Proposed Directory and Package Layout

Goal: keep cmd/<service>/main.go as a thin composition layer that only wires config, infrastructure, and “app” components, then runs the process. All reusable logic should have an obvious home in the directory structure.

3.1 Proposed Layout

internal/
├── pkg/
│   ├── config/              # Keep; optional: split LoadServer/LoadLocation into configs subpackage or env schema
│   ├── logger/              # Keep
│   │
│   ├── domain/              # NEW: shared domain types and interfaces (no infra)
│   │   ├── beacon.go        # Beacon, BeaconEvent, BeaconMetric, BeaconAdvertisement, BeaconsList, etc.
│   │   ├── parser.go        # Config, KafkaParser, BeaconParser, ParserRegistry (or keep registry in service)
│   │   ├── location.go      # HTTPLocation, location scoring constants
│   │   ├── trackers.go      # Tracker, ApiUpdate, Alert, etc.
│   │   └── types.go         # RawReading, Settings, and other shared DTOs
│   │
│   ├── store/               # NEW: in-memory / app state (optional rename of appcontext)
│   │   └── appstate.go      # AppState (move from common/appcontext), same API
│   │
│   ├── messaging/           # NEW: Kafka (and optionally MQTT) behind interfaces
│   │   ├── kafka.go         # Manager, Consume, Writer/Reader interfaces, implementation
│   │   └── interfaces.go    # MessageWriter, MessageReader for tests
│   │
│   ├── db/                  # Rename from database; single place for GORM
│   │   ├── postgres.go      # Connect(cfg) (*gorm.DB, error)
│   │   └── models.go        # GORM model structs only (Tracker, Gateway, Zone, etc.)
│   │
│   ├── client/              # Rename from apiclient; external HTTP (auth, infer, etc.)
│   │   ├── auth.go
│   │   ├── data.go
│   │   └── updatedb.go
│   │
│   ├── api/                 # NEW: HTTP surface for server only
│   │   ├── handler/         # Handlers (move from controller); receive a Server or deps struct
│   │   │   ├── gateways.go
│   │   │   ├── zones.go
│   │   │   ├── trackers.go
│   │   │   ├── trackerzones.go
│   │   │   ├── parser.go
│   │   │   ├── settings.go
│   │   │   ├── tracks.go
│   │   │   └── health.go     # /health, /ready
│   │   ├── middleware/      # CORS, logging, recovery, request ID
│   │   └── response/        # JSON success/error helpers
│   │
│   ├── service/             # Keep; make depend on interfaces
│   │   ├── beacon.go        # LocationToBeaconService (depends on DB + Writer interfaces)
│   │   ├── parser.go
│   │   └── location.go      # NEW: Filter algorithm, AssignBeaconToList (from cmd/location)
│   │
│   ├── location/            # NEW: location service internals
│   │   ├── filter.go        # getLikelyLocations logic (score, confidence, write)
│   │   ├── assign.go        # assignBeaconToList
│   │   └── inference.go     # Interface for “get AI position”; adapter over client
│   │
│   ├── bridge/             # NEW: bridge-specific processing
│   │   ├── mqtt.go          # MQTT client options, connect, subscribe (thin wrapper)
│   │   └── handler.go       # MQTT message → Kafka (mqtthandler logic)
│   │
│   └── decoder/             # NEW: decoder-specific processing (or under service/)
│       ├── process.go       # ProcessIncoming, DecodeBeacon (from cmd/decoder)
│       └── registry.go      # Optional: wrap ParserRegistry with add/delete/update
│
├── app/                    # NEW (optional): per-service composition / “application” layer
│   ├── server/
│   │   ├── app.go           # ServerApp: config, db, kafka, router, event loop, shutdown
│   │   ├── routes.go        # Register all routes with deps
│   │   └── events.go        # RunEventLoop(ctx): location, alertbeacons, ticker
│   ├── location/
│   │   ├── app.go           # LocationApp: config, kafka, store, filter/inference, run loop
│   │   └── loop.go          # Run(ctx): ticker + channels
│   ├── bridge/
│   │   ├── app.go           # BridgeApp: config, kafka, mqtt, store, run loop
│   │   └── loop.go
│   └── decoder/
│       ├── app.go           # DecoderApp: config, kafka, registry, run loop
│       └── loop.go

You can adopt this incrementally: e.g. first add internal/app/server and move event loop + route registration there, then do the same for location/bridge/decoder.

3.2 What Each cmd/<service>/main.go Becomes

  • cmd/server/main.go:
    Load config (or exit on error), create a logger, call serverapp.New(cfg, logger) (or bootstrap), then app.Run(ctx) and app.Shutdown(). No DB seed or parser sync in main; move those into the ServerApp constructor or a ServerApp.Init(ctx).

  • cmd/location/main.go:
    Load config, create logger, create AppState, call locationapp.New(cfg, logger, appState) (and optionally Kafka manager from bootstrap), then app.Run(ctx) and app.Shutdown().

  • cmd/bridge/main.go:
    Same idea: bootstrap, then bridgeapp.New(...), Run(ctx), Shutdown().

  • cmd/decoder/main.go:
    Bootstrap, then decoderapp.New(...) with parser registry, Run(ctx), Shutdown().

So each main.go is on the order of 20–40 lines: config + logger + optional bootstrap, build app, run, shutdown.


4. Refactoring Steps (Concrete)

Phase 1: Extract bootstrap and shrink main (high impact, low risk)

  1. Add internal/pkg/bootstrap (or runtime):
    • Bootstrap(ctx) returning (cfg *config.Config, log *slog.Logger, kafka *kafkaclient.KafkaManager, cleanup func()), for a given service type (or one function per service that returns exactly what that service needs).
    • Use it from all four main.go so that “create logger + kafka + context” is one place.
  2. Move shutdown sequence into a single place: e.g. Shutdown(ctx, kafkaManager, cleanup) so each main just calls it after breaking the loop.

After this, each main is: load config → bootstrap → build Kafka/channels (or get from app) → create “App” (see Phase 2) → run loop → shutdown.
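The Phase 1 bootstrap can be sketched as follows; Config and KafkaManager below are stand-ins for the real config.Config and kafkaclient.KafkaManager (shapes assumed), and the signature mirrors the one proposed above:

```go
package main

import (
	"context"
	"log/slog"
	"os"
	"os/signal"
	"syscall"
)

// Stand-ins for config.Config and kafkaclient.KafkaManager.
type Config struct{ Service string }
type KafkaManager struct{ cleaned bool }

func (k *KafkaManager) Clean() { k.cleaned = true }

// Bootstrap is the shared startup path Phase 1 describes: one place that
// builds config, logger, Kafka manager, and the signal context, and returns
// a single cleanup func implementing the common shutdown sequence.
func Bootstrap(service string) (context.Context, *Config, *slog.Logger, *KafkaManager, func()) {
	ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	cfg := &Config{Service: service} // would be config.Load(service)
	logger := slog.New(slog.NewJSONHandler(os.Stderr, nil))
	km := &KafkaManager{} // would be PopulateKafkaManager(...)
	cleanup := func() {   // shutdown in one place: clean Kafka, release signals
		km.Clean()
		stop()
	}
	return ctx, cfg, logger, km, cleanup
}
```

Each of the four main.go functions would then start with one Bootstrap call and end with one deferred cleanup, replacing the duplicated bootstrap/shutdown blocks noted in section 2.1.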

Phase 2: Move event loops and “server wiring” into internal/app

  1. Server
    • Add internal/app/server: ServerApp struct holding cfg, db, kafkaManager, channels, router, server, wg.
    • Move config load + DB connect + parser sync + Kafka reader setup into NewServerApp(cfg, logger) or NewServerApp(...).Init(ctx).
    • Move route registration into RegisterRoutes(app *ServerApp) (or app.Routes() that returns http.Handler).
    • Move the event loop (select over chLoc, chEvents, ticker) into ServerApp.RunEventLoop(ctx).
    • main becomes: config → bootstrap → NewServerApp → Init → go ListenAndServe → RunEventLoop(ctx) → Shutdown.
  2. Location
    • Add internal/app/location: LocationApp with kafkaManager, appState, channels, filter algo, inference client.
    • Move getLikelyLocations and assignBeaconToList into internal/pkg/service/location.go or internal/pkg/location/filter.go and assign.go.
    • Move getAI behind an interface LocationInferencer in internal/pkg/location; implement with client (auth + infer).
    • Event loop in LocationApp.Run(ctx).
  3. Bridge
    • Add internal/app/bridge: BridgeApp with kafkaManager, mqtt client, appState, channels.
    • Move mqtthandler and MQTT subscribe into internal/pkg/bridge/handler.go and mqtt.go; call from app.
    • Event loop in BridgeApp.Run(ctx).
  4. Decoder
    • Add internal/app/decoder: DecoderApp with kafkaManager, parser registry, channels.
    • Move processIncoming and decodeBeacon into internal/pkg/decoder/process.go.
    • Event loop in DecoderApp.Run(ctx).

This removes “too much inside main” and gives a single place per service to add features (the app and the packages it uses).

Phase 3: Interfaces and dependency injection

  1. Messaging
    • In internal/pkg/messaging (or keep kafkaclient and add interfaces there), define e.g. MessageWriter and MessageReader interfaces.
    • Have KafkaManager (or a thin wrapper) implement them so handlers and services accept interfaces; tests can inject fakes.
  2. Store
    • Rename or keep appcontext; if you introduce store, have AppState implement e.g. BeaconStore / SettingsStore so location and bridge depend on interfaces.
  3. Server handlers
    • Instead of passing db, writer, ctx to each handler, introduce a Server or HandlerEnv struct that holds DB, writers, and optionally logger; handlers become methods or receive this struct. Then you can add health checks and middleware in one place.
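The writer interface and handler-env ideas combine into one small pattern. A sketch with assumed names (MessageWriter's method shape and the HandlerEnv fields are illustrative, not the real API):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// MessageWriter is the Phase 3 seam: handlers depend on this instead of
// *kafka.Writer, so tests can inject fakes.
type MessageWriter interface {
	WriteMessage(ctx context.Context, key, value []byte) error
}

// HandlerEnv bundles what was previously 3–4 args per handler.
type HandlerEnv struct {
	Writer MessageWriter
	// DB, logger, etc. would live here too.
}

// Trackers shows a handler as a method: deps come from the receiver,
// the context from the request.
func (e *HandlerEnv) Trackers(w http.ResponseWriter, r *http.Request) {
	if err := e.Writer.WriteMessage(r.Context(), []byte("trackers"), []byte("list")); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, "ok")
}

// fakeWriter records writes so the handler is testable without Kafka.
type fakeWriter struct{ keys []string }

func (f *fakeWriter) WriteMessage(_ context.Context, key, _ []byte) error {
	f.keys = append(f.keys, string(key))
	return nil
}

// demo exercises the handler end to end with the fake.
func demo() (string, []string) {
	fw := &fakeWriter{}
	env := &HandlerEnv{Writer: fw}
	rec := httptest.NewRecorder()
	env.Trackers(rec, httptest.NewRequest(http.MethodGet, "/trackers", nil))
	return rec.Body.String(), fw.keys
}
```

Route registration then becomes `r.HandleFunc("/trackers", env.Trackers)`, and middleware or health checks attach in one place because every handler shares the env.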

Phase 4: Domain and API clarity

  1. Domain
    • Create internal/pkg/domain and move shared types from model into domain subpackages (beacon, parser, location, trackers, etc.). Keep model as an alias or migrate imports gradually so that “core types” live under domain and “GORM models” stay under db if you split them.
  2. API
    • Move HTTP handlers from controller to internal/pkg/api/handler; add api/response for JSON and errors; add api/middleware for CORS, logging, recovery. Register routes in app/server using these handlers.

5. Summary Table

Each row reads: current main responsibilities → what main does after the refactor → new home for the logic.

  • server: Bootstrap, DB seed, routes, loop, shutdown → Config, bootstrap, NewServerApp, Run, Shutdown → app/server (event loop, routes), api/handler, service
  • location: Bootstrap, Kafka, loop, shutdown → Config, bootstrap, NewLocationApp, Run, Shutdown → app/location, service/location or pkg/location (filter, assign, inference)
  • bridge: Bootstrap, Kafka, MQTT, loop, shutdown → Config, bootstrap, NewBridgeApp, Run, Shutdown → app/bridge, pkg/bridge (mqtt, handler)
  • decoder: Bootstrap, Kafka, loop, shutdown → Config, bootstrap, NewDecoderApp, Run, Shutdown → app/decoder, pkg/decoder (process, decode)

6. Benefits After Refactoring

  • Reusability: Location algorithm, bridge MQTT handling, and decoder logic live in internal/pkg and can be tested and reused without running a full main.
  • Separation: cmd only composes and runs; internal/app owns per-service lifecycle and event loops; internal/pkg holds domain, store, messaging, API, and services.
  • Maintainability: Adding a new route or a new Kafka consumer is “add to ServerApp and register”; adding a new algorithm is “add to location package and call from LocationApp.”
  • Testability: Event loops and handlers can be unit-tested with fake writers/stores; integration tests can build *ServerApp or *DecoderApp with test doubles.
  • Consistency: One bootstrap and one shutdown pattern across all four services; same style of “App” struct and Run(ctx).

You can implement Phase 1 and Phase 2 first (bootstrap + app with event loops and moved logic), then Phase 3 (interfaces) and Phase 4 (domain + API) as follow-ups.