Date: 2026-01-16
Total Codebase: ~3,391 lines of Go code across 4 services
After analyzing the codebase across the 4 main services (bridge, decoder, location, server), I’ve identified significant code duplication, inconsistent patterns, and maintenance challenges. This document outlines a structured refactoring approach to improve maintainability, reduce duplication, and establish clear architectural patterns.
All 4 services (bridge/main.go:118-131, decoder/main.go:36-44, location/main.go:31-39, server/main.go:45-53) contain identical code for logger initialization and OS signal handling.
Impact: Any change to logging or signal handling requires updating 4 files.
Duplication Factor: ~60 lines × 4 services = 240 lines of duplicated code
Each service manually creates channels, adds to waitgroups, and starts consumers in the same pattern:
chRaw := make(chan model.BeaconAdvertisement, 2000)
wg.Add(1)
go kafkaclient.Consume(rawReader, chRaw, ctx, &wg)
This pattern appears in bridge/main.go:147-154, decoder/main.go:57-62, location/main.go:55-60, server/main.go:110-115.
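A small generic helper could collapse the channel/WaitGroup/goroutine trio into a single call per topic. A sketch, assuming kafkaclient.Consume is (or can be made) generic over the message type:

// startConsumer allocates the channel, registers with the WaitGroup, and
// launches the consumer goroutine in one place. Sketch only: the signature
// of kafkaclient.Consume is assumed from the call sites above.
func startConsumer[T any](ctx context.Context, wg *sync.WaitGroup, reader *kafka.Reader, buf int) <-chan T {
	ch := make(chan T, buf)
	wg.Add(1)
	go kafkaclient.Consume(reader, ch, ctx, wg)
	return ch
}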
83 lines of commented-out CSV parsing code remain in the codebase. Dead code like this adds noise, obscures the live logic, and should be deleted outright; version control already preserves the history.
In bridge/main.go:38:
var wg sync.WaitGroup
This package-level variable works, but it would be better encapsulated as a field on a service struct.
Across services, there are at least 3 different error handling patterns:
Silent continuation (bridge/main.go:35-37):
if err != nil {
log.Printf("Error parsing JSON: %v", err)
return // or continue
}
Panic on error (bridge/main.go:169-171):
if token := client.Connect(); token.Wait() && token.Error() != nil {
panic(token.Error())
}
Fatal termination (server/main.go:60-62):
if err != nil {
log.Fatalf("Failed to open database connection: %v\n", err)
}
Impact: Inconsistent behavior makes debugging difficult and error handling unpredictable.
All main functions are doing too much: configuration loading, logger setup, signal handling, Kafka wiring, and business logic are interleaved in a single function.
Impact: Hard to test, hard to reason about, high cyclomatic complexity.
Each service manually creates Kafka readers and writers, wires channels, and launches consumers. This is a perfect candidate for an abstraction.
- server/main.go:75: Hardcoded config file path "/app/cmd/server/config.json"
- bridge/main.go:227: Hardcoded MQTT topic "publish_out/#"
- server/main.go:238: Hardcoded ping ticker calculation (60 * 9) / 10 * time.Second
- server/main.go:147: Hardcoded beacon ticker 2 * time.Second

Impact: Difficult to configure without code changes.
internal/pkg/model/parser.go:74:
// TODO: change this to be dynamic, maybe event is interface with no predefined properties
This should be addressed to make the parser more flexible.
In location/main.go:113-119:
locList := make(map[string]float64)
for _, metric := range beacon.BeaconMetrics {
res := seenW + (rssiW * (1.0 - (float64(metric.RSSI) / -100.0)))
locList[metric.Location] += res
}
If BeaconMetrics grows unbounded, this could become a performance issue; however, the current implementation bounds it via the BeaconMetricSize setting.
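For intuition, a worked example of the scoring formula. The weights seenW = 0.5 and rssiW = 0.5 are illustrative only; the real values come from configuration:

// Illustrative weights; real values come from the service configuration.
seenW, rssiW := 0.5, 0.5
_ = seenW + rssiW*(1.0-(float64(-60)/-100.0)) // RSSI -60 dBm: 0.5 + 0.5*0.4 = 0.70
_ = seenW + rssiW*(1.0-(float64(-30)/-100.0)) // RSSI -30 dBm: 0.5 + 0.5*0.7 = 0.85

Stronger (less negative) signals score higher, and repeated sightings at the same location accumulate in locList.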
File: internal/pkg/server/service.go
package server
import (
"context"
"log/slog"
"os/signal"
"sync"
"syscall"
)
type Service struct {
name string
cfg Config
logger *slog.Logger
ctx context.Context
cancel context.CancelFunc
wg sync.WaitGroup
kafkaMgr *KafkaManager
}
func NewService(name string, cfg Config) (*Service, error) {
// Initialize logger
// Setup signal handling
// Create Kafka manager
}
func (s *Service) Logger() *slog.Logger {
return s.logger
}
func (s *Service) Context() context.Context {
return s.ctx
}
func (s *Service) WaitGroup() *sync.WaitGroup {
return &s.wg
}
func (s *Service) Start() {
// Start event loop
}
func (s *Service) Shutdown() {
// Handle graceful shutdown
}
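A possible body for NewService, assuming the proposed Config carries a LogPath field and reusing the InitLogger helper sketched below; signal.NotifyContext wires shutdown in one line:

func NewService(name string, cfg Config) (*Service, error) {
	// InitLogger is the shared helper proposed in logger.go below.
	// The returned io.Closer should be retained for Shutdown; elided here.
	logger, _, err := InitLogger(cfg.LogPath) // LogPath is an assumed Config field
	if err != nil {
		return nil, err
	}
	// Cancel the service context on SIGINT/SIGTERM.
	ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
	return &Service{
		name:     name,
		cfg:      cfg,
		logger:   logger,
		ctx:      ctx,
		cancel:   cancel,
		kafkaMgr: &KafkaManager{},
	}, nil
}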
Benefits: lifecycle code lives in one place, so a change to logging or signal handling touches one file instead of four.
File: internal/pkg/server/logger.go
package server
import (
"io"
"log/slog"
"os"
)
func InitLogger(logPath string) (*slog.Logger, io.Closer, error) {
logFile, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
return nil, nil, err
}
w := io.MultiWriter(os.Stderr, logFile)
logger := slog.New(slog.NewJSONHandler(w, nil))
slog.SetDefault(logger)
return logger, logFile, nil
}
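Call-site usage might look like this (the package name and log path are placeholders):

logger, closer, err := server.InitLogger("bridge.log")
if err != nil {
	log.Fatalf("init logger: %v", err)
}
defer closer.Close()
logger.Info("service starting", "name", "bridge")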
Benefits: a single logging setup shared by all services, so format or destination changes happen in one place.
File: internal/pkg/server/kafka.go
package server
import (
"context"
"sync"
"github.com/segmentio/kafka-go"
)
type KafkaManager struct {
readers []*kafka.Reader
writers []*kafka.Writer
lock sync.RWMutex
}
func (km *KafkaManager) CreateReader(url, topic, groupID string) *kafka.Reader
func (km *KafkaManager) CreateWriter(url, topic string) *kafka.Writer
// Note: Go methods cannot take type parameters, so StartConsumer must be a
// package-level generic function rather than a method on KafkaManager.
func StartConsumer[T any](ctx context.Context, reader *kafka.Reader, ch chan<- T)
func (km *KafkaManager) Close()
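A sketch of CreateReader and the generic consumer loop, assuming message values are JSON-encoded (the existing kafkaclient.Consume may differ); the loop additionally needs "encoding/json" and "log/slog":

func (km *KafkaManager) CreateReader(url, topic, groupID string) *kafka.Reader {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{url},
		Topic:   topic,
		GroupID: groupID,
	})
	// Track the reader so Close() can clean it up.
	km.lock.Lock()
	km.readers = append(km.readers, r)
	km.lock.Unlock()
	return r
}

// StartConsumer drains reader into ch until ctx is cancelled, decoding
// each message value as JSON into T.
func StartConsumer[T any](ctx context.Context, reader *kafka.Reader, ch chan<- T) {
	for {
		msg, err := reader.ReadMessage(ctx)
		if err != nil {
			return // context cancelled or reader closed
		}
		var v T
		if err := json.Unmarshal(msg.Value, &v); err != nil {
			slog.Error("decode kafka message", "topic", msg.Topic, "error", err)
			continue
		}
		select {
		case ch <- v:
		case <-ctx.Done():
			return
		}
	}
}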
Benefits: removes the per-service channel/WaitGroup/consumer boilerplate and gives readers and writers a single owner for cleanup.
Current Issues:
Refactored Structure:
cmd/bridge/
├── main.go (50 lines - just setup)
├── service.go (BridgeService struct)
├── mqtthandler/
│ ├── handler.go (MQTT message handling)
│ └── parser.go (Parse MQTT messages)
└── kafkaevents/
└── handlers.go (Kafka event handlers)
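With this structure and the Phase 1 framework in place, bridge's main.go could shrink to roughly the following sketch (exact names track the proposals above; the config/service types would need to be reconciled):

package main

import (
	"log"

	"github.com/AFASystems/presence/internal/pkg/config"
	"github.com/AFASystems/presence/internal/pkg/server"
)

func main() {
	cfg, err := config.LoadBridge()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	// Assumes NewService accepts the bridge config.
	svc, err := server.NewService("bridge", cfg)
	if err != nil {
		log.Fatalf("init service: %v", err)
	}
	defer svc.Shutdown()

	svc.Start() // blocks until SIGINT/SIGTERM cancels the service context
}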
Actions:
Current Issues:
Refactored Structure:
cmd/decoder/
├── main.go (30 lines - just setup)
├── service.go (DecoderService struct)
├── processor/
│ ├── beacon.go (Beacon decoding logic)
│ └── registry.go (Parser registry management)
└── kafkaevents/
└── handlers.go (Kafka event handlers)
Actions:
- Move decodeBeacon logic to the processor package

Current Issues:
Refactored Structure:
cmd/location/
├── main.go (30 lines - just setup)
├── service.go (LocationService struct)
├── algorithms/
│ ├── interface.go (LocationAlgorithm interface)
│ ├── filter.go (Current filter algorithm)
│ └── ai.go (Future AI algorithm)
└── beacon/
└── tracker.go (Beacon tracking logic)
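The interface in algorithms/interface.go might look like this; the exact beacon type name and method shape are assumptions:

package algorithms

import "github.com/AFASystems/presence/internal/pkg/model"

// LocationAlgorithm turns a beacon's recent metrics into a resolved
// location. filter.go implements it with the current weighted-RSSI
// scoring; ai.go can implement it later without touching the tracker.
type LocationAlgorithm interface {
	// Resolve returns the best-scoring location for the beacon.
	Resolve(beacon model.Beacon) (location string, score float64)
}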
Actions:
Current Issues:
Refactored Structure:
cmd/server/
├── main.go (40 lines - just setup)
├── service.go (ServerService struct)
├── http/
│ ├── server.go (HTTP server setup)
│ ├── routes.go (Route registration)
│ └── middleware.go (CORS, logging, etc.)
├── websocket/
│ ├── handler.go (WebSocket upgrade)
│ ├── writer.go (WebSocket write logic)
│ └── reader.go (WebSocket read logic)
└── kafkaevents/
└── handlers.go (Kafka event handlers)
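As one example, the CORS middleware in http/middleware.go could be a standard net/http wrapper (package name and header values are placeholders):

package httpserver // lives in cmd/server/http; package name is illustrative

import "net/http"

// CORS adds permissive cross-origin headers and short-circuits preflight
// requests. Tighten Allow-Origin for production use.
func CORS(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "*")
		w.Header().Set("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next.ServeHTTP(w, r)
	})
}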
Actions:
File: internal/pkg/errors/errors.go
package errors
import (
"fmt"
"log/slog"
)
// Wrap wraps an error with context
func Wrap(err error, message string) error {
return fmt.Errorf("%s: %w", message, err)
}
// LogAndReturn logs an error and returns it
func LogAndReturn(err error, message string) error {
slog.Error(message, "error", err)
return fmt.Errorf("%s: %w", message, err)
}
// Must panics if err is not nil (for initialization only)
func Must(err error, message string) {
if err != nil {
panic(fmt.Sprintf("%s: %v", message, err))
}
}
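Usage under the policy that follows might look like this (the surrounding handler and openDB are hypothetical):

// In an event loop: log and return; the consumer loop continues.
if err := json.Unmarshal(raw, &adv); err != nil {
	return errors.LogAndReturn(err, "parse beacon advertisement")
}

// At startup: fail fast before serving traffic.
db, err := openDB() // openDB is hypothetical
errors.Must(err, "open database connection")

Note that naming the package errors shadows the standard library's errors package; importing it under an alias such as apperrors avoids confusion at call sites.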
Policy:
- Use LogAndReturn for recoverable errors in event loops
- Use Must for initialization failures that prevent startup
- Use Wrap to add context to errors before returning

File: internal/pkg/config/bridge.go (one per service)
package config
type BridgeConfig struct {
// Kafka settings
KafkaURL string
// MQTT settings
MQTTUrl string
MQTTPort int
MQTTTopics []string
MQTTClientID string
// Logging
LogPath string
// Channels
ChannelBuffer int
}
func LoadBridge() (*BridgeConfig, error) {
cfg := Load() // Load base config
return &BridgeConfig{
KafkaURL: cfg.KafkaURL,
MQTTUrl: cfg.MQTTHost,
MQTTPort: 1883,
MQTTTopics: []string{"publish_out/#"},
MQTTClientID: "go_mqtt_client",
LogPath: "server.log",
ChannelBuffer: 200,
}, nil
}
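A consuming service would then read every tunable from the struct instead of a literal, e.g.:

cfg, err := config.LoadBridge()
if err != nil {
	log.Fatalf("load bridge config: %v", err)
}
chRaw := make(chan model.BeaconAdvertisement, cfg.ChannelBuffer)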
Benefits: the hardcoded values called out above (topics, ports, paths, buffer sizes) move into one typed, discoverable place per service.
Create interfaces for all external dependencies:
- MQTTClient interface
- KafkaReader interface
- KafkaWriter interface
- Database interface

Benefits: dependencies can be faked in unit tests and swapped without touching business logic, as in the sketch below.
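For example, a minimal KafkaWriter interface plus an in-memory fake for tests (names are illustrative; the method signature matches *kafka.Writer from segmentio/kafka-go):

// KafkaWriter is the subset of *kafka.Writer the services actually use.
type KafkaWriter interface {
	WriteMessages(ctx context.Context, msgs ...kafka.Message) error
}

// fakeWriter records messages instead of hitting a broker.
type fakeWriter struct{ msgs []kafka.Message }

func (f *fakeWriter) WriteMessages(_ context.Context, msgs ...kafka.Message) error {
	f.msgs = append(f.msgs, msgs...)
	return nil
}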
Target coverage: 70%+
Priority:
File: cmd/bridge/main.go:76-103
File: cmd/bridge/main.go:25
var wg sync.WaitGroup

File: internal/pkg/model/parser.go:74
Current: Arbitrary channel buffer sizes (200, 500, 2000)
Proposed: shared constants in internal/pkg/config/constants.go:
const (
DefaultChannelBuffer = 200
LargeChannelBuffer = 2000
)
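Call sites then reference the named constant instead of a magic number:

chRaw := make(chan model.BeaconAdvertisement, config.LargeChannelBuffer)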
Current: Some I/O operations have no timeout. Examples:
- bridge/main.go:69: Kafka write has no timeout
- bridge/main.go:158: MQTT connection has no explicit timeout

Action: Add timeouts to all I/O operations:
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
err = writer.WriteMessages(ctx, msg)
The current codebase suffers from significant duplication and lacks clear architectural boundaries. By implementing this refactoring plan incrementally, you can eliminate the duplicated lifecycle and Kafka boilerplate, make error handling consistent, and leave each service small enough to test and reason about.
The key is to refactor incrementally while maintaining backward compatibility and adding tests at each step.
Recommended First Step: Begin with Phase 1.1 (Service Lifecycle Framework) as it provides the foundation for all other refactoring work.