Refactoring Plan for AFASystems Presence Detection System

Date: 2026-01-16
Total Codebase: ~3,391 lines of Go code across 4 services

Executive Summary

After analyzing the codebase across the 4 main services (bridge, decoder, location, server), I’ve identified significant code duplication, inconsistent patterns, and maintenance challenges. This document outlines a structured refactoring approach to improve maintainability, reduce duplication, and establish clear architectural patterns.


Critical Issues Identified

1. Massive Code Duplication (Priority: HIGH)

Problem: Identical Boilerplate in All Services

All 4 services (bridge/main.go:118-131, decoder/main.go:36-44, location/main.go:31-39, server/main.go:45-53) contain identical code for:

  • Log file creation
  • Multi-writer setup (stderr + file)
  • Logger initialization with JSON handler
  • Context setup with signal handling

Impact: Any change to logging or signal handling requires updating 4 files.

Duplication Factor: ~60 lines × 4 services = 240 lines of duplicated code

Problem: Kafka Consumer Pattern Duplication

Each service manually creates a channel, adds to the wait group, and starts a consumer, always in the same pattern:

chRaw := make(chan model.BeaconAdvertisement, 2000)
wg.Add(1)
go kafkaclient.Consume(rawReader, chRaw, ctx, &wg)

This pattern appears in bridge/main.go:147-154, decoder/main.go:57-62, location/main.go:55-60, server/main.go:110-115.
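
All four call sites could collapse into one generic helper; a minimal sketch, assuming kafkaclient.Consume is (or can be made) generic over the message type (the helper name is illustrative):

// Hypothetical generic helper: one place for the make-channel / wg.Add /
// go Consume boilerplate repeated in every service.
func startConsumer[T any](ctx context.Context, wg *sync.WaitGroup, reader *kafka.Reader, buffer int) <-chan T {
    ch := make(chan T, buffer)
    wg.Add(1)
    go kafkaclient.Consume(reader, ch, ctx, wg)
    return ch
}

Each call site then shrinks to chRaw := startConsumer[model.BeaconAdvertisement](ctx, &wg, rawReader, 2000). Phase 1.3 below builds this idea into a KafkaManager.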


2. Dead Code (Priority: MEDIUM)

Problem: Commented-out Code in bridge/main.go:76-103

Roughly 28 lines of commented-out CSV parsing code remain in the codebase. This:

  • Reduces readability
  • Creates confusion about what functionality is active
  • Should be removed or moved to version control history

Problem: Package-Level Mutable State

In bridge/main.go:38:

var wg sync.WaitGroup

This package-level variable is in use, but it is global mutable state and would be better placed as a field on a service struct.


3. Inconsistent Error Handling (Priority: HIGH)

Problem: Mixed Error Handling Patterns

Across services, there are at least 3 different error handling patterns:

  1. Log-and-continue (bridge/main.go:35-37):

    if err != nil {
        log.Printf("Error parsing JSON: %v", err)
        return  // or continue
    }
    
  2. Panic on error (bridge/main.go:169-171):

    if token := client.Connect(); token.Wait() && token.Error() != nil {
        panic(token.Error())
    }
    
  3. Fatal termination (server/main.go:60-62):

    if err != nil {
        log.Fatalf("Failed to open database connection: %v\n", err)
    }
    

Impact: Inconsistent behavior makes debugging difficult and error handling unpredictable.


4. Monolithic main() Functions (Priority: HIGH)

Problem: Single Large Function Does Everything

All main functions are doing too much:

  • bridge/main.go:118-224 (106 lines): Setup, MQTT connection, event loop, Kafka handling, shutdown
  • server/main.go:41-219 (178 lines): DB setup, Kafka setup, HTTP server, WebSocket, event loop, shutdown
  • decoder/main.go:27-91 (64 lines): Kafka setup, parser registry, event loop, processing
  • location/main.go:26-90 (64 lines): Kafka setup, ticker management, event loop, location algorithm

Impact: Hard to test, hard to reason about, high cyclomatic complexity.


5. Lack of Abstraction for Common Patterns (Priority: MEDIUM)

Problem: No Service Lifecycle Management

Each service manually:

  1. Creates logger
  2. Sets up signal context
  3. Creates Kafka readers/writers
  4. Starts consumers
  5. Runs event loop
  6. Handles shutdown
  7. Closes Kafka connections

This is a perfect candidate for an abstraction.


6. Hardcoded Configuration (Priority: MEDIUM)

Problem: Hardcoded Paths and Values

  • server/main.go:75: Hardcoded config file path "/app/cmd/server/config.json"
  • bridge/main.go:227: Hardcoded MQTT topic "publish_out/#"
  • server/main.go:238: Hardcoded ping ticker calculation (60 * 9) / 10 * time.Second (i.e. 54 seconds)
  • server/main.go:147: Hardcoded beacon ticker 2 * time.Second

Impact: Difficult to configure without code changes.


7. Missing TODO Resolution (Priority: LOW)

Outstanding TODO

internal/pkg/model/parser.go:74:

// TODO: change this to be dynamic, maybe event is interface with no predefined properties

This should be addressed to make the parser more flexible.


8. Inefficient Memory Usage (Priority: LOW)

Problem: Unbounded Map Growth Potential

In location/main.go:113-119:

// Score each candidate location; a stronger (less negative) RSSI adds more
// weight, e.g. RSSI = -60 contributes seenW + 0.4*rssiW.
locList := make(map[string]float64)
for _, metric := range beacon.BeaconMetrics {
    res := seenW + (rssiW * (1.0 - (float64(metric.RSSI) / -100.0)))
    locList[metric.Location] += res
}

If BeaconMetrics grows unbounded, this could become a performance issue. However, the current implementation limits growth via the BeaconMetricSize setting.


Refactoring Recommendations

Phase 1: Create Common Infrastructure (Immediate)

1.1 Create Service Lifecycle Framework

File: internal/pkg/server/service.go

package server

import (
    "context"
    "io"
    "log/slog"
    "os"
    "os/signal"
    "sync"
    "syscall"
)

// Config carries the settings every service needs; Phase 4 extends it per
// service. The fields shown here are a minimal assumed set.
type Config struct {
    LogPath  string
    KafkaURL string
}

type Service struct {
    name     string
    cfg      Config
    logger   *slog.Logger
    logFile  io.Closer
    ctx      context.Context
    cancel   context.CancelFunc
    wg       sync.WaitGroup
    kafkaMgr *KafkaManager
}

func NewService(name string, cfg Config) (*Service, error) {
    logger, logFile, err := InitLogger(cfg.LogPath) // see 1.2 below
    if err != nil {
        return nil, err
    }

    // Cancel the context on SIGINT/SIGTERM so consumers can drain gracefully.
    ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)

    return &Service{
        name:     name,
        cfg:      cfg,
        logger:   logger,
        logFile:  logFile,
        ctx:      ctx,
        cancel:   cancel,
        kafkaMgr: &KafkaManager{}, // see 1.3 below
    }, nil
}

func (s *Service) Logger() *slog.Logger { return s.logger }

func (s *Service) Context() context.Context { return s.ctx }

func (s *Service) WaitGroup() *sync.WaitGroup { return &s.wg }

func (s *Service) Start() {
    // Service-specific event loop goes here (each service supplies its own).
}

// Shutdown cancels the context, waits for in-flight consumers, and releases
// Kafka connections and the log file.
func (s *Service) Shutdown() {
    s.cancel()
    s.wg.Wait()
    s.kafkaMgr.Close()
    s.logFile.Close()
}

Benefits:

  • Single place for lifecycle management
  • Consistent startup/shutdown across all services
  • Easier testing with mock dependencies
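
With the framework in place, a service main could shrink to something like this sketch (the config literal is illustrative):

func main() {
    svc, err := server.NewService("bridge", server.Config{LogPath: "server.log"})
    if err != nil {
        log.Fatalf("failed to initialize service: %v", err)
    }
    defer svc.Shutdown()
    svc.Start() // service-specific event loop
}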

1.2 Extract Logger Initialization

File: internal/pkg/server/logger.go

package server

import (
    "io"
    "log"
    "log/slog"
    "os"
)

func InitLogger(logPath string) (*slog.Logger, io.Closer, error) {
    logFile, err := os.OpenFile(logPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
    if err != nil {
        return nil, nil, err
    }

    w := io.MultiWriter(os.Stderr, logFile)
    logger := slog.New(slog.NewJSONHandler(w, nil))
    slog.SetDefault(logger)

    return logger, logFile, nil
}

Benefits:

  • Reusable across all services
  • Consistent logging format
  • Easier to change logging strategy
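
A usage sketch (the log path is illustrative):

logger, logFile, err := server.InitLogger("server.log")
if err != nil {
    log.Fatalf("failed to initialize logger: %v", err)
}
defer logFile.Close()
logger.Info("service starting", "name", "bridge")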

1.3 Create Kafka Manager

File: internal/pkg/server/kafka.go

package server

import (
    "context"
    "sync"

    "github.com/segmentio/kafka-go"
)

type KafkaManager struct {
    readers []*kafka.Reader
    writers []*kafka.Writer
    lock    sync.RWMutex
}

func (km *KafkaManager) CreateReader(url, topic, groupID string) *kafka.Reader
func (km *KafkaManager) CreateWriter(url, topic string) *kafka.Writer
func (km *KafkaManager) Close()

// Go does not allow type parameters on methods, so the generic consumer
// helper is a package-level function that takes the manager explicitly.
func StartConsumer[T any](km *KafkaManager, reader *kafka.Reader, ch chan<- T, ctx context.Context)

Benefits:

  • Centralized Kafka lifecycle management
  • Type-safe consumer creation
  • Automatic cleanup on shutdown
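
With the manager in place, the duplicated pattern from issue 1 reduces to a sketch like the following (topic and group names are placeholders):

reader := km.CreateReader(cfg.KafkaURL, "beacon-raw", "decoder-group") // placeholder names
chRaw := make(chan model.BeaconAdvertisement, 2000)
server.StartConsumer(km, reader, chRaw, ctx)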

Phase 2: Refactor Individual Services (Short-term)

2.1 Bridge Service Refactoring

Current Issues:

  • Large monolithic main (106 lines)
  • MQTT handler mixed with Kafka logic
  • Commented dead code

Refactored Structure:

cmd/bridge/
├── main.go                 (50 lines - just setup)
├── service.go              (BridgeService struct)
├── mqtthandler/
│   ├── handler.go          (MQTT message handling)
│   └── parser.go           (Parse MQTT messages)
└── kafkaevents/
    └── handlers.go         (Kafka event handlers)

Actions:

  1. Remove dead code (lines 76-103)
  2. Extract MQTT handling to separate package
  3. Create BridgeService struct with lifecycle methods
  4. Use common Service framework from Phase 1

2.2 Decoder Service Refactoring

Current Issues:

  • Processing logic mixed with event loop
  • Parser registry embedded in main

Refactored Structure:

cmd/decoder/
├── main.go                 (30 lines - just setup)
├── service.go              (DecoderService struct)
├── processor/
│   ├── beacon.go           (Beacon decoding logic)
│   └── registry.go         (Parser registry management)
└── kafkaevents/
    └── handlers.go         (Kafka event handlers)

Actions:

  1. Extract decodeBeacon logic to processor package
  2. Create Processor interface for different beacon types (see the sketch after this list)
  3. Separate parser registry into its own file
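
A possible shape for that interface, sketched below (method names are assumptions; model.BeaconEvent refers to the existing event type):

// Hypothetical contract for per-beacon-type decoders; the parser registry
// would hold one Processor per supported beacon type.
type Processor interface {
    CanHandle(adv model.BeaconAdvertisement) bool
    Decode(adv model.BeaconAdvertisement) (model.BeaconEvent, error)
}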

2.3 Location Service Refactoring

Current Issues:

  • Location algorithm embedded in event loop
  • No abstraction for different algorithms

Refactored Structure:

cmd/location/
├── main.go                 (30 lines - just setup)
├── service.go              (LocationService struct)
├── algorithms/
│   ├── interface.go        (LocationAlgorithm interface)
│   ├── filter.go           (Current filter algorithm)
│   └── ai.go               (Future AI algorithm)
└── beacon/
    └── tracker.go          (Beacon tracking logic)

Actions:

  1. Define LocationAlgorithm interface (sketched after this list)
  2. Move filter algorithm to separate file
  3. Add factory pattern for algorithm selection
  4. Extract beacon tracking logic
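
A minimal sketch of the interface (method names and the metric slice type are assumptions):

// Hypothetical algorithm contract; the current filter and a future AI
// implementation would both satisfy it, selected via a factory.
type LocationAlgorithm interface {
    Name() string
    Locate(metrics []model.BeaconMetric) (location string, score float64)
}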

2.4 Server Service Refactoring

Current Issues:

  • Largest main function (178 lines)
  • Mixed concerns: HTTP, WebSocket, Kafka, Database
  • Deeply nested handler setup

Refactored Structure:

cmd/server/
├── main.go                 (40 lines - just setup)
├── service.go              (ServerService struct)
├── http/
│   ├── server.go           (HTTP server setup)
│   ├── routes.go           (Route registration)
│   └── middleware.go       (CORS, logging, etc.)
├── websocket/
│   ├── handler.go          (WebSocket upgrade)
│   ├── writer.go           (WebSocket write logic)
│   └── reader.go           (WebSocket read logic)
└── kafkaevents/
    └── handlers.go         (Kafka event handlers)

Actions:

  1. Extract HTTP server to separate package
  2. Move WebSocket logic to dedicated package
  3. Create route registration table (see the sketch after this list)
  4. Separate Kafka event handlers
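
The registration table could be as simple as this sketch (patterns and handlers are placeholders; the real ones come from the inline handler setup currently in server/main.go):

// Hypothetical central route table; registering routes in one loop keeps
// middleware and path changes in a single place.
func registerRoutes(mux *http.ServeMux, routes map[string]http.HandlerFunc) {
    for pattern, handler := range routes {
        mux.HandleFunc(pattern, handler)
    }
}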

Phase 3: Standardize Error Handling (Medium-term)

3.1 Define Error Handling Policy

File: internal/pkg/errors/errors.go

package errors

import (
    "fmt"
    "log/slog"
)

// Wrap wraps an error with context
func Wrap(err error, message string) error {
    return fmt.Errorf("%s: %w", message, err)
}

// LogAndReturn logs an error and returns it
func LogAndReturn(err error, message string) error {
    slog.Error(message, "error", err)
    return fmt.Errorf("%s: %w", message, err)
}

// Must panics if err is not nil (for initialization only)
func Must(err error, message string) {
    if err != nil {
        panic(fmt.Sprintf("%s: %v", message, err))
    }
}

Policy:

  • Use LogAndReturn for recoverable errors in event loops
  • Use Must for initialization failures that prevent startup
  • Use Wrap to add context to errors before returning
  • Never use silent log-and-continue without explicit comments
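
Applied in practice, the policy looks like this sketch (handleAdvertisement and decode are placeholders, not existing functions):

// In a helper called from an event loop: recoverable, so log, wrap, and propagate.
func handleAdvertisement(adv model.BeaconAdvertisement) error {
    if err := decode(adv); err != nil {
        return errors.LogAndReturn(err, "decode beacon advertisement")
    }
    return nil
}

// During startup: the service cannot run without the database, so fail fast.
errors.Must(dbErr, "open database connection")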

Phase 4: Configuration Management (Medium-term)

4.1 Centralize Configuration

File: internal/pkg/config/bridge.go (one per service)

package config

type BridgeConfig struct {
    // Kafka settings
    KafkaURL string

    // MQTT settings
    MQTTUrl      string
    MQTTPort     int
    MQTTTopics   []string
    MQTTClientID string

    // Logging
    LogPath string

    // Channels
    ChannelBuffer int
}

func LoadBridge() (*BridgeConfig, error) {
    cfg := Load() // Load base config

    return &BridgeConfig{
        KafkaURL:      cfg.KafkaURL,
        MQTTUrl:       cfg.MQTTHost,
        MQTTPort:      1883,
        MQTTTopics:    []string{"publish_out/#"},
        MQTTClientID:  "go_mqtt_client",
        LogPath:       "server.log",
        ChannelBuffer: 200,
    }, nil
}

Benefits:

  • No more hardcoded values
  • Easy to add environment variable overrides (example after this list)
  • Clear configuration schema per service
  • Easier testing with different configs
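
For instance, LoadBridge could apply overrides after loading the defaults; a sketch (the KAFKA_URL variable name is illustrative):

// Apply environment overrides after loading file defaults.
if v := os.Getenv("KAFKA_URL"); v != "" {
    cfg.KafkaURL = v
}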

Phase 5: Testing Infrastructure (Long-term)

5.1 Add Interface Definitions

Create interfaces for all external dependencies:

  • MQTTClient interface
  • KafkaReader interface
  • KafkaWriter interface
  • Database interface
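
For example, a narrow reader interface covering only the calls the services make (the method set mirrors kafka-go's Reader):

// KafkaReader abstracts *kafka.Reader so tests can substitute a fake.
type KafkaReader interface {
    ReadMessage(ctx context.Context) (kafka.Message, error)
    Close() error
}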

Benefits:

  • Easy to mock for testing
  • Clear contracts between components
  • Better documentation

5.2 Add Unit Tests

Target coverage: 70%+

Priority:

  1. Business logic (location algorithms, beacon parsing)
  2. Service lifecycle (startup, shutdown)
  3. Error handling paths
  4. Kafka message processing
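
As a concrete starting point, a small unit test over the scoring formula from location/main.go, assuming the formula is extracted into a pure function during Phase 2.3:

package algorithms

import "testing"

// score mirrors the weighting from location/main.go:113-119, extracted
// into a pure function so it can be tested without Kafka.
func score(seenW, rssiW float64, rssi int) float64 {
    return seenW + rssiW*(1.0-float64(rssi)/-100.0)
}

func TestScorePrefersStrongerRSSI(t *testing.T) {
    strong := score(1.0, 1.0, -50) // nearer beacon: 1.5
    weak := score(1.0, 1.0, -90)   // farther beacon: 1.1
    if strong <= weak {
        t.Fatalf("expected stronger RSSI to score higher, got %v <= %v", strong, weak)
    }
}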

Specific Code Improvements

Remove Dead Code

File: cmd/bridge/main.go:76-103

  • Action: Delete the commented-out CSV code (lines 76-103)
  • Reason: Dead code; it remains available in git history if needed

Fix Package-Level Variables

File: cmd/bridge/main.go:25

  • Current: var wg sync.WaitGroup
  • Action: Move to BridgeService struct field
  • Reason: Avoid global state, enable multiple service instances

Resolve TODO

File: internal/pkg/model/parser.go:74

  • Current: Hardcoded beacon event structure
  • Action: Make BeaconEvent use a flexible map or interface (one shape is sketched below)
  • Reason: Support different beacon types without struct changes
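
One possible shape, assuming the envelope keeps a few typed fields and moves parser-specific data into a map (field names are illustrative, not taken from the current struct):

// Envelope with typed common fields plus a free-form, parser-specific payload.
type BeaconEvent struct {
    MAC        string         `json:"mac"`
    Timestamp  time.Time      `json:"timestamp"`
    Properties map[string]any `json:"properties"`
}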

Improve Channel Buffering

Current: Arbitrary, undocumented channel buffer sizes (200, 500, 2000)

  • Action: Define constant or configuration value
  • File: internal/pkg/config/constants.go

const (
    DefaultChannelBuffer = 200
    LargeChannelBuffer   = 2000
)

Add Context Timeouts

Current: Some operations have no timeout. Examples:

  • bridge/main.go:69: Kafka write has no timeout
  • bridge/main.go:158: MQTT connection has no explicit timeout

Action: Add timeouts to all I/O operations

// Bound the Kafka write; the 30-second value is illustrative, tune per operation.
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
err = writer.WriteMessages(ctx, msg)

Implementation Priority

Week 1: Foundation

  1. Create service lifecycle framework
  2. Extract logger initialization
  3. Remove dead code from bridge

Week 2-3: Service Refactoring

  1. Refactor bridge service
  2. Refactor decoder service
  3. Refactor location service
  4. Refactor server service

Week 4: Error Handling & Config

  1. Standardize error handling
  2. Centralize configuration
  3. Add configuration validation

Week 5+: Testing & Documentation

  1. Add unit tests for core logic
  2. Add integration tests
  3. Update documentation
  4. Create architecture diagrams

Success Metrics

Code Quality

  • Before: 240 lines of duplicated code
  • After: < 50 lines of shared infrastructure
  • Reduction: ~80% less duplication

Maintainability

  • Before: Changes require updating 4 files
  • After: Changes to shared code update once
  • Impact: Faster development, fewer bugs

Testing

  • Before: No unit tests (based on provided files)
  • After: 70%+ code coverage
  • Impact: Catches regressions early

File Sizes

  • Before: main.go files 106-178 lines
  • After: main.go files < 50 lines
  • Impact: Easier to understand, better separation of concerns

Migration Strategy

Incremental Refactoring

  1. DO NOT rewrite everything at once
  2. Extract common code without changing behavior
  3. Add tests before refactoring
  4. Run existing tests after each change
  5. Use feature flags for major changes

Backward Compatibility

  • Keep Kafka topic names unchanged
  • Keep API endpoints unchanged
  • Keep database schema unchanged
  • Allow old and new code to coexist during migration

Testing During Migration

  1. Run existing services in parallel
  2. Compare outputs
  3. Load test with production-like traffic
  4. Monitor for differences
  5. Gradual rollout

Additional Recommendations

Documentation

  1. Add godoc comments to all exported functions
  2. Create architecture diagrams showing data flow
  3. Document Kafka message formats
  4. Add runbook for common operations

Monitoring

  1. Add Prometheus metrics
  2. Add structured logging with correlation IDs
  3. Add health check endpoints
  4. Add performance tracing

Development Workflow

  1. Add pre-commit hooks
  2. Add linting (golangci-lint)
  3. Add formatting checks (gofmt, goimports)
  4. Add dependency scanning

Conclusion

The current codebase suffers from significant duplication and lacks clear architectural boundaries. By implementing this refactoring plan incrementally, you can:

  1. Reduce duplication by 80% through shared infrastructure
  2. Improve maintainability through consistent patterns
  3. Enable testing through proper abstractions
  4. Reduce bugs through standardized error handling
  5. Accelerate development through clearer structure

The key is to refactor incrementally while maintaining backward compatibility and adding tests at each step.


Next Steps

  1. Review this document with your team
  2. Prioritize phases based on your pain points
  3. Create tracking issues for each phase
  4. Start with Phase 1 (common infrastructure)
  5. Measure success using the metrics above

Recommended First Step: Begin with Phase 1.1 (Service Lifecycle Framework) as it provides the foundation for all other refactoring work.