# Code Review -- AFASystems Presence
**Date:** 2026-03-11
**Scope:** All 4 services (`bridge`, `server`, `decoder`, `location`) and every internal package they depend on (~50 Go source files).
**Excluded:** Security (authentication, authorization, TLS hardening) -- per agreement, the system will operate behind a firewall.
---
## Production Readiness Rating: 5.5 / 10
| Category | Score | Notes |
|---|---|---|
| Architecture & layout | 8/10 | Clean microservice split, standard Go project layout, good separation of concerns |
| Error handling | 4/10 | Multiple swallowed errors, panics instead of errors, `fmt.Println` in production paths |
| Resource management | 4/10 | HTTP response bodies leaked, no server timeouts, no request-size limits |
| Concurrency safety | 5/10 | Good use of `sync.RWMutex` in most places, but two confirmed race conditions |
| Observability | 4/10 | Structured logging is mostly in place, but no metrics, no tracing, and health reporting has a memory leak |
| API design | 6/10 | Consistent response helpers and a middleware chain, but no input validation and mixed route naming |
| Testing | 5/10 | Good test structure and SQLite-based controller tests, but test code is stale and e2e is a stub |
| Operational readiness | 5/10 | Dockerfiles and compose exist, but no Makefile, no graceful shutdown timeout, `AutoMigrate` in production |
**Bottom line:** The architecture is solid and the code is well organized. The project needs a focused round of fixes on resource leaks, race conditions, error handling, and observability before it is production ready.
---
## Critical Issues (Must Fix)
### C1. HTTP response bodies are never closed
`apiclient/auth.go`, `apiclient/data.go`, and `apiclient/utils.go` read from `res.Body` but never call `res.Body.Close()`. Each leaked body pins an open TCP connection and a file descriptor. Under sustained load the process will exhaust its file descriptor limit, after which every new connection and file operation fails.
```go
// apiclient/auth.go -- current
res, err := client.Do(req)
if err != nil { ... }
var j response
json.NewDecoder(res.Body).Decode(&j) // Body never closed

// fix: add defer immediately after the error check
res, err := client.Do(req)
if err != nil { ... }
defer res.Body.Close()
```
The same pattern applies to every function in `data.go` (`GetTrackers`, `GetGateways`, `GetTrackerZones`, `GetZones`, `InferPosition`) and to `utils.go` (`getRequest` -- either the caller must close the body, or it must be closed inside).
---
### C2. No HTTP server timeouts -- Slowloris vulnerability
`internal/app/server/app.go:121-124`:
```go
a.Server = &http.Server{
    Addr:    a.Cfg.HTTPAddr,
    Handler: a.RegisterRoutes(),
}
```
No `ReadTimeout`, `WriteTimeout`, or `IdleTimeout` is set. A single malicious (or merely slow) client can hold a goroutine and a connection open indefinitely.
**Fix:**
```go
a.Server = &http.Server{
    Addr:         a.Cfg.HTTPAddr,
    Handler:      a.RegisterRoutes(),
    ReadTimeout:  15 * time.Second,
    WriteTimeout: 15 * time.Second,
    IdleTimeout:  60 * time.Second,
}
```
---
### C3. SQL injection risk in `LocationToBeaconServiceAI`
`internal/pkg/service/beacon_service.go:110`:
```go
db.Order(fmt.Sprintf("POW(x - %f, 2) + POW(y - %f, 2)", msg.X, msg.Y)).First(&gw)
```
`msg.X` and `msg.Y` come from a deserialized Kafka message. Because they are `float32` formatted with `%f`, direct injection is unlikely today, but string-building SQL is fragile: any future change to the field types or the format string reopens the hole. Use a parameterized expression via GORM's `clause.OrderBy`:
```go
db.Clauses(clause.OrderBy{
    Expression: clause.Expr{SQL: "POW(x - ?, 2) + POW(y - ?, 2)", Vars: []interface{}{msg.X, msg.Y}},
}).First(&gw)
```
---
### C4. `InsecureSkipVerify` is always `true` despite the config flag
`internal/pkg/location/inference.go:27`:
```go
func NewDefaultInferencer(skipTLSVerify bool) *DefaultInferencer {
    tr := &http.Transport{}
    tr.TLSClientConfig = &tls.Config{InsecureSkipVerify: true} // ignores the parameter!
    ...
}
```
The `skipTLSVerify` parameter is accepted but never used. The same issue exists in `apiclient/updatedb.go:22`. Fix: pass the parameter through (`InsecureSkipVerify: skipTLSVerify`).
---
### C5. `ActiveBeacons` slice grows without bound (memory leak)
`internal/pkg/common/appcontext/health.go:49-53`:
```go
func (b *BaseHealth) GetActiveBeacons(m *AppState) {
    beacons := m.GetAllBeacons()
    for beacon := range beacons {
        b.ActiveBeacons = append(b.ActiveBeacons, beacon)
    }
}
```
This appends on every health tick but never resets the slice, so over hours it grows to millions of duplicate entries.
**Fix:** reset the slice before appending:
```go
func (b *BaseHealth) GetActiveBeacons(m *AppState) {
    beacons := m.GetAllBeacons()
    b.ActiveBeacons = make([]string, 0, len(beacons))
    for beacon := range beacons {
        b.ActiveBeacons = append(b.ActiveBeacons, beacon)
    }
}
```
---
### C6. Race condition on `Settings`
- `AppState.GetSettings()` (`context.go:142`) returns a **pointer** to the settings struct, so any goroutine can read or write the pointed-to fields concurrently.
- `AppState.UpdateSettings()` (`context.go:240`) calls `mapstructure.Decode` on `m.settings` **without holding the mutex**, while `GetSettingsValue()` also reads it without a lock.
**Fix:** hold `m.mu` in `UpdateSettings` and `GetSettingsValue`, and either return a copy from `GetSettings` or document that callers must hold the lock.
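The copy-on-read variant can be sketched as follows; the field names are illustrative, not the project's actual `Settings` struct:

```go
package main

import (
	"fmt"
	"sync"
)

// Settings is a stand-in for the project's settings struct.
type Settings struct {
	Interval int
	Endpoint string
}

type AppState struct {
	mu       sync.RWMutex
	settings Settings
}

// GetSettings returns a copy, so callers can never race on the shared struct.
func (m *AppState) GetSettings() Settings {
	m.mu.RLock()
	defer m.mu.RUnlock()
	return m.settings
}

// UpdateSettings mutates the shared struct only while holding the write lock.
func (m *AppState) UpdateSettings(fn func(*Settings)) {
	m.mu.Lock()
	defer m.mu.Unlock()
	fn(&m.settings)
}

func main() {
	s := &AppState{settings: Settings{Interval: 1, Endpoint: "http://localhost"}}
	s.UpdateSettings(func(st *Settings) { st.Interval = 5 })
	fmt.Println(s.GetSettings().Interval) // prints 5
}
```

Returning a value copy costs one struct copy per read, which is negligible next to the data race it removes.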
---
### C7. Race condition on `ParserRegistry` iteration
`internal/pkg/common/utils/beacons.go:51`:
```go
for name, parser := range parserRegistry.ParserList { ... }
```
This iterates the map without holding the registry's `rw` read lock, while `Register`/`Unregister` modify the map under a write lock -- a textbook data race.
**Fix:** either acquire `parserRegistry.rw.RLock()` around the loop, or iterate over a snapshot copy.
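The snapshot option keeps the critical section tiny. A sketch under assumptions: `snapshot`, `Parser`, and `dummyParser` are illustrative names, not existing identifiers in the registry code:

```go
package main

import (
	"fmt"
	"sync"
)

// Parser is a stand-in for the project's parser interface.
type Parser interface{ Name() string }

type dummyParser struct{ name string }

func (d dummyParser) Name() string { return d.name }

type ParserRegistry struct {
	rw         sync.RWMutex
	ParserList map[string]Parser
}

// snapshot copies the map under the read lock; callers can then range
// over the copy without racing against Register/Unregister.
func (r *ParserRegistry) snapshot() map[string]Parser {
	r.rw.RLock()
	defer r.rw.RUnlock()
	cp := make(map[string]Parser, len(r.ParserList))
	for k, v := range r.ParserList {
		cp[k] = v
	}
	return cp
}

func main() {
	reg := &ParserRegistry{ParserList: map[string]Parser{"ibeacon": dummyParser{"ibeacon"}}}
	for name := range reg.snapshot() { // safe even if Register runs concurrently
		fmt.Println(name)
	}
}
```

Holding `RLock` across the whole parse loop would also be correct, but a snapshot avoids blocking `Register` while parsers run.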
---
### C8. `context.Background()` used where the parent context should be propagated
In three hot paths the parent (cancellable) context is replaced with `context.Background()`, preventing graceful shutdown from cancelling in-flight writes:
| File | Line | Call |
|---|---|---|
| `decoder/process.go` | 62 | `kafkaclient.Write(context.Background(), ...)` |
| `bridge/handler.go` | 53 | `kafkaclient.Write(context.Background(), ...)` |
| `location/filter.go` | 26 | `ctx := context.Background()` |
**Fix:** thread the parent `ctx` through these functions so the shutdown signal propagates.
---
### C9. `getEnvPanic` crashes the process on missing env vars
`internal/pkg/config/config.go:50-55`:
```go
func getEnvPanic(key string) string {
    if v := os.Getenv(key); v != "" { return v }
    panic(fmt.Sprintf("environment variable %s is not set", key))
}
```
A `panic` produces an unstructured stack trace and no chance for cleanup (log flushing, Kafka writer close, etc.). Return an error from the `Load*` functions instead.
---
## High Priority Issues
### H1. No input validation on API endpoints
All controllers (`gateways_controller.go`, `trackers_controller.go`, `zone_controller.go`, etc.) deserialize JSON into model structs and pass them directly to GORM `Create`/`Save` with zero validation. Missing `ID` fields, empty names, malformed MACs, and extremely long strings all go straight to the database.
At minimum, validate required fields and length limits before `db.Create`.
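A minimal hand-rolled sketch of such a check (the `Gateway` fields and limits below are illustrative assumptions, not the project's actual model; a tag-based validator such as `go-playground/validator` would serve the same purpose):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// Gateway is a stand-in for the project's model struct.
type Gateway struct {
	ID   string
	Name string
	MAC  string
}

var macRe = regexp.MustCompile(`^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$`)

// Validate enforces required fields, a length limit, and MAC format
// before the struct is handed to db.Create.
func (g Gateway) Validate() error {
	var problems []string
	if g.ID == "" {
		problems = append(problems, "id is required")
	}
	if g.Name == "" || len(g.Name) > 128 {
		problems = append(problems, "name must be 1-128 characters")
	}
	if !macRe.MatchString(g.MAC) {
		problems = append(problems, "mac must look like AA:BB:CC:DD:EE:FF")
	}
	if len(problems) > 0 {
		return fmt.Errorf("invalid gateway: %s", strings.Join(problems, "; "))
	}
	return nil
}

func main() {
	bad := Gateway{Name: "lobby", MAC: "not-a-mac"}
	fmt.Println(bad.Validate() != nil) // prints true
	good := Gateway{ID: "gw-1", Name: "lobby", MAC: "AA:BB:CC:DD:EE:FF"}
	fmt.Println(good.Validate()) // prints <nil>
}
```

The controller would call `Validate()` right after JSON decoding and return `response.BadRequest` with the joined message on failure.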
---
### H2. `DB.Create` errors silently swallowed during init
`internal/app/server/app.go:89-91`:
```go
for _, c := range configs {
    a.DB.Create(&c) // error discarded
}
```
If any config row fails to insert (e.g. a duplicate primary key), the error is lost. Check and log each result: `if err := a.DB.Create(&c).Error; err != nil { slog.Error(...) }`.
---
### H3. Kafka consumer channel can block indefinitely
`internal/pkg/kafkaclient/consumer.go:34`:
```go
ch <- data
```
If the event-loop goroutine is slow and the channel fills up, the consumer goroutine blocks here forever. It will not respond to `ctx.Done()` because the `select` only checks the context in the outer loop. Options:
- Use a `select` with `ctx.Done()` when sending to the channel.
- Implement backpressure or a drop-oldest strategy.
---
### H4. `context.Context` stored in a struct field
`internal/app/server/app.go:35`:
```go
type ServerApp struct {
    ...
    ctx context.Context
    ...
}
```
Go's own documentation says: "Do not store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it." Here the stored context is handed to every route handler at registration time, so request-scoped contexts (timeouts, cancellation, values) cannot be used. Pass `ctx` as a function parameter instead.
---
### H5. No request body size limits
None of the HTTP handlers call `http.MaxBytesReader`. A single POST with a multi-gigabyte body will be read into memory in full. Add a body-size middleware, or limit in each handler:
```go
r.Body = http.MaxBytesReader(w, r.Body, 1<<20) // 1 MB
```
---
### H6. `twosComp` silently returns a wrong value on bad input
`internal/pkg/common/utils/distance.go:24-26`:
```go
func twosComp(inp string) int64 {
    i, _ := strconv.ParseInt("0x"+inp, 0, 64)
    return i - 256
}
```
If `inp` is not valid hex, `i` is 0 and the function returns -256, producing a nonsensical distance. Check the error.
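A corrected version that keeps the original arithmetic but surfaces the error might look like this (parsing with base 16 directly is equivalent to prefixing `"0x"` with base 0):

```go
package main

import (
	"fmt"
	"strconv"
)

// twosComp converts a hex byte string (e.g. an RSSI byte >= 0x80) to its
// signed value, reporting bad input instead of silently returning -256.
func twosComp(inp string) (int64, error) {
	i, err := strconv.ParseInt(inp, 16, 64)
	if err != nil {
		return 0, fmt.Errorf("twosComp: %q is not valid hex: %w", inp, err)
	}
	return i - 256, nil // same arithmetic as the original
}

func main() {
	v, _ := twosComp("c5") // 0xc5 = 197, 197 - 256 = -59
	fmt.Println(v)         // prints -59 (a typical RSSI)
	if _, err := twosComp("zz"); err != nil {
		fmt.Println("rejected bad input")
	}
}
```

The caller in `distance.go` can then skip the reading (or log it) instead of feeding -256 into the distance model.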
---
### H7. `AutoMigrate` in production
`internal/pkg/database/database.go:29` runs `db.AutoMigrate(...)` on every startup. In production this can silently add columns but never removes or renames them, and it can fail or lock tables depending on database state. Use a versioned migration tool (`golang-migrate`, `goose`, `atlas`).
---
### H8. No graceful HTTP shutdown timeout
`internal/app/server/app.go:141`:
```go
a.Server.Shutdown(context.Background())
```
With `context.Background()`, shutdown waits forever for in-flight requests, and the returned error is discarded. Use a deadline and check the result:
```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
if err := a.Server.Shutdown(ctx); err != nil {
    slog.Error("server shutdown", "error", err)
}
```
---
### H9. `syncTable` breaks on models without a string `ID` field
`internal/pkg/apiclient/updatedb.go:90`:
```go
v := reflect.ValueOf(item).FieldByName("ID").String()
```
If a model passed to `syncTable` lacks a field named `ID`, or its `ID` is not a string, this does not panic -- `reflect.Value.String` is the one getter that never panics. Instead it returns a placeholder such as `"<invalid Value>"` or `"<int Value>"`, silently corrupting the sync keys. Check `IsValid()` and the field's `Kind` before using it, or require an interface (e.g. `interface{ GetID() string }`).
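The reflective check can be wrapped once (`stringID` is an illustrative helper, not project code; the `Gateway`/`Track` structs are stand-ins):

```go
package main

import (
	"fmt"
	"reflect"
)

// stringID safely extracts a string field named "ID" via reflection,
// returning an error instead of reflect's "<invalid Value>" placeholder.
func stringID(item interface{}) (string, error) {
	v := reflect.ValueOf(item)
	if v.Kind() == reflect.Ptr {
		v = v.Elem()
	}
	f := v.FieldByName("ID")
	if !f.IsValid() {
		return "", fmt.Errorf("%T has no ID field", item)
	}
	if f.Kind() != reflect.String {
		return "", fmt.Errorf("%T.ID is %s, not string", item, f.Kind())
	}
	return f.String(), nil
}

type Gateway struct{ ID string }
type Track struct{ Seq int }

func main() {
	id, _ := stringID(Gateway{ID: "gw-1"})
	fmt.Println(id) // prints "gw-1"
	if _, err := stringID(Track{Seq: 7}); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

A generic interface constraint would catch the same mistake at compile time, but the runtime check above is the smaller change to `syncTable`.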
---
### H10. `fmt.Println` / `fmt.Printf` used in production code
The following files use `fmt.Println` or `fmt.Printf` instead of `slog`:
- `internal/pkg/common/utils/beacons.go:49,54`
- `internal/pkg/apiclient/auth.go:29,37`
- `internal/pkg/apiclient/data.go:15,22,79,80,91,94,100`
- `internal/pkg/apiclient/utils.go:17`
- `internal/pkg/apiclient/updatedb.go:67`
`fmt.Println` writes to stdout (not the structured log pipeline), carries no timestamp, level, or request context, and is invisible to log aggregation. Replace all of them with `slog.Error` / `slog.Info`.
---
## Medium Priority Issues
### M1. `Consume` logs errors on context cancellation
`internal/pkg/kafkaclient/consumer.go:22-24`:
```go
msg, err := r.ReadMessage(ctx)
if err != nil {
    slog.Error("error reading message", "error", err)
    continue
}
```
When `ctx` is cancelled (a normal shutdown), `ReadMessage` returns an error, so every shutdown logs a spurious error. Check `ctx.Err()` first:
```go
if err != nil {
    if ctx.Err() != nil {
        return
    }
    slog.Error("error reading message", "error", err)
    continue
}
```
---
### M2. Database port and SSL mode hardcoded
`internal/pkg/database/database.go:17`:
```go
"host=%s user=%s password=%s dbname=%s port=5432 sslmode=disable"
```
Both the port (`5432`) and `sslmode=disable` are hardcoded. Both should be configurable via `config.Config`.
---
### M3. Constant naming convention
Go convention uses `MixedCaps` for exported identifiers, not `SCREAMING_SNAKE_CASE`. The current naming:
```go
SMALL_CHANNEL_SIZE  = 200
MEDIUM_CHANNEL_SIZE = 500
LARGE_CHANNEL_SIZE  = 2000
SMALL_TICKER_INTERVAL  = 1 * time.Second
MEDIUM_TICKER_INTERVAL = 2 * time.Second
LARGE_TICKER_INTERVAL  = 5 * time.Second
```
These should be `SmallChannelSize`, `MediumChannelSize`, etc.
---
### M4. Inconsistent route naming
The API mixes two styles:
- RPC-style: `/reslevis/getGateways`, `/reslevis/postTracker`
- REST-style: `/configs/beacons`, `/health`, `/ready`
Pick one style and apply it consistently. REST convention: nouns plus HTTP verbs (`GET /gateways`, `POST /gateways`, `DELETE /gateways/{id}`).
---
### M5. `getEnv` duplicated
Both `internal/pkg/config/config.go` and `internal/pkg/common/appcontext/context.go` define their own `getEnv` helper. Extract it to a shared utility.
---
### M6. Logger writes to a CWD-relative file
`internal/pkg/logger/logger.go:13`:
```go
f, err := os.OpenFile(fname, ...)
```
`fname` is just `"bridge.log"` etc., so the file lands wherever the binary's working directory happens to be. In a container this is fine; outside containers it is unpredictable. Make the log directory configurable.
---
### M7. No metrics or tracing
For production observability, consider adding:
- **Prometheus metrics**: request counts, latency histograms, Kafka consumer lag, error rates.
- **Distributed tracing**: propagate trace IDs through Kafka messages and HTTP requests (OpenTelemetry).
The current system is log-only, which makes it hard to diagnose performance issues or set alerts.
---
### M8. CORS allows all origins
`internal/pkg/api/middleware/cors.go:13`:
```go
if len(origins) == 0 {
    origins = []string{"*"}
}
```
The default is `*` (all origins). For production, restrict it to known frontend origins, even behind a firewall.
---
### M9. No Makefile
There is no `Makefile` or `Taskfile` for common operations (build, test, lint, docker-compose up). Adding one standardizes the developer workflow and the CI pipeline.
---
## Minor / Style Issues
| # | File | Issue |
|---|---|---|
| S1 | `beacons.go:62` | Commented-out debug line `// fmt.Println("no parser can parse: ", beacon)` |
| S2 | `kafkaclient/manager.go` | `KafkaReadersMap` and `KafkaWritersMap` export their `Lock` and map fields even though the package is internal; keep them unexported |
| S3 | `controller/*` | Parameter named `context` shadows the `context` package import; use `ctx` |
| S4 | `alert_service.go:18` | `DeleteAlertByTrackerID` deletes by `id = ?`, not `tracker_id = ?` -- the function name is misleading |
| S5 | `model/parser.go:82` | `Parse` uses unchecked type assertions (`.(uint16)`, `.(float64)`) that will panic if `extract` returns `int(0)` on short data |
| S6 | `location/app.go:62-65` | `locTicker` and `healthTicker` share the same interval (`LARGE_TICKER_INTERVAL`), so they always fire together -- either intentional (combine them) or a bug |
| S7 | `bridge/app.go:116` | MQTT publish uses QoS 0 with `retain=true` for alerts/trackers -- consider QoS 1 for reliable delivery |
| S8 | `database.go:33-34` | `msg` variable exists only to be passed to `slog.Info(msg)` -- inline the string |
---
## Test Assessment
### What exists
- **Unit tests** for bridge MQTT handling, decoder processing, utils, model, logger, config, and appcontext.
- **Controller tests** using in-memory SQLite, covering CRUD for gateways, trackers, zones, and settings.
- **Integration test stubs** for bridge and decoder with mock Kafka writers.
- **E2E test** is a placeholder (skipped unless `E2E_TEST=1`).
### Issues
1. **Tests appear to be out of sync with production code.** The test files reference `model.BeaconAdvertisement` and `model.Settings`, but the production code defines these types in `appcontext`. Controller tests call `controller.GatewayListController(db)` with one argument, while production code requires two (`db, ctx`). The tests therefore either do not compile against the current code or test an older version.
2. **No test coverage measurement.** There is no CI step to report or enforce coverage.
3. **E2E tests are stubs.** `tests/e2e/e2e_test.go` does nothing meaningful.
**Recommendation:** Fix the test compilation issues first, then add coverage reporting, and implement at least one real E2E flow (e.g. publish an MQTT message -> verify it reaches the Kafka topic -> verify the decoder processes it).
---
## Summary of Recommended Fix Priority
| Priority | Count | Action |
|---|---|---|
| **Critical** | 9 | Fix before any production deployment |
| **High** | 10 | Fix before GA / first customer |
| **Medium** | 9 | Fix in next sprint |
| **Minor/Style** | 8 | Fix during regular cleanup |
The biggest risks are **resource leaks** (C1, C5), **race conditions** (C6, C7), **SQL injection** (C3), and **missing timeouts** (C2). Fixing these six items alone would significantly improve production readiness.
| @@ -1,210 +0,0 @@ | |||
| # Production Readiness Report — Microservices (server, bridge, decoder, location) | |||
| **Scope:** `cmd/server`, `cmd/bridge`, `cmd/decoder`, `cmd/location` and all packages they import. | |||
| **Date:** 2025-03-05. | |||
| --- | |||
| ## Overall grade: **5.5 / 10** | |||
| The codebase has a **solid structure** and **consistent patterns** across the four services, but **security**, **reliability**, and **operational hardening** are not yet at production level. With the changes suggested below, it can be brought to a 7–8/10 for a production deployment. | |||
| --- | |||
| ## 1. Summary by dimension | |||
| | Dimension | Grade | Notes | | |||
| | --------------------------- | ----- | ------------------------------------------------------------------------------------------------- | | |||
| | **Structure & readability** | 7/10 | Clear app lifecycle (New/Run/Shutdown), good package layout, some naming/duplication issues. | | |||
| | **Reliability** | 5/10 | Graceful shutdown and Kafka cleanup are good; missing retries, commit semantics, and DB/timeouts. | | |||
| | **Security** | 3/10 | No API auth, TLS/DB and client TLS weakened, CORS permissive, no rate limiting. | | |||
| | **Observability** | 6/10 | slog, request ID, logging middleware, health/ready; no metrics/tracing. | | |||
| | **Correctness** | 6/10 | Path vs body ID bugs in update endpoints, `context.Background()` in hot paths. | | |||
| --- | |||
| ## 2. What’s in good shape | |||
| - **Unified app pattern:** All four services use the same lifecycle: `New(cfg)` → optional `Init(ctx)` (server only) → `Run(ctx)` → `Shutdown()`, with `signal.NotifyContext` for graceful shutdown. | |||
| - **Graceful shutdown:** HTTP server shutdown, Kafka readers/writers and MQTT disconnect are explicitly closed; `sync.WaitGroup` used for consumer goroutines. | |||
| - **Structured logging:** `slog` with JSON handler and file + stderr; request logging with method, path, status, duration, bytes. | |||
| - **HTTP middleware:** Recovery (panic → 500), request ID, CORS, logging applied in a clear chain. | |||
| - **Health endpoints:** `/health` (liveness) and `/ready` (DB ping) for the server. | |||
| - **Kafka usage:** Centralized `KafkaManager` with RWMutex, separate readers/writers, group IDs per service. | |||
| - **Shared state:** `AppState` in `common/appcontext` is thread-safe (RWMutex) and used consistently. | |||
| - **Config:** Env-based config with service-specific loaders (`LoadServer`, `LoadBridge`, etc.) and `getEnvPanic` for required vars. | |||
| - **API responses:** Centralized `response.JSON`, `Error`, `BadRequest`, `InternalError`, `NotFound` with consistent JSON shape. | |||
| - **OpenAPI:** Routes reference `api/openapi.yaml` (OpenAPI 3.0), which helps readability and contract clarity. | |||
| --- | |||
| ## 3. Critical issues | |||
| ### 3.1 Security | |||
| - **No authentication or authorization on the HTTP API** | |||
| All server routes (`/reslevis/*`, `/configs/beacons`, etc.) are unauthenticated. Anyone who can reach the server can read/update/delete gateways, zones, trackers, parser configs, settings, alerts, and tracks. | |||
| - **Database connection uses `sslmode=disable`** | |||
| In `internal/pkg/database/database.go`, DSN is built with `sslmode=disable`. In production, DB connections should use TLS and `sslmode=verify-full` (or equivalent) with CA verification. | |||
| - **TLS verification disabled for outbound HTTP** | |||
| - `internal/pkg/apiclient/updatedb.go`: `TLSClientConfig: &tls.Config{InsecureSkipVerify: true}`. | |||
| - `internal/pkg/location/inference.go`: same, and **`NewDefaultInferencer(skipTLSVerify bool)` ignores the parameter** and always uses `InsecureSkipVerify: true`. | |||
| - **CORS defaults to `*`** | |||
| In `internal/pkg/api/middleware/cors.go`, when `origins` is nil/empty, `origins = []string{"*"}`. Production should restrict origins to known front-end origins. | |||
| - **Logger file mode `0666`** | |||
| In `internal/pkg/logger/logger.go`, `os.OpenFile(..., 0666)` makes the log file world-readable and -writable. Prefer `0600` or `0640`. | |||
| - **No rate limiting or request body size limits** | |||
| No protection against abuse or large-body DoS; consider middleware for max body size and rate limiting (per IP or per key). | |||
| **Recommendations:** | |||
| - Add authentication/authorization middleware (e.g. JWT or API key validation) for all non-health API routes; keep `/health` and optionally `/ready` public. | |||
| - Make DB TLS configurable via env (e.g. `DB_SSLMODE`, `DB_SSLROOTCERT`) and use `sslmode=verify-full` in production. | |||
| - Use `cfg.TLSInsecureSkipVerify` (or equivalent) for all outbound HTTP clients; fix `NewDefaultInferencer` to respect the parameter. | |||
| - Configure CORS with explicit allowed origins (and optionally credentials) from config. | |||
| - Set log file mode to `0600` (or `0640` if a group needs read). | |||
| - Add middleware to limit request body size (e.g. `http.MaxBytesReader`) and consider rate limiting for API routes. | |||
| --- | |||
| ### 3.2 Reliability | |||
| - **Kafka consumer: decode errors and commit semantics** | |||
| In `internal/pkg/kafkaclient/consumer.go`, when `json.Unmarshal` fails, the code logs and `continue`s without committing. Depending on reader config, this can cause repeated redelivery of bad messages or ambiguous semantics. Production should either skip and commit, or send to a dead-letter path and commit. | |||
| Answer: because readers are using consumer groups messages are auto commited, meaning bad unmarshal still commits as the message was technically read | |||
| - **No retries on Kafka produce** | |||
| Event loops (server, bridge, decoder, location) call `WriteMessages` once; transient Kafka errors are not retried. Consider retry with backoff (and optional circuit breaker) for critical topics. | |||
| Answer: the Writer object is already holding the default configuration for timeout, backoff and retries, but I still added some extra configurations | |||
| - **Database: no explicit pool or timeouts** | |||
| `database.Connect` uses GORM defaults. For production, set `MaxOpenConns`, `MaxIdleConns`, and connection/timeout settings (e.g. `SetConnMaxLifetime`) on the underlying `*sql.DB`. | |||
| - **UpdateDB and Init: failures only logged** | |||
| In `internal/app/server/app.go`, `apiclient.UpdateDB` errors are only logged; Init continues. Consider failing Init (or marking “degraded”) if sync is required for correct operation, or add retries/backoff. | |||
| - **Use of `context.Background()` in async paths** | |||
| e.g. `internal/pkg/bridge/handler.go` and `internal/pkg/decoder/process.go` use `context.Background()` for Kafka writes. Prefer passing the request/event context (or a derived timeout) so shutdown and timeouts propagate. | |||
| **Recommendations:** | |||
| - Define a clear policy for Kafka consumer errors (skip+commit vs DLQ); avoid silent continue without commit unless intended. | |||
| - Add retry (with backoff) for Kafka produce in critical paths; consider a small wrapper or helper. | |||
| - Configure DB pool and timeouts in `database.Connect` (and optionally make them configurable via config). | |||
| - Decide whether UpdateDB is mandatory for startup; if yes, fail Init on error or retry; if no, document and consider a “degraded” readiness state. | |||
| - Pass context from the caller (or a timeout context) into Kafka write calls instead of `context.Background()`. | |||
| --- | |||
| ### 3.3 Correctness and consistency | |||
| - **Update endpoints: path `id` vs body `id`** | |||
| - **GatewayUpdateController** (`internal/pkg/controller/gateways_controller.go`): Uses `mux.Vars(r)["id"]` only to check existence with `First(..., "id = ?", id)`, then decodes body into `gateway` and calls `Save(&gateway)`. The updated record is identified by `gateway.ID` from the body, not the path. A client can send a different ID in the body and update another resource. | |||
| - **ZoneUpdateController**: Route is `updateZone` (no `{id}` in path); uses `zone.ID` from body only. If the API contract expects path-based ID, this is inconsistent. | |||
| Recommendation: For update-by-id, use the path parameter as the single source of truth: load by path `id`, decode body into a DTO or partial struct, then update only allowed fields for that id (e.g. selective updates or merge then update by path id). | |||
| - **TrackerUpdateController** | |||
| Uses body `tracker.ID` for lookup and save; route has no `{id}` in path. If other update endpoints use path `{id}`, align behavior and documentation. | |||
| **Recommendations:** | |||
| - Standardize update semantics: either path `{id}` only (body has no id or it must match path) or document “body id is canonical” and ensure no IDOR. | |||
| - Prefer path-based resource identification for updates/deletes and bind body to allowed fields only. | |||
| --- | |||
| ## 4. Minor issues and improvements | |||
| - **Logging:** Replace `fmt.Println` in `internal/pkg/apiclient/auth.go` and any `internal/pkg/apiclient/updatedb.go` / inference paths with `slog` (or structured logger) so logs are consistent and configurable. | |||
| - **Token lifecycle:** `DefaultInferencer` caches token in a struct field with no expiry or refresh; token may be used after expiry. Use token expiry from auth response and refresh when needed. | |||
| - **BeaconLookup naming:** In `appcontext`, `BeaconExists(id string)` is used with MAC (e.g. in bridge handler). Rename parameter to `mac` (or the method to `LookupIDByMAC`) to avoid confusion. | |||
| - **Bridge/location/decoder:** No `/health` or `/ready` endpoints. For orchestration (e.g. Kubernetes), consider a small HTTP server or at least a process-level health check so the platform can restart unhealthy instances. | |||
| - **Dependencies:** `go.mod` is clear; consider auditing indirect deps and keeping them updated (e.g. `go list -m -u all` and Dependabot/Renovate). | |||
| --- | |||
| ## 5. Propositions (prioritized) | |||
| ### P0 — Before production | |||
| 1. **Add API authentication/authorization** | |||
| Protect all non-health routes (e.g. JWT or API key middleware); document required claims/scopes if using JWT. | |||
| 2. **Enable and verify DB TLS** | |||
| Make `sslmode` (and optional root cert) configurable; use `verify-full` (or equivalent) in production. | |||
| 3. **Respect TLS config for outbound HTTP** | |||
| Use config (e.g. `TLSInsecureSkipVerify`) for apiclient and location inferencer; fix `NewDefaultInferencer(skipTLSVerify bool)` to use the parameter. | |||
| 4. **Fix update controllers** | |||
| Use path `id` as source of truth for update (and optionally delete); ensure body cannot override resource id in an unsafe way. | |||
| 5. **Tighten CORS and log file permissions** | |||
| Explicit allowed origins from config; set log file mode to `0600` (or `0640`). | |||
| ### P1 — High | |||
| 6. **Kafka consumer error policy** | |||
| Define and implement skip+commit or DLQ for bad messages; avoid infinite redelivery of poison messages. | |||
| 7. **DB connection pool and timeouts** | |||
| Set `MaxOpenConns`, `MaxIdleConns`, `ConnMaxLifetime`, and timeouts in `database.Connect`. | |||
| 8. **Request body size limit** | |||
| Middleware (e.g. `http.MaxBytesReader`) for API routes to prevent large-body DoS. | |||
| 9. **Replace fmt.Println with slog** | |||
| In apiclient and any remaining places; ensure response body is closed after read (e.g. `defer res.Body.Close()` in auth/data clients if not already). | |||
| ### P2 — Medium | |||
| 10. **Retries for Kafka produce** | |||
| Retry with backoff (and optionally circuit breaker) for critical `WriteMessages` calls. | |||
| 11. **Context propagation** | |||
| Pass request/event context (or bounded context) into Kafka writes instead of `context.Background()`. | |||
| 12. **Token refresh in Inferencer** | |||
| Use expiry from auth response and refresh token before inference calls. | |||
| 13. **Health for bridge/decoder/location** | |||
| Add minimal health/readiness (HTTP or signal file) for orchestration and load balancers. | |||
| ### P3 — Nice to have | |||
| 14. **Metrics and tracing** | |||
| Add metrics (e.g. request duration, Kafka lag, error counts) and optional distributed tracing (e.g. OTel). | |||
| 15. **Rate limiting** | |||
| Per-IP or per-token rate limiting on API routes. | |||
| 16. **Structured validation** | |||
| Use a validator (e.g. go-playground/validator) for request bodies and path params (IDs, limits). | |||
| 17. **Documentation** | |||
| Short runbooks for deploy, config env vars, and dependency on Kafka/DB/MQTT and external auth/API. | |||
| --- | |||
| ## 6. Files reviewed (representative) | |||
| - **Entrypoints:** `cmd/server/main.go`, `cmd/bridge/main.go`, `cmd/decoder/main.go`, `cmd/location/main.go` | |||
| - **Apps:** `internal/app/server/*`, `internal/app/bridge/app.go`, `internal/app/decoder/app.go`, `internal/app/location/app.go` | |||
| - **Config:** `internal/pkg/config/config.go` | |||
| - **Infra:** `internal/pkg/database/database.go`, `internal/pkg/kafkaclient/manager.go`, `internal/pkg/kafkaclient/consumer.go`, `internal/pkg/logger/logger.go` | |||
| - **API:** `internal/pkg/api/handler/health.go`, `internal/pkg/api/middleware/*`, `internal/pkg/api/response/response.go`, `internal/app/server/routes.go` | |||
| - **Controllers:** `internal/pkg/controller/*.go` (gateways, trackers, zone, parser, settings, alerts, tracks, trackerzones) | |||
| - **Services:** `internal/pkg/service/beacon_service.go`, `internal/pkg/apiclient/*.go`, `internal/pkg/bridge/mqtt.go`, `internal/pkg/bridge/handler.go`, `internal/pkg/decoder/process.go`, `internal/pkg/location/inference.go`, `internal/pkg/common/appcontext/context.go` | |||
| --- | |||
| ## 7. Conclusion | |||
| The microservices are **well-structured and readable**, with **consistent lifecycle and shutdown**. The main gaps are **resource management (leaked response bodies, missing timeouts)** and **reliability (Kafka and DB tuning, retries, context usage)**; security was excluded from scope per agreement. Addressing the **P0 and P1** items above would bring the system much closer to production grade (around **7–8/10**); adding **P2/P3** would further improve operability and resilience. | |||
| @@ -6,17 +6,22 @@ toolchain go1.24.9 | |||
| require ( | |||
| github.com/eclipse/paho.mqtt.golang v1.5.1 | |||
| github.com/go-playground/validator/v10 v10.30.1 | |||
| github.com/google/uuid v1.6.0 | |||
| github.com/gorilla/handlers v1.5.2 | |||
| github.com/gorilla/mux v1.8.1 | |||
| github.com/mitchellh/mapstructure v1.5.0 | |||
| github.com/segmentio/kafka-go v0.4.49 | |||
| gorm.io/driver/postgres v1.6.0 | |||
| gorm.io/driver/sqlite v1.6.0 | |||
| gorm.io/gorm v1.31.1 | |||
| ) | |||
| require ( | |||
| github.com/felixge/httpsnoop v1.0.3 // indirect | |||
| github.com/gabriel-vasile/mimetype v1.4.12 // indirect | |||
| github.com/go-playground/locales v0.14.1 // indirect | |||
| github.com/go-playground/universal-translator v0.18.1 // indirect | |||
| github.com/gorilla/websocket v1.5.3 // indirect | |||
| github.com/jackc/pgpassfile v1.0.0 // indirect | |||
| github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect | |||
| @@ -25,12 +30,13 @@ require ( | |||
| github.com/jinzhu/inflection v1.0.0 // indirect | |||
| github.com/jinzhu/now v1.1.5 // indirect | |||
| github.com/klauspost/compress v1.15.9 // indirect | |||
| github.com/leodido/go-urn v1.4.0 // indirect | |||
| github.com/mattn/go-sqlite3 v1.14.22 // indirect | |||
| github.com/pierrec/lz4/v4 v4.1.15 // indirect | |||
| github.com/stretchr/testify v1.11.1 // indirect | |||
| golang.org/x/crypto v0.42.0 // indirect | |||
| golang.org/x/net v0.44.0 // indirect | |||
| golang.org/x/sync v0.17.0 // indirect | |||
| golang.org/x/text v0.29.0 // indirect | |||
| gorm.io/driver/sqlite v1.6.0 // indirect | |||
| golang.org/x/crypto v0.46.0 // indirect | |||
| golang.org/x/net v0.47.0 // indirect | |||
| golang.org/x/sync v0.19.0 // indirect | |||
| golang.org/x/sys v0.39.0 // indirect | |||
| golang.org/x/text v0.32.0 // indirect | |||
| ) | |||
| @@ -5,6 +5,16 @@ github.com/eclipse/paho.mqtt.golang v1.5.1 h1:/VSOv3oDLlpqR2Epjn1Q7b2bSTplJIeV2I | |||
| github.com/eclipse/paho.mqtt.golang v1.5.1/go.mod h1:1/yJCneuyOoCOzKSsOTUc0AJfpsItBGWvYpBLimhArU= | |||
| github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk= | |||
| github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= | |||
| github.com/gabriel-vasile/mimetype v1.4.12 h1:e9hWvmLYvtp846tLHam2o++qitpguFiYCKbn0w9jyqw= | |||
| github.com/gabriel-vasile/mimetype v1.4.12/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s= | |||
| github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s= | |||
| github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4= | |||
| github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA= | |||
| github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY= | |||
| github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY= | |||
| github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY= | |||
| github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w= | |||
| github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM= | |||
| github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= | |||
| github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= | |||
| github.com/gorilla/handlers v1.5.2 h1:cLTUSsNkgcwhgRqvCNmdbRWG0A3N4F+M2nWKdScwyEE= | |||
| @@ -27,6 +37,8 @@ github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ= | |||
| github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8= | |||
| github.com/klauspost/compress v1.15.9 h1:wKRjX6JRtDdrE9qwa4b/Cip7ACOshUI4smpCQanqjSY= | |||
| github.com/klauspost/compress v1.15.9/go.mod h1:PhcZ0MbTNciWF3rruxRgKxI5NkcHHrHUDtV4Yw2GlzU= | |||
| github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ= | |||
| github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI= | |||
| github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU= | |||
| github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y= | |||
| github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY= | |||
| @@ -48,14 +60,16 @@ github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY= | |||
| github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4= | |||
| github.com/xdg-go/stringprep v1.0.4 h1:XLI/Ng3O1Atzq0oBs3TWm+5ZVgkq2aqdlvP9JtoZ6c8= | |||
| github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM= | |||
| golang.org/x/crypto v0.42.0 h1:chiH31gIWm57EkTXpwnqf8qeuMUi0yekh6mT2AvFlqI= | |||
| golang.org/x/crypto v0.42.0/go.mod h1:4+rDnOTJhQCx2q7/j6rAN5XDw8kPjeaXEUR2eL94ix8= | |||
| golang.org/x/net v0.44.0 h1:evd8IRDyfNBMBTTY5XRF1vaZlD+EmWx6x8PkhR04H/I= | |||
| golang.org/x/net v0.44.0/go.mod h1:ECOoLqd5U3Lhyeyo/QDCEVQ4sNgYsqvCZ722XogGieY= | |||
| golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug= | |||
| golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= | |||
| golang.org/x/text v0.29.0 h1:1neNs90w9YzJ9BocxfsQNHKuAT4pkghyXc4nhZ6sJvk= | |||
| golang.org/x/text v0.29.0/go.mod h1:7MhJOA9CD2qZyOKYazxdYMF85OwPdEr9jTtBpO7ydH4= | |||
| golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU= | |||
| golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0= | |||
| golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY= | |||
| golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU= | |||
| golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= | |||
| golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= | |||
| golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk= | |||
| golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= | |||
| golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU= | |||
| golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY= | |||
| gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= | |||
| gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= | |||
| gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= | |||
| @@ -52,9 +52,10 @@ func (a *ServerApp) RegisterRoutes() http.Handler { | |||
| r.HandleFunc("/reslevis/settings", controller.SettingsUpdateController(a.DB, a.KafkaManager.GetWriter("settings"), a.ctx)).Methods("PATCH") | |||
| r.HandleFunc("/reslevis/settings", controller.SettingsListController(a.DB, a.ctx)).Methods("GET") | |||
| r.HandleFunc("/reslevis/alerts/{id}", controller.AlertDeleteController(a.DB, a.ctx)).Methods("DELETE") | |||
| r.HandleFunc("/reslevis/alerts", controller.AlertsListController(a.DB, a.ctx)).Methods("GET") | |||
| r.HandleFunc("/reslevis/alerts/{id}", controller.ListAlertsByTrackerIDController(a.DB, a.ctx)).Methods("GET") | |||
| r.HandleFunc("/reslevis/alerts/{id}", controller.AlertUpdateStatusController(a.DB, a.ctx)).Methods("PATCH") | |||
| r.HandleFunc("/reslevis/alerts/{id}", controller.AlertDeleteController(a.DB, a.ctx)).Methods("DELETE") | |||
| // Tracks | |||
| r.HandleFunc("/reslevis/getTracks/{id}", controller.TracksListController(a.DB, a.ctx)).Methods("GET") | |||
| @@ -3,6 +3,7 @@ package bridge | |||
| import ( | |||
| "context" | |||
| "encoding/json" | |||
| "fmt" | |||
| "log/slog" | |||
| "strings" | |||
| "time" | |||
| @@ -34,6 +35,7 @@ func HandleMQTTMessage(topic string, payload []byte, appState *appcontext.AppSta | |||
| if reading.Type == "Gateway" { | |||
| continue | |||
| } | |||
| id, ok := appState.BeaconExists(reading.MAC) | |||
| if !ok { | |||
| continue | |||
| @@ -57,6 +59,8 @@ func HandleMQTTMessage(topic string, payload []byte, appState *appcontext.AppSta | |||
| } | |||
| } | |||
| return | |||
| } else { | |||
| fmt.Println("CSV message: ", msgStr) | |||
| } | |||
| // CSV format: validate minimum fields (e.g. 6 columns); full parsing can be added later | |||
| s := strings.Split(msgStr, ",") | |||
| @@ -51,6 +51,7 @@ func LoopADStructures(b []byte, i [][2]int, id string, parserRegistry *model.Par | |||
| for name, parser := range parserRegistry.ParserList { | |||
| if parser.CanParse(ad) { | |||
| event, ok := parser.Parse(name, ad) | |||
| fmt.Println("beacon id: ", id) | |||
| fmt.Println("parser can parse: ", name) | |||
| if ok { | |||
| event.ID = id | |||
| @@ -2,11 +2,13 @@ package controller | |||
| import ( | |||
| "context" | |||
| "encoding/json" | |||
| "errors" | |||
| "net/http" | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/service" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "gorm.io/gorm" | |||
| ) | |||
| @@ -48,3 +50,30 @@ func AlertDeleteController(db *gorm.DB, ctx context.Context) http.HandlerFunc { | |||
| response.JSON(w, http.StatusOK, map[string]string{"status": "deleted"}) | |||
| } | |||
| } | |||
| // AlertUpdateStatusController updates an alert's status by id. Body: {"status": "resolved"} (or other status string). | |||
| func AlertUpdateStatusController(db *gorm.DB, ctx context.Context) http.HandlerFunc { | |||
| return func(w http.ResponseWriter, r *http.Request) { | |||
| id := mux.Vars(r)["id"] | |||
| var body struct { | |||
| Status string `json:"status"` | |||
| } | |||
| if err := json.NewDecoder(r.Body).Decode(&body); err != nil { | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Var(body.Status, "required"); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := service.UpdateAlertStatus(id, body.Status, db, ctx); err != nil { | |||
| if errors.Is(err, gorm.ErrRecordNotFound) { | |||
| response.NotFound(w, "alert not found") | |||
| return | |||
| } | |||
| response.InternalError(w, "failed to update alert status", err) | |||
| return | |||
| } | |||
| response.JSON(w, http.StatusOK, map[string]string{"status": "updated"}) | |||
| } | |||
| } | |||
| @@ -7,6 +7,7 @@ import ( | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/model" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "gorm.io/gorm" | |||
| ) | |||
| @@ -18,6 +19,10 @@ func GatewayAddController(db *gorm.DB, context context.Context) http.HandlerFunc | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&gateway); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Create(&gateway).Error; err != nil { | |||
| response.InternalError(w, "failed to create gateway", err) | |||
| @@ -71,6 +76,10 @@ func GatewayUpdateController(db *gorm.DB, context context.Context) http.HandlerF | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&gateway); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Save(&gateway).Error; err != nil { | |||
| response.InternalError(w, "failed to update gateway", err) | |||
| @@ -0,0 +1,16 @@ | |||
| package controller | |||
| import ( | |||
| "context" | |||
| "net/http" | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/common/appcontext" | |||
| ) | |||
| func HealthController(appState *appcontext.AppState, context context.Context) http.HandlerFunc { | |||
| return func(w http.ResponseWriter, r *http.Request) { | |||
| health := appState.GetHealth() | |||
| response.JSON(w, http.StatusOK, health) | |||
| } | |||
| } | |||
| @@ -9,6 +9,7 @@ import ( | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/model" | |||
| "github.com/AFASystems/presence/internal/pkg/service" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "github.com/segmentio/kafka-go" | |||
| "gorm.io/gorm" | |||
| @@ -21,6 +22,10 @@ func ParserAddController(db *gorm.DB, writer *kafka.Writer, context context.Cont | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&config); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Create(&config).Error; err != nil { | |||
| response.InternalError(w, "failed to create parser config", err) | |||
| @@ -95,6 +100,10 @@ func ParserUpdateController(db *gorm.DB, writer *kafka.Writer, context context.C | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&config); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Save(&config).Error; err != nil { | |||
| response.InternalError(w, "failed to update parser config", err) | |||
| @@ -31,6 +31,10 @@ func SettingsUpdateController(db *gorm.DB, writer *kafka.Writer, context context | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if len(updates) == 0 { | |||
| response.BadRequest(w, "at least one setting is required") | |||
| return | |||
| } | |||
| slog.Info("updating settings", "updates", updates) | |||
| @@ -9,6 +9,7 @@ import ( | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/kafkaclient" | |||
| "github.com/AFASystems/presence/internal/pkg/model" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "github.com/segmentio/kafka-go" | |||
| "gorm.io/gorm" | |||
| @@ -39,6 +40,10 @@ func TrackerAdd(db *gorm.DB, writer *kafka.Writer, context context.Context) http | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&tracker); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Create(&tracker).Error; err != nil { | |||
| response.InternalError(w, "failed to create tracker", err) | |||
| return | |||
| @@ -78,6 +83,10 @@ func TrackerUpdate(db *gorm.DB, context context.Context) http.HandlerFunc { | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&tracker); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| id := tracker.ID | |||
| if err := db.WithContext(context).First(&model.Tracker{}, "id = ?", id).Error; err != nil { | |||
| @@ -7,6 +7,7 @@ import ( | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/model" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "gorm.io/gorm" | |||
| ) | |||
| @@ -18,6 +19,10 @@ func TrackerZoneAddController(db *gorm.DB, context context.Context) http.Handler | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&tz); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Create(&tz).Error; err != nil { | |||
| response.InternalError(w, "failed to create tracker zone", err) | |||
| return | |||
| @@ -45,6 +50,10 @@ func TrackerZoneUpdateController(db *gorm.DB, context context.Context) http.Hand | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&tz); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| id := tz.ID | |||
| if err := db.WithContext(context).First(&model.TrackerZones{}, "id = ?", id).Error; err != nil { | |||
| @@ -7,6 +7,7 @@ import ( | |||
| "github.com/AFASystems/presence/internal/pkg/api/response" | |||
| "github.com/AFASystems/presence/internal/pkg/model" | |||
| "github.com/AFASystems/presence/internal/pkg/validation" | |||
| "github.com/gorilla/mux" | |||
| "gorm.io/gorm" | |||
| ) | |||
| @@ -18,6 +19,10 @@ func ZoneAddController(db *gorm.DB, context context.Context) http.HandlerFunc { | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&zone); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| if err := db.WithContext(context).Create(&zone).Error; err != nil { | |||
| response.InternalError(w, "failed to create zone", err) | |||
| return | |||
| @@ -45,6 +50,10 @@ func ZoneUpdateController(db *gorm.DB, context context.Context) http.HandlerFunc | |||
| response.BadRequest(w, "invalid request body") | |||
| return | |||
| } | |||
| if err := validation.Struct(&zone); err != nil { | |||
| response.BadRequest(w, err.Error()) | |||
| return | |||
| } | |||
| id := zone.ID | |||
| if err := db.WithContext(context).First(&model.Zone{}, "id = ?", id).Error; err != nil { | |||
| @@ -1,8 +1,11 @@ | |||
| package model | |||
| import "time" | |||
| type Alert struct { | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| TrackerID string `json:"tracker_id"` | |||
| Type string `json:"type"` | |||
| Value string `json:"value"` | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| TrackerID string `json:"tracker_id"` | |||
| Type string `json:"type"` | |||
| Status string `json:"status"` | |||
| Timestamp time.Time `json:"timestamp"` | |||
| } | |||
| @@ -1,9 +1,9 @@ | |||
| package model | |||
| type Gateway struct { | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| Name string `json:"name"` | |||
| MAC string `json:"mac" gorm:"index"` | |||
| ID string `json:"id" gorm:"primaryKey" validate:"required"` | |||
| Name string `json:"name" validate:"required"` | |||
| MAC string `json:"mac" gorm:"index" validate:"required"` | |||
| Status string `json:"status"` | |||
| Model string `json:"model"` | |||
| IP string `json:"ip"` | |||
| @@ -27,10 +27,10 @@ type ParserRegistry struct { | |||
| } | |||
| type Config struct { | |||
| Name string `json:"name" gorm:"primaryKey"` | |||
| Min int `json:"min"` | |||
| Max int `json:"max"` | |||
| Pattern []string `json:"pattern" gorm:"serializer:json"` | |||
| Name string `json:"name" gorm:"primaryKey" validate:"required"` | |||
| Min int `json:"min" validate:"gte=0,ltefield=Max"` | |||
| Max int `json:"max" validate:"gte=0"` | |||
| Pattern []string `json:"pattern" gorm:"serializer:json" validate:"required,min=1,dive,required"` | |||
| Configs map[string]ParserConfig `json:"configs" gorm:"serializer:json"` | |||
| } | |||
| @@ -1,9 +1,9 @@ | |||
| package model | |||
| type TrackerZones struct { | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| ID string `json:"id" gorm:"primaryKey" validate:"required"` | |||
| ZoneList []string `json:"zoneList" gorm:"serializer:json"` | |||
| Tracker string `json:"tracker" gorm:"index"` | |||
| Tracker string `json:"tracker" gorm:"index" validate:"required"` | |||
| Days string `json:"days"` | |||
| Time string `json:"time"` | |||
| } | |||
| @@ -1,9 +1,9 @@ | |||
| package model | |||
| type Tracker struct { | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| Name string `json:"name"` | |||
| MAC string `json:"mac"` | |||
| ID string `json:"id" gorm:"primaryKey" validate:"required"` | |||
| Name string `json:"name" validate:"required"` | |||
| MAC string `json:"mac" validate:"required"` | |||
| Status string `json:"status"` | |||
| Model string `json:"model"` | |||
| IP string `json:"ip"` | |||
| @@ -1,8 +1,8 @@ | |||
| package model | |||
| type Zone struct { | |||
| ID string `json:"id" gorm:"primaryKey"` | |||
| Name string `json:"name"` | |||
| ID string `json:"id" gorm:"primaryKey" validate:"required"` | |||
| Name string `json:"name" validate:"required"` | |||
| Groups []string `json:"groups" gorm:"serializer:json"` | |||
| Floor string `json:"floor"` | |||
| Building string `json:"building"` | |||
| @@ -37,3 +37,15 @@ func GetAlertById(id string, db *gorm.DB, ctx context.Context) (model.Alert, err | |||
| } | |||
| return alert, nil | |||
| } | |||
| // UpdateAlertStatus updates the status of an alert by id. Returns gorm.ErrRecordNotFound if the alert does not exist. | |||
| func UpdateAlertStatus(id string, status string, db *gorm.DB, ctx context.Context) error { | |||
| result := db.WithContext(ctx).Model(&model.Alert{}).Where("id = ?", id).Update("status", status) | |||
| if result.Error != nil { | |||
| return result.Error | |||
| } | |||
| if result.RowsAffected == 0 { | |||
| return gorm.ErrRecordNotFound | |||
| } | |||
| return nil | |||
| } | |||
| @@ -131,33 +131,42 @@ func LocationToBeaconServiceAI(msg model.HTTPLocation, db *gorm.DB, writer *kafk | |||
| func sendAlert(gwId, trackerId string, writer *kafka.Writer, ctx context.Context, allowedZones []string, db *gorm.DB) { | |||
| if len(allowedZones) != 0 && !slices.Contains(allowedZones, gwId) { | |||
| alert := model.Alert{ | |||
| ID: uuid.New().String(), | |||
| TrackerID: trackerId, | |||
| Type: "Restricted zone", | |||
| Value: gwId, | |||
| } | |||
| if err := InsertAlert(alert, db, ctx); err != nil { | |||
| msg := fmt.Sprintf("Error in inserting alert: %v", err) | |||
| slog.Error(msg) | |||
| return | |||
| } | |||
| var existingAlert model.Alert | |||
| result := db.Select("status").Where("tracker_id = ? AND type = ?", trackerId, "Restricted zone").Order("timestamp DESC").First(&existingAlert) | |||
| if result.Error == gorm.ErrRecordNotFound || existingAlert.Status == "resolved" { | |||
| alert := model.Alert{ | |||
| ID: uuid.New().String(), | |||
| TrackerID: trackerId, | |||
| Type: "Restricted zone", | |||
| Status: "new", | |||
| Timestamp: time.Now(), | |||
| } | |||
| eMsg, err := json.Marshal(alert) | |||
| if err != nil { | |||
| msg := "Error in marshaling" | |||
| slog.Error(msg) | |||
| return | |||
| } else { | |||
| msg := kafka.Message{ | |||
| Value: eMsg, | |||
| if err := InsertAlert(alert, db, ctx); err != nil { | |||
| msg := fmt.Sprintf("Error in inserting alert: %v", err) | |||
| slog.Error(msg) | |||
| return | |||
| } | |||
| if err := kafkaclient.Write(ctx, writer, msg); err != nil { | |||
| msg := fmt.Sprintf("Error in writing message: %v", err) | |||
| eMsg, err := json.Marshal(alert) | |||
| if err != nil { | |||
| msg := "Error in marshaling" | |||
| slog.Error(msg) | |||
| return | |||
| } else { | |||
| msg := kafka.Message{ | |||
| Value: eMsg, | |||
| } | |||
| if err := kafkaclient.Write(ctx, writer, msg); err != nil { | |||
| msg := fmt.Sprintf("Error in writing message: %v", err) | |||
| slog.Error(msg) | |||
| return | |||
| } | |||
| } | |||
| return | |||
| } else { | |||
| return | |||
| } | |||
| } | |||
| } | |||
| @@ -0,0 +1,36 @@ | |||
| // Package validation uses github.com/go-playground/validator/v10 for API request validation. | |||
| package validation | |||
| import ( | |||
| "fmt" | |||
| "github.com/go-playground/validator/v10" | |||
| ) | |||
| var defaultValidator = validator.New() | |||
| // Struct validates s and returns the first validation error as a readable message, or nil. | |||
| func Struct(s interface{}) error { | |||
| err := defaultValidator.Struct(s) | |||
| if err == nil { | |||
| return nil | |||
| } | |||
| if errs, ok := err.(validator.ValidationErrors); ok && len(errs) > 0 { | |||
| fe := errs[0] | |||
| return fmt.Errorf("%s: %s", fe.Field(), fe.Tag()) | |||
| } | |||
| return err | |||
| } | |||
| // Var validates a single value with the given tag (e.g. "required") and returns an error or nil. | |||
| func Var(field interface{}, tag string) error { | |||
| err := defaultValidator.Var(field, tag) | |||
| if err == nil { | |||
| return nil | |||
| } | |||
| if errs, ok := err.(validator.ValidationErrors); ok && len(errs) > 0 { | |||
| fe := errs[0] | |||
| return fmt.Errorf("%s", fe.Tag()) | |||
| } | |||
| return err | |||
| } | |||
| @@ -0,0 +1,17 @@ | |||
| #!/bin/bash | |||
| # PATCH alert status by alert id. | |||
| # Usage: ./alert_status.sh <alert_id> [status] | |||
| # alert_id - required, the alert id (e.g. from GET /reslevis/alerts) | |||
| # status - optional, default: resolved | |||
| # Example: ./alert_status.sh abc-123 resolved | |||
| # BASE_URL=http://host:port ./alert_status.sh abc-123 acknowledged | |||
| SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" | |||
| . "${SCRIPT_DIR}/../_common.sh" | |||
| ALERT_ID="${1:?Usage: $0 <alert_id> [status]}" | |||
| STATUS="${2:-resolved}" | |||
| curl -s -X PATCH "${BASE_URL}/reslevis/alerts/${ALERT_ID}" \ | |||
| -H "Content-Type: application/json" \ | |||
| -d "{\"status\": \"${STATUS}\"}" | |||
| echo "" | |||
| @@ -0,0 +1,3 @@ | |||
| #!/bin/bash | |||
| curl -X GET http://localhost:1902/reslevis/health | |||