Documentation

fsds/bbu-rfid-azure-deployment-preparation-plan.md

Azure Deployment Preparation Plan

Overview

Prepare the bbu-rfid project for Azure deployment by securing credentials, packaging Superset definitions, migrating database data, and configuring MQTT environment isolation.


Stage 1: Superset Credentials for Azure

Goal: Replace hardcoded admin/admin with configurable, secure credentials stored in Azure App Configuration.

Files to Modify:

  • projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json
  • strata/orchestration/src/Acsis.Dynaplex.Strata.Orchestration/DynaplexInfrastructureExtensions.cs
  • projects/bbu-rfid/resources/superset/superset_config.py

Implementation:

  1. Add Superset configuration section to AppHost appsettings.json:

    "Superset": {
      "AdminUsername": "acsis-admin",
      "AdminPassword": "kv:superset-admin-password",
      "SecretKey": "kv:superset-secret-key"
    }
    
  2. Update AddDynaplexAnalytics() to read config and pass as env vars:

    • SUPERSET_ADMIN_USERNAME from config
    • SUPERSET_ADMIN_PASSWORD from config (supports kv: prefix for Key Vault)
    • SUPERSET_SECRET_KEY from config
  3. The docker-entrypoint.sh already reads these env vars - no changes needed there.

  4. Generate secure passwords for Azure and add to Key Vault:

    • superset-admin-password: Strong password (16+ chars, mixed case, numbers, symbols)
    • superset-secret-key: 32+ character random string for Flask session signing
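The two secrets in step 4 can be generated and pushed in one short script. A minimal sketch — the vault name kv-bbu-rfid-rnd is a placeholder, and the az calls are left commented so the sketch is side-effect free:

```shell
#!/bin/bash
# Generate the two Superset secrets locally, then push them to Key Vault.
# Vault name "kv-bbu-rfid-rnd" is a placeholder; substitute your vault.
set -euo pipefail

ADMIN_PW="$(openssl rand -base64 24)"   # 32 chars, mixed case + symbols
SECRET_KEY="$(openssl rand -hex 32)"    # 64 hex chars for Flask session signing

echo "admin password length: ${#ADMIN_PW}"
echo "secret key length:     ${#SECRET_KEY}"

# az keyvault secret set --vault-name kv-bbu-rfid-rnd \
#   --name superset-admin-password --value "$ADMIN_PW"
# az keyvault secret set --vault-name kv-bbu-rfid-rnd \
#   --name superset-secret-key --value "$SECRET_KEY"
```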

Stage 2: Export Superset Definitions

Goal: Export dashboards, charts, datasets from local Superset and create import mechanism for Azure.

Approach:

Use Superset's native export CLI to create portable ZIP files.

Steps:

  1. Create export script: projects/bbu-rfid/resources/superset/export-definitions.sh

    #!/bin/bash
    # Export all dashboards (includes dependent charts/datasets)
    set -euo pipefail
    mkdir -p /app/exports
    superset export_dashboards -d /app/exports/
    
  2. Create import script: projects/bbu-rfid/resources/superset/import-definitions.sh

    #!/bin/bash
    # Import from mounted exports directory
    set -euo pipefail
    shopt -s nullglob  # skip the loop entirely when no .zip files are mounted
    for f in /app/imports/*.zip; do
      superset import_dashboards -p "$f"
    done
    
  3. Export procedure (one-time manual):

    # Connect to running Superset container
    podman exec -it superset /bin/bash
    # Run export
    superset export_dashboards -d /tmp/exports/
    # Copy out
    podman cp superset:/tmp/exports ./superset-exports/
    
  4. Add exports to repo: projects/bbu-rfid/resources/superset/exports/

  5. Modify docker-entrypoint.sh to auto-import on first run if exports exist and dashboards are empty.
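The first-run gate in step 5 can be sketched as below. The paths and the marker file are assumptions: the marker stands in for the "dashboards are empty" check, which in the real entrypoint would query Superset's metadata database.

```shell
#!/bin/bash
# Sketch of a first-run import gate for docker-entrypoint.sh.
# IMPORTS_DIR and MARKER are assumed paths; adapt to the real entrypoint.
IMPORTS_DIR="${IMPORTS_DIR:-/app/imports}"
MARKER="${MARKER:-/app/superset_home/.definitions-imported}"

shopt -s nullglob             # empty dir -> zero-length array, not a literal glob
bundles=("${IMPORTS_DIR}"/*.zip)

if [ ! -f "$MARKER" ] && [ "${#bundles[@]}" -gt 0 ]; then
  for f in "${bundles[@]}"; do
    echo "importing $f"
    # superset import_dashboards -p "$f"
  done
  touch "$MARKER"             # never re-import on subsequent restarts
fi
```

The marker-file approach keeps restarts idempotent even if the import itself is slow or partially fails; replacing it with a real dashboard-count query is a drop-in change.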


Stage 3: Identity Seeder Update for Production

Goal: Ensure admin user exists in production without hardcoded passwords.

Files to Modify:

  • engines/identity/src/Acsis.Dynaplex.Engines.Identity.Database/Seeding/IdentityReferenceDataSeeder.cs
  • projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json

Implementation:

  1. Modify SeedDevelopmentUsers() to:

    • Check if ANY users exist in the database
    • If empty, seed initial admin user regardless of environment
    • Read password from configuration instead of hardcoding "admin"
    // Check if any users exist
    var hasUsers = await db.Users.IgnoreQueryFilters().AnyAsync(cancellationToken);
    if (!hasUsers)
    {
        // Seed initial admin - required for first deployment
        await SeedInitialAdminUser(logger, cancellationToken);
    }
    
  2. Add configuration section for initial admin:

    "Identity": {
      // ... existing config ...
      "InitialAdmin": {
        "Username": "admin",
        "Password": "kv:identity-admin-password",
        "Email": "admin@acsis.com"
      }
    }
    
  3. Since we're doing a full DB restore, the local admin user will be carried over with its local password. We'll need to either update that password post-restore or patch it in the backup before restoring.
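The kv: prefix used in the config above (and in Stage 1) is just a naming convention pointing at a Key Vault secret. Purely illustrative sketch of that mapping — the real resolution happens inside the app host, not in shell:

```shell
#!/bin/bash
# Illustrative only: how a "kv:"-prefixed config value maps to a Key Vault
# secret name. The application performs the actual lookup.
is_keyvault_ref() { [[ "$1" == kv:* ]]; }
secret_name()     { printf '%s\n' "${1#kv:}"; }

if is_keyvault_ref "kv:identity-admin-password"; then
  secret_name "kv:identity-admin-password"
  # az keyvault secret show --vault-name <vault> \
  #   --name "$(secret_name kv:identity-admin-password)" --query value -o tsv
fi
```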


Stage 4: Database Migration Strategy

Goal: Full PostgreSQL backup from local, restore to Azure PostgreSQL.

Tools:

  • pg_dump for backup
  • pg_restore for restore
  • Azure CLI for PostgreSQL access

Pre-requisites:

  • Azure PostgreSQL firewall must allow your IP or use Azure Cloud Shell
  • Have the admin credentials from .azure/bbu-rfid-rnd/.env

Backup Script: scripts/backup-local-db.sh

#!/bin/bash
# Backup local Acsis database
set -euo pipefail

TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTPUT_FILE="acsis_backup_${TIMESTAMP}.dump"

pg_dump \
  -h localhost \
  -p 15432 \
  -U postgres \
  -Fc \
  -v \
  --no-owner \
  --no-acl \
  -f "${OUTPUT_FILE}" \
  acsis

echo "Backup created: ${OUTPUT_FILE}"
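Before uploading, it's worth sanity-checking the dump (the checklist below calls for a verified backup). A minimal sketch; the pg_restore listing is left commented so the check still works where PostgreSQL client tools are absent:

```shell
#!/bin/bash
# Sketch: reject obviously-broken dumps before attempting a restore.
verify_backup() {
  local f="$1" size
  [ -f "$f" ] || { echo "missing: $f" >&2; return 1; }
  size=$(wc -c < "$f" | tr -d '[:space:]')
  [ "$size" -ge 1024 ] || { echo "suspiciously small: ${size} bytes" >&2; return 1; }
  # pg_restore -l "$f" > /dev/null  # also confirm it is a valid -Fc archive
  echo "ok: ${size} bytes"
}
```

Usage: `verify_backup acsis_backup_20250101_120000.dump` (filename illustrative).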

Restore Script: scripts/restore-to-azure.sh

#!/bin/bash
# Restore to Azure PostgreSQL
# Requires: Azure CLI, PostgreSQL client tools (pg_restore)
set -euo pipefail

BACKUP_FILE="${1:?usage: restore-to-azure.sh <backup-file>}"
AZURE_HOST="psql-elbbudev-db.postgres.database.azure.com"
AZURE_USER="acsisadmin"

# Get password from .azure/bbu-rfid-rnd/.env or prompt
if [ -z "${PGPASSWORD:-}" ]; then
  read -r -s -p "Password for ${AZURE_USER}: " PGPASSWORD
  echo
  export PGPASSWORD
fi

pg_restore \
  -h "${AZURE_HOST}" \
  -p 5432 \
  -U "${AZURE_USER}" \
  -d acsis \
  -v \
  --no-owner \
  --no-acl \
  --clean \
  --if-exists \
  "${BACKUP_FILE}"

Important Considerations:

  1. Update admin password post-restore: the restore copies the local identity user's credentials, so the admin password must be changed immediately after restore
  2. Verify checkpoint/high-water marks: After restore, processing will resume from where local left off
  3. Schema compatibility: Azure PG 18 should be compatible with local PG 18 dump

Stage 5: MQTT Environment Isolation

Goal: Configure MQTT EnvironmentId to use Azure environment name, ensuring Azure processors don't conflict with local development.

Files to Modify:

  • engines/bbu/src/Acsis.Dynaplex.Engines.Bbu/Services/BbuMqttProcessor.cs (no changes needed - uses config)
  • engines/iot/src/Acsis.Dynaplex.Engines.Iot.Abstractions/Services/MqttProcessorBase.cs (verify EnvironmentId handling)
  • projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json
  • projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/infra/bbu/bbu.module.bicep (add env var)

Implementation:

  1. The MqttProcessorConfiguration already has EnvironmentId property that:

    • Gets appended to ClientId and SharedSubscriptionGroup
    • In Debug: Auto-detects to machine name
    • In Release: Must be explicitly configured
  2. Add environment variable in Azure deployment:

    • Pass AZURE_ENV_NAME to the BBU container
    • Map it to Bbu__Mqtt__EnvironmentId configuration
  3. Update BBU bicep module to include:

    {
      name: 'Bbu__Mqtt__EnvironmentId'
      value: environmentName  // From main.bicep parameter
    }
    
  4. Result: Azure MQTT will use:

    • ClientId: bbu-processor-bbu-rfid-rnd-<unique>
    • SharedGroup: $share/bbu-processors-bbu-rfid-rnd/zebra/fx/+/reads
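How the EnvironmentId suffix composes into those identifiers can be sketched as below. The format is inferred from the examples above; the <unique> client-id suffix is generated at runtime, and the random hex here merely stands in for it:

```shell
#!/bin/bash
# Sketch: composition of the MQTT ClientId and shared-subscription group
# from the EnvironmentId (format inferred from the plan's examples).
ENV_ID="bbu-rfid-rnd"   # AZURE_ENV_NAME -> Bbu__Mqtt__EnvironmentId

# Stand-in for the runtime-generated unique suffix (8 hex chars).
UNIQUE=$(head -c 4 /dev/urandom | od -An -tx1 | tr -d ' \n')

CLIENT_ID="bbu-processor-${ENV_ID}-${UNIQUE}"
SHARED_GROUP="\$share/bbu-processors-${ENV_ID}/zebra/fx/+/reads"

echo "$CLIENT_ID"
echo "$SHARED_GROUP"
```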

Execution Order

  1. Stage 5 (MQTT) - Quick config change, low risk
  2. Stage 3 (Identity seeder) - Code change, needs testing
  3. Stage 1 (Superset creds) - Config + orchestration changes
  4. Stage 2 (Superset export) - Scripts + manual export step
  5. Stage 4 (DB migration) - Execute last, right before azd up

Pre-Deployment Checklist

  • Stage 5: MQTT EnvironmentId configured in bicep
  • Stage 3: Identity seeder updated and tested
  • Stage 1: Superset credentials in App Config, secrets in Key Vault
  • Stage 2: Superset definitions exported and import script ready
  • Stage 4: Database backup created
  • Azure Key Vault secrets added:
    • superset-admin-password
    • superset-secret-key
    • identity-admin-password (if using config-based admin)
    • bbu-mqtt-password (already exists)
    • identity-rsa-private-key and identity-rsa-public-key
  • Azure PostgreSQL firewall allows restore connection
  • Local database backup verified
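The Key Vault items in the checklist can be swept in one pass. A sketch that emits one az check per secret so the list can be eyeballed or piped to sh — the vault name is a placeholder:

```shell
#!/bin/bash
# Sketch: generate an `az keyvault secret show` check for each checklist
# secret. Vault name "kv-bbu-rfid-rnd" is a placeholder.
VAULT="kv-bbu-rfid-rnd"
SECRETS="superset-admin-password superset-secret-key identity-admin-password \
bbu-mqtt-password identity-rsa-private-key identity-rsa-public-key"

for s in $SECRETS; do
  echo "az keyvault secret show --vault-name $VAULT --name $s --query name -o tsv"
done
```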

Post-Deployment Verification

  1. Check db-manager container completes successfully (migrations should skip since tables exist)
  2. Verify Superset:
    • Can login with new credentials
    • Dashboards/charts imported correctly
    • Can query Acsis database
  3. Verify Identity:
    • Can login to Core UI
    • Permissions work correctly
  4. Verify BBU:
    • MQTT connects with correct EnvironmentId
    • Tag read processing resumes from checkpoint
  5. Check Application Insights for errors