# Azure Deployment Preparation Plan

## Overview

Prepare the bbu-rfid project for Azure deployment by securing credentials, packaging Superset definitions, migrating database data, and configuring MQTT environment isolation.
## Stage 1: Superset Credentials for Azure

**Goal:** Replace the hardcoded `admin`/`admin` credentials with configurable, secure credentials stored in Azure App Configuration.

**Files to Modify:**

- `projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json`
- `strata/orchestration/src/Acsis.Dynaplex.Strata.Orchestration/DynaplexInfrastructureExtensions.cs`
- `projects/bbu-rfid/resources/superset/superset_config.py`
**Implementation:**

1. Add a Superset configuration section to the AppHost `appsettings.json`:

   ```json
   "Superset": {
     "AdminUsername": "acsis-admin",
     "AdminPassword": "kv:superset-admin-password",
     "SecretKey": "kv:superset-secret-key"
   }
   ```

2. Update `AddDynaplexAnalytics()` to read the config and pass it as environment variables:
   - `SUPERSET_ADMIN_USERNAME` from config
   - `SUPERSET_ADMIN_PASSWORD` from config (supports the `kv:` prefix for Key Vault)
   - `SUPERSET_SECRET_KEY` from config

   The `docker-entrypoint.sh` already reads these env vars, so no changes are needed there.

3. Generate secure passwords for Azure and add them to Key Vault:
   - `superset-admin-password`: strong password (16+ chars, mixed case, numbers, symbols)
   - `superset-secret-key`: random string of 32+ characters for Flask session signing
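Both secrets can be generated locally with `openssl` and then pushed with the Azure CLI. A minimal sketch — the vault name `kv-bbu-rfid` is a placeholder, not the real vault:

```shell
# Generate the two Superset secrets locally.
gen_admin_password() { openssl rand -base64 18 | tr '+/' '-_'; }  # 24 chars, URL-safe
gen_secret_key()     { openssl rand -hex 32; }                    # 64 hex chars

ADMIN_PW="$(gen_admin_password)"
SECRET_KEY="$(gen_secret_key)"

# Push to Key Vault (vault name "kv-bbu-rfid" is a placeholder):
# az keyvault secret set --vault-name kv-bbu-rfid \
#   --name superset-admin-password --value "$ADMIN_PW"
# az keyvault secret set --vault-name kv-bbu-rfid \
#   --name superset-secret-key --value "$SECRET_KEY"
```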
## Stage 2: Export Superset Definitions

**Goal:** Export dashboards, charts, and datasets from local Superset and create an import mechanism for Azure.

**Approach:** Use Superset's native export CLI to create portable ZIP files.

**Steps:**

1. Create an export script at `projects/bbu-rfid/resources/superset/export-definitions.sh`:

   ```bash
   #!/bin/bash
   # Export all dashboards (includes dependent charts/datasets)
   superset export_dashboards -d /app/exports/
   ```

2. Create an import script at `projects/bbu-rfid/resources/superset/import-definitions.sh`:

   ```bash
   #!/bin/bash
   # Import from mounted exports directory
   for f in /app/imports/*.zip; do
     superset import_dashboards -p "$f"
   done
   ```

3. Export procedure (one-time, manual):

   ```bash
   # Connect to the running Superset container
   podman exec -it superset /bin/bash
   # Run the export
   superset export_dashboards -d /tmp/exports/
   # Copy the exports out of the container
   podman cp superset:/tmp/exports ./superset-exports/
   ```

4. Add the exports to the repo under `projects/bbu-rfid/resources/superset/exports/`.

5. Modify `docker-entrypoint.sh` to auto-import on first run if exports exist and no dashboards are present.
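The first-run auto-import can be guarded so container restarts don't re-import. A sketch for `docker-entrypoint.sh` — the mount path and marker file are assumptions, and the "dashboards are empty" check is simplified to a marker file here:

```shell
# First-run import guard (sketch): import each export ZIP once, then
# drop a marker file so subsequent container starts skip the import.
IMPORT_DIR="${IMPORT_DIR:-/app/imports}"
IMPORT_MARKER="${IMPORT_MARKER:-/app/superset_home/.definitions-imported}"

maybe_import_definitions() {
  [ -f "$IMPORT_MARKER" ] && return 0                 # already imported
  ls "$IMPORT_DIR"/*.zip >/dev/null 2>&1 || return 0  # nothing to import
  for f in "$IMPORT_DIR"/*.zip; do
    superset import_dashboards -p "$f"
  done
  touch "$IMPORT_MARKER"
}
```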
## Stage 3: Identity Seeder Update for Production

**Goal:** Ensure the admin user exists in production without hardcoded passwords.

**Files to Modify:**

- `engines/identity/src/Acsis.Dynaplex.Engines.Identity.Database/Seeding/IdentityReferenceDataSeeder.cs`
- `projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json`

**Implementation:**

1. Modify `SeedDevelopmentUsers()` to:
   - Check whether ANY users exist in the database
   - If the table is empty, seed the initial admin user regardless of environment
   - Read the password from configuration instead of hardcoding "admin"

   ```csharp
   // Check if any users exist
   var hasUsers = await db.Users.IgnoreQueryFilters().AnyAsync(cancellationToken);
   if (!hasUsers)
   {
       // Seed initial admin - required for first deployment
       await SeedInitialAdminUser(logger, cancellationToken);
   }
   ```

2. Add a configuration section for the initial admin:

   ```json
   "Identity": {
     // ... existing config ...
     "InitialAdmin": {
       "Username": "admin",
       "Password": "kv:identity-admin-password",
       "Email": "admin@acsis.com"
     }
   }
   ```

Note: since we're doing a full DB restore, the admin user from local will already exist with the local password. We'll need to update that password post-restore, or change it in the backup before restoring.
## Stage 4: Database Migration Strategy

**Goal:** Take a full PostgreSQL backup from local and restore it to Azure PostgreSQL.

**Tools:**

- `pg_dump` for backup
- `pg_restore` for restore
- Azure CLI for PostgreSQL access

**Pre-requisites:**

- The Azure PostgreSQL firewall must allow your IP, or use Azure Cloud Shell
- Have the admin credentials from `.azure/bbu-rfid-rnd/.env`
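The firewall rule can be added from the CLI. A sketch assuming a Flexible Server deployment — the resource group name `rg-bbu-rfid-rnd` is a placeholder:

```shell
# Open the Azure PostgreSQL firewall to the current public IP.
# Resource group "rg-bbu-rfid-rnd" is an assumption; adjust to your env.
allow_my_ip() {
  local rg="$1" server="$2" ip
  ip="$(curl -s https://ifconfig.me)"
  az postgres flexible-server firewall-rule create \
    --resource-group "$rg" \
    --name "$server" \
    --rule-name allow-restore-client \
    --start-ip-address "$ip" \
    --end-ip-address "$ip"
}
# allow_my_ip rg-bbu-rfid-rnd psql-elbbudev-db
```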
**Backup Script:** `scripts/backup-local-db.sh`

```bash
#!/bin/bash
# Backup local Acsis database
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTPUT_FILE="acsis_backup_${TIMESTAMP}.dump"

pg_dump \
  -h localhost \
  -p 15432 \
  -U postgres \
  -Fc \
  -v \
  --no-owner \
  --no-acl \
  acsis > "${OUTPUT_FILE}"

echo "Backup created: ${OUTPUT_FILE}"
```
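Before shipping the dump to Azure it can be sanity-checked with `pg_restore --list`, which reads the archive's table of contents without connecting to any database. A small helper sketch:

```shell
# Verify a custom-format dump is non-empty and readable before upload.
verify_backup() {
  local dump="$1"
  [ -s "$dump" ] || { echo "missing or empty: $dump" >&2; return 1; }
  pg_restore --list "$dump" >/dev/null \
    || { echo "unreadable archive: $dump" >&2; return 1; }
  echo "ok: $dump"
}
# verify_backup "acsis_backup_20250101_120000.dump"
```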
**Restore Script:** `scripts/restore-to-azure.sh`

```bash
#!/bin/bash
# Restore to Azure PostgreSQL
# Requires: Azure CLI, psql client tools (pg_restore)
BACKUP_FILE="${1:?Usage: $0 <backup-file>}"
AZURE_HOST="psql-elbbudev-db.postgres.database.azure.com"
AZURE_USER="acsisadmin"

# Password: export PGPASSWORD from .azure/bbu-rfid-rnd/.env,
# or let pg_restore prompt for it
pg_restore \
  -h "${AZURE_HOST}" \
  -p 5432 \
  -U "${AZURE_USER}" \
  -d acsis \
  -v \
  --no-owner \
  --no-acl \
  --clean \
  --if-exists \
  "${BACKUP_FILE}"
```
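A quick way to confirm the restore landed is to compare row counts between local and Azure for a few key tables. A sketch — the table name is a placeholder for whatever exists in the actual Acsis schema:

```shell
# Count rows in one table on a given server (expects PGPASSWORD set,
# or psql will prompt). -At gives a bare, unaligned number.
count_rows() {
  local host="$1" port="$2" user="$3" table="$4"
  psql -h "$host" -p "$port" -U "$user" -d acsis -At \
    -c "SELECT count(*) FROM ${table};"
}
# local_n="$(count_rows localhost 15432 postgres some_table)"
# azure_n="$(count_rows "$AZURE_HOST" 5432 "$AZURE_USER" some_table)"
# [ "$local_n" = "$azure_n" ] && echo "some_table: counts match"
```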
**Important Considerations:**

- **Update admin password post-restore:** since we're copying the local DB, the identity user's password needs to be updated
- **Verify checkpoint/high-water marks:** after restore, processing will resume from where local left off
- **Schema compatibility:** Azure PG 18 should be compatible with a local PG 18 dump
## Stage 5: MQTT Environment Isolation

**Goal:** Configure the MQTT `EnvironmentId` to use the Azure environment name, ensuring Azure processors don't conflict with local development.

**Files to Modify:**

- `engines/bbu/src/Acsis.Dynaplex.Engines.Bbu/Services/BbuMqttProcessor.cs` (no changes needed - uses config)
- `engines/iot/src/Acsis.Dynaplex.Engines.Iot.Abstractions/Services/MqttProcessorBase.cs` (verify EnvironmentId handling)
- `projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/appsettings.json`
- `projects/bbu-rfid/src/Acsis.Dynaplex.Projects.BbuRfid/infra/bbu/bbu.module.bicep` (add env var)
**Implementation:**

The `MqttProcessorConfiguration` already has an `EnvironmentId` property that:

- Gets appended to the ClientId and SharedSubscriptionGroup
- In Debug: auto-detects to the machine name
- In Release: must be explicitly configured

Add the environment variable in the Azure deployment:

- Pass `AZURE_ENV_NAME` to the BBU container
- Map it to the `Bbu__Mqtt__EnvironmentId` configuration key

Update the BBU bicep module to include:

```bicep
{
  name: 'Bbu__Mqtt__EnvironmentId'
  value: environmentName // From main.bicep parameter
}
```

**Result:** Azure MQTT will use:
- ClientId: `bbu-processor-bbu-rfid-rnd-<unique>`
- SharedGroup: `$share/bbu-processors-bbu-rfid-rnd/zebra/fx/+/reads`
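The composition of those identifiers can be illustrated with two tiny helpers. This is only an illustration mirroring the result above — the authoritative format lives in `MqttProcessorConfiguration`:

```shell
# Mirror how EnvironmentId is appended to the MQTT identifiers.
mqtt_client_id()    { echo "bbu-processor-$1-$2"; }  # args: env-id, unique suffix
mqtt_shared_group() { echo "\$share/bbu-processors-$1/zebra/fx/+/reads"; }
```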
## Execution Order

1. Stage 5 (MQTT) - quick config change, low risk
2. Stage 3 (Identity seeder) - code change, needs testing
3. Stage 1 (Superset creds) - config + orchestration changes
4. Stage 2 (Superset export) - scripts + manual export step
5. Stage 4 (DB migration) - execute last, right before `azd up`
## Pre-Deployment Checklist

- Stage 5: MQTT EnvironmentId configured in bicep
- Stage 3: Identity seeder updated and tested
- Stage 1: Superset credentials in App Config, secrets in Key Vault
- Stage 2: Superset definitions exported and import script ready
- Stage 4: Database backup created
- Azure Key Vault secrets added:
  - `superset-admin-password`
  - `superset-secret-key`
  - `identity-admin-password` (if using config-based admin)
  - `bbu-mqtt-password` (already exists)
  - `identity-rsa-private-key` and `identity-rsa-public-key`
- Azure PostgreSQL firewall allows restore connection
- Local database backup verified
## Post-Deployment Verification

- Check that the db-manager container completes successfully (migrations should skip since the tables already exist)
- Verify Superset:
  - Can log in with the new credentials
  - Dashboards/charts imported correctly
  - Can query the Acsis database
- Verify Identity:
  - Can log in to the Core UI
  - Permissions work correctly
- Verify BBU:
  - MQTT connects with the correct EnvironmentId
  - Tag read processing resumes from the checkpoint
- Check Application Insights for errors