fsds/prism-attribute-system-implementation-plan.md
title: 'Prism Attribute System: Implementation Spec'
Note: all SQL is PostgreSQL‑flavored. Schema names follow existing prism.*. All new objects/columns are explicitly prefixed prism.
Overview & Invariants
Responsibility Split
- prism.attribute_definitions: schema of an attribute (key, kind, options, quantity/UoM).
- prism.attribute_assignments: scope & rules for a definition at a specific object (global_id), including defaults and required/override flags.
- prism.attribute_values: explicit values at an object (only when diverging from inherited/default).
Sealing
A level seals its subtree only if it provides a value (explicit or default) and allow_override = false.
De‑Dupe Rule (Collection‑Scoped)
Within the same platform type collection, you cannot have two different definitions with the same key_slug whose assignments overlap on any object (self/children coverage). Cross‑collection duplicates are allowed.
Applicability
You may not create attribute_values unless there exists at least one applicable assignment for that (definition, object) along the object’s ancestor chain in that collection.
Multi‑select
Option collections can be select_multiple = true; values are stored in uuid[] (FK to options) in attribute_values, with corresponding typed defaults in attribute_assignments.
UoM
attribute_definitions include quantity_id and canonical unit_of_measure_id. Values and defaults are stored in canonical numeric.
Cascades
When a scope object or definition is deleted, dependent rows cascade; journals capture history.
Platform Collections: Attributes Switch
alter table prism.platform_type_collections
add column supports_attributes boolean not null default true;
Guard: block assignments on collections that don’t support attributes.
create or replace function prism.fn_assert_collection_supports_attrs()
returns trigger language plpgsql as $$
declare v_supports boolean;
begin
select pc.supports_attributes
into v_supports
from prism.passports p
join prism.platform_types pt on pt.id = p.platform_type_id
join prism.platform_type_collections pc on pc.id = pt.collection
where p.global_id = NEW.scope_global_id;
if not v_supports then
raise exception 'This collection does not support attributes';
end if;
return NEW;
end $$;
Hierarchy: Edges + Closure with Triggers (No Manual Syncing)
Tables
create table if not exists prism.object_edges (
collection_id smallint not null
references prism.platform_type_collections(id) on delete cascade,
parent_global_id uuid not null
references prism.passports(global_id) on delete cascade,
child_global_id uuid not null
references prism.passports(global_id) on delete cascade,
primary key (collection_id, parent_global_id, child_global_id)
);
create table if not exists prism.object_ancestors (
collection_id smallint not null
references prism.platform_type_collections(id) on delete cascade,
ancestor_global_id uuid not null
references prism.passports(global_id) on delete cascade,
descendant_global_id uuid not null
references prism.passports(global_id) on delete cascade,
distance int not null check (distance >= 0),
primary key (collection_id, ancestor_global_id, descendant_global_id)
);
create index if not exists "IX_object_ancestors_desc"
on prism.object_ancestors (collection_id, descendant_global_id, distance);
Trigger Functions
create or replace function prism.fn_edges_ins() returns trigger language plpgsql as $$
declare v_exists int;
begin
-- cycle guard: child cannot be an ancestor of parent
select 1 into v_exists
from prism.object_ancestors
where collection_id = NEW.collection_id
and ancestor_global_id = NEW.child_global_id
and descendant_global_id = NEW.parent_global_id
limit 1;
if found then
raise exception 'Cycle detected in collection %', NEW.collection_id;
end if;
-- ensure self rows exist
insert into prism.object_ancestors(collection_id, ancestor_global_id, descendant_global_id, distance)
values (NEW.collection_id, NEW.child_global_id, NEW.child_global_id, 0)
on conflict do nothing;
insert into prism.object_ancestors(collection_id, ancestor_global_id, descendant_global_id, distance)
values (NEW.collection_id, NEW.parent_global_id, NEW.parent_global_id, 0)
on conflict do nothing;
-- insert closure: for each ancestor A of parent and each descendant D of child
insert into prism.object_ancestors (collection_id, ancestor_global_id, descendant_global_id, distance)
select NEW.collection_id, a.ancestor_global_id, d.descendant_global_id, a.distance + 1 + d.distance
from prism.object_ancestors a
join prism.object_ancestors d on d.collection_id = NEW.collection_id
where a.collection_id = NEW.collection_id
and a.descendant_global_id = NEW.parent_global_id
and d.ancestor_global_id = NEW.child_global_id
on conflict do nothing;
return NEW;
end $$;
create or replace function prism.fn_edges_del() returns trigger language plpgsql as $$
begin
-- delete closure rows that depend solely on this path
delete from prism.object_ancestors oa
using prism.object_ancestors a_parent, prism.object_ancestors d_child
where oa.collection_id = OLD.collection_id
and a_parent.collection_id = OLD.collection_id
and d_child.collection_id = OLD.collection_id
and a_parent.descendant_global_id = OLD.parent_global_id
and d_child.ancestor_global_id = OLD.child_global_id
and oa.ancestor_global_id = a_parent.ancestor_global_id
and oa.descendant_global_id = d_child.descendant_global_id
and oa.distance = a_parent.distance + 1 + d_child.distance
and not exists ( -- is there an alternate path?
select 1
from prism.object_edges e2
where e2.collection_id = OLD.collection_id
and e2.parent_global_id in (
select ancestor_global_id
from prism.object_ancestors
where collection_id = OLD.collection_id
and descendant_global_id = OLD.parent_global_id)
and e2.child_global_id in (
select descendant_global_id
from prism.object_ancestors
where collection_id = OLD.collection_id
and ancestor_global_id = OLD.child_global_id)
);
return OLD;
end $$;
drop trigger if exists trg_edges_ins on prism.object_edges;
create trigger trg_edges_ins
after insert on prism.object_edges
for each row execute function prism.fn_edges_ins();
drop trigger if exists trg_edges_del on prism.object_edges;
create trigger trg_edges_del
after delete on prism.object_edges
for each row execute function prism.fn_edges_del();
Usage: Your code only inserts/deletes in object_edges. The closure stays correct automatically.
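The edge/closure mechanics can be sanity-checked with a small in-memory model (Python, purely illustrative; class and method names are ours, not part of the schema). Inserting an edge joins the parent's ancestors with the child's descendants, exactly as fn_edges_ins does; deletion is harder because of alternate paths, which is why fn_edges_del re-checks object_edges, and the sketch omits it.

```python
# In-memory model of prism.object_edges / prism.object_ancestors (illustrative).
# closure maps (ancestor, descendant) -> distance, including self rows at 0.

class Hierarchy:
    def __init__(self):
        self.edges = set()      # (parent, child)
        self.closure = {}       # (ancestor, descendant) -> distance

    def _ensure_self(self, node):
        self.closure.setdefault((node, node), 0)

    def add_edge(self, parent, child):
        # cycle guard: child must not already be an ancestor of parent
        if (child, parent) in self.closure:
            raise ValueError("cycle detected")
        self._ensure_self(parent)
        self._ensure_self(child)
        self.edges.add((parent, child))
        # join every ancestor A of parent with every descendant D of child
        parents_anc = [(a, dist) for (a, d), dist in self.closure.items() if d == parent]
        childs_desc = [(d, dist) for (a, d), dist in self.closure.items() if a == child]
        for a, da in parents_anc:
            for d, dd in childs_desc:
                self.closure.setdefault((a, d), da + 1 + dd)

    def ancestors(self, node):
        return {a for (a, d) in self.closure if d == node and a != node}
```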
Attribute Definitions (Schema Table)
Columns Changes
Keep the following columns: id uuid PK, platform_type_id, quantity_id, unit_of_measure_id, name, options_collection_id (FK), sort_order.
Add the following:
alter table prism.attribute_definitions
add column key_slug varchar(128) not null,
add column kind varchar(32) not null check (kind in ('quantity','option','text','bool','datetime')),
add unique (platform_type_id, key_slug); -- scoped per platform type: a global unique on key_slug would forbid the cross-collection duplicates the De-Dupe Rule explicitly allows
Columns to Remove / Move to Assignments Table
Move out of attribute_definitions:
- associated_object_id, applies_to_self, allow_override, required, default_value → into attribute_assignments.
- Consider moving sort_order to assignments (ordering can be scope‑specific). If you keep it here, it’s global.
Kind <-> Options Guard
alter table prism.attribute_definitions
add constraint chk_kind_vs_options
check (
(kind = 'option' and options_collection_id is not null) or
(kind <> 'option' and options_collection_id is null)
);
Option Collections (multi‑select)
alter table prism.attribute_option_collections
add column select_multiple boolean not null default false;
Attribute Assignments (scope & rules)
create table prism.attribute_assignments
(
id uuid primary key
references prism.passports(global_id) on delete restrict,
attribute_definition_id uuid not null
references prism.attribute_definitions(id) on delete cascade,
scope_global_id uuid not null
references prism.passports(global_id) on delete cascade,
applies_to_self boolean not null default true,
applies_to_children boolean not null default false,
allow_override boolean not null default true,
required boolean not null default false,
sort_order smallint not null default 0,
-- typed defaults (one field populated according to definition.kind)
default_text text,
default_numeric numeric(38,10),
default_bool boolean,
default_timestamp timestamptz,
default_option_id uuid references prism.attribute_options(id),
default_option_ids uuid[],
default_uom_id uuid references prism.units_of_measure(id),
unique (attribute_definition_id, scope_global_id),
constraint chk_assignment_onehot
check (
(case when default_text is not null then 1 else 0 end) +
(case when default_numeric is not null then 1 else 0 end) +
(case when default_bool is not null then 1 else 0 end) +
(case when default_timestamp is not null then 1 else 0 end) +
(case when default_option_id is not null then 1 else 0 end) +
(case when default_option_ids is not null then 1 else 0 end)
<= 1
)
);
create index "IX_attribute_assignments_scope"
on prism.attribute_assignments (scope_global_id);
create index "IX_attribute_assignments_def"
on prism.attribute_assignments (attribute_definition_id);
-- Guard collection support
create trigger trg_assign_collection_guard
before insert or update on prism.attribute_assignments
for each row execute function prism.fn_assert_collection_supports_attrs();
Default Sanity (Kind Aware) Trigger
create or replace function prism.fn_assign_shape_guard() returns trigger language plpgsql as $$
declare v_kind text; v_coll uuid; v_mult bool; v_canonical_uom uuid;
begin
select ad.kind, ad.options_collection_id, ad.unit_of_measure_id
into v_kind, v_coll, v_canonical_uom
from prism.attribute_definitions ad
where ad.id = NEW.attribute_definition_id;
if v_kind = 'option' then
select oc.select_multiple into v_mult from prism.attribute_option_collections oc where oc.id = v_coll;
if v_mult then
if NEW.default_option_ids is null then return NEW; end if;
-- verify all members belong to the collection
if exists (
select 1
from unnest(NEW.default_option_ids) oid
left join prism.attribute_options o on o.id = oid
where o.collection_id is distinct from v_coll
) then
raise exception 'Default options include IDs from the wrong collection';
end if;
if NEW.default_option_id is not null then
raise exception 'Use default_option_ids for multi-select collections';
end if;
else
if NEW.default_option_id is not null then
perform 1 from prism.attribute_options o where o.id = NEW.default_option_id and o.collection_id = v_coll;
if not found then raise exception 'Default option does not belong to the collection'; end if;
end if;
if NEW.default_option_ids is not null then
raise exception 'Use default_option_id for single-select collections';
end if;
end if;
elsif v_kind = 'quantity' then
if NEW.default_numeric is not null and NEW.default_uom_id is not null and NEW.default_uom_id <> v_canonical_uom then
raise exception 'Quantity defaults must be in canonical UoM';
end if;
if NEW.default_text is not null or NEW.default_option_id is not null or NEW.default_option_ids is not null then
raise exception 'Quantity defaults must use default_numeric (and optional canonical UoM)';
end if;
elsif v_kind = 'text' then
-- no extra shape checks; a required assignment without a default simply
-- forces descendants to supply explicit values at write time
null;
elsif v_kind = 'bool' then
null; -- nothing special
elsif v_kind = 'datetime' then
null;
end if;
return NEW;
end $$;
create trigger trg_assign_shape_guard
before insert or update on prism.attribute_assignments
for each row execute function prism.fn_assign_shape_guard();
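The decision table the guard implements, one default slot per kind, can be summarized in a small Python sketch (illustrative only; names are ours and the DB trigger remains the authority; option-membership checks against the collection are omitted here):

```python
# Which default column may be populated for each definition kind -- mirrors
# prism.fn_assign_shape_guard (illustrative; the DB trigger is authoritative).

def check_default_shape(kind, defaults, select_multiple=False):
    """defaults: dict with optional keys text/numeric/bool/timestamp/option_id/option_ids."""
    slots = ("text", "numeric", "bool", "timestamp", "option_id", "option_ids")
    populated = [k for k in slots if defaults.get(k) is not None]
    if len(populated) > 1:
        raise ValueError("at most one default may be set (one-hot)")
    allowed = {
        "text": "text",
        "quantity": "numeric",
        "bool": "bool",
        "datetime": "timestamp",
        "option": "option_ids" if select_multiple else "option_id",
    }[kind]
    if populated and populated[0] != allowed:
        raise ValueError(f"{kind} defaults must use default_{allowed}")
    return True
```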
Attribute Values (Typed, plus Multi‑Select Arrays)
Replace the legacy value text column with typed columns.
alter table prism.attribute_values
add column value_text text,
add column value_numeric numeric(38,10),
add column value_bool boolean,
add column value_timestamp timestamptz,
add column value_option_id uuid references prism.attribute_options(id),
add column value_option_ids uuid[],
add column value_uom_id uuid references prism.units_of_measure(id);
-- unique, single-valued only (multi-select uses array but still unique container)
create unique index if not exists "UQ_attr_value_object_def"
on prism.attribute_values (associated_object_id, attribute_definition_id);
-- One-hot constraint; array treated as “present” if not null
alter table prism.attribute_values
add constraint chk_value_onehot
check (
(case when value_text is not null then 1 else 0 end) +
(case when value_numeric is not null then 1 else 0 end) +
(case when value_bool is not null then 1 else 0 end) +
(case when value_timestamp is not null then 1 else 0 end) +
(case when value_option_id is not null then 1 else 0 end) +
(case when value_option_ids is not null then 1 else 0 end)
<= 1
);
Value Shape, Applicability, and Sealing (BEFORE Triggers)
-- A) shape (kind-aware)
create or replace function prism.fn_value_shape_guard() returns trigger language plpgsql as $$
declare v_kind text; v_coll uuid; v_mult bool; v_canonical_uom uuid;
begin
select ad.kind, ad.options_collection_id, ad.unit_of_measure_id
into v_kind, v_coll, v_canonical_uom
from prism.attribute_definitions ad
where ad.id = NEW.attribute_definition_id;
if v_kind = 'option' then
select oc.select_multiple into v_mult from prism.attribute_option_collections oc where oc.id = v_coll;
if v_mult then
if NEW.value_option_ids is null then
raise exception 'Multi-select attribute requires value_option_ids[]';
end if;
if exists (
select 1
from unnest(NEW.value_option_ids) oid
left join prism.attribute_options o on o.id = oid
where o.collection_id is distinct from v_coll
) then
raise exception 'One or more options do not belong to the attribute''s collection';
end if;
if NEW.value_option_id is not null then
raise exception 'Use value_option_ids for multi-select';
end if;
else
if NEW.value_option_id is null then
raise exception 'Single-select attribute requires value_option_id';
end if;
perform 1 from prism.attribute_options o where o.id = NEW.value_option_id and o.collection_id = v_coll;
if not found then raise exception 'Option does not belong to the attribute''s collection'; end if;
if NEW.value_option_ids is not null then
raise exception 'Do not set value_option_ids for single-select';
end if;
end if;
elsif v_kind = 'quantity' then
if NEW.value_numeric is null then
raise exception 'Quantity attribute requires value_numeric';
end if;
if NEW.value_uom_id is not null and NEW.value_uom_id <> v_canonical_uom then
raise exception 'Quantity value must be in canonical UoM';
end if;
elsif v_kind = 'text' then
if NEW.value_text is null then raise exception 'Text attribute requires value_text'; end if;
elsif v_kind = 'bool' then
if NEW.value_bool is null then raise exception 'Bool attribute requires value_bool'; end if;
elsif v_kind = 'datetime' then
if NEW.value_timestamp is null then raise exception 'Datetime attribute requires value_timestamp'; end if;
end if;
return NEW;
end $$;
-- B) applicability (there must be an assignment in the chain)
create or replace function prism.fn_value_applicability_guard() returns trigger language plpgsql as $$
declare v_collection smallint;
begin
-- Get collection of the target object
select pt.collection into v_collection
from prism.passports p
join prism.platform_types pt on pt.id = p.platform_type_id
where p.global_id = NEW.associated_object_id;
-- Is there any assignment that applies to this object for this definition?
if not exists (
with anc as (
select ancestor_global_id as aid, distance
from prism.object_ancestors
where collection_id = v_collection
and descendant_global_id = NEW.associated_object_id
)
select 1
from anc
join prism.attribute_assignments aa on aa.scope_global_id = anc.aid
where aa.attribute_definition_id = NEW.attribute_definition_id
and (
(aa.applies_to_self and aa.scope_global_id = NEW.associated_object_id)
or aa.applies_to_children
)
limit 1
) then
raise exception 'No applicable assignment exists for this attribute on this object';
end if;
return NEW;
end $$;
-- C) sealing (reject explicit value if an ancestor provides sealed value)
create or replace function prism.fn_value_sealing_guard() returns trigger language plpgsql as $$
declare v_collection smallint; v_kind text;
begin
select pt.collection into v_collection
from prism.passports p
join prism.platform_types pt on pt.id = p.platform_type_id
where p.global_id = NEW.associated_object_id;
select ad.kind into v_kind
from prism.attribute_definitions ad
where ad.id = NEW.attribute_definition_id;
if exists (
with anc as (
select ancestor_global_id as aid, distance
from prism.object_ancestors
where collection_id = v_collection
and descendant_global_id = NEW.associated_object_id
),
aa as (
select a.*, anc.distance
from prism.attribute_assignments a
join anc on anc.aid = a.scope_global_id
where a.attribute_definition_id = NEW.attribute_definition_id
),
providers as (
select aa.scope_global_id,
aa.allow_override = false as sealed,
(
exists( -- explicit value at that scope
select 1 from prism.attribute_values v
where v.associated_object_id = aa.scope_global_id
and v.attribute_definition_id = aa.attribute_definition_id
limit 1
)
or -- or a default exists at that scope
(case v_kind
when 'option' then (aa.default_option_id is not null or aa.default_option_ids is not null)
when 'quantity' then aa.default_numeric is not null
when 'text' then aa.default_text is not null
when 'bool' then aa.default_bool is not null
when 'datetime' then aa.default_timestamp is not null
end)
) as has_value
from aa
where (aa.applies_to_self and aa.scope_global_id = NEW.associated_object_id)
or aa.applies_to_children
)
select 1 from providers where sealed and has_value limit 1
) then
raise exception 'Write rejected: an ancestor provides a sealed value for this attribute';
end if;
return NEW;
end $$;
create trigger trg_attr_values_shape
before insert or update on prism.attribute_values
for each row execute function prism.fn_value_shape_guard();
create trigger trg_attr_values_applicability
before insert or update on prism.attribute_values
for each row execute function prism.fn_value_applicability_guard();
create trigger trg_attr_values_sealing
before insert or update on prism.attribute_values
for each row execute function prism.fn_value_sealing_guard();
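Guards B and C boil down to a walk over the ancestor chain. A hedged Python sketch of that logic (our names and data shapes; the SQL triggers are authoritative):

```python
# Illustrative sketch of the applicability and sealing guards: walk the ancestor
# chain, collect assignments that apply, reject the write if none applies or if
# some applicable provider is sealed (allow_override=false) and has a value.

def can_write_value(obj, definition_id, ancestors_of, assignments, has_value_at):
    """ancestors_of(obj) -> iterable of (scope, distance), including obj at 0.
    assignments: {(definition_id, scope): {"self": bool, "children": bool,
                                           "allow_override": bool, "has_default": bool}}
    has_value_at(scope, definition_id) -> bool (explicit value present at scope)."""
    applicable = sealed_provider = False
    for scope, _dist in ancestors_of(obj):
        a = assignments.get((definition_id, scope))
        if a is None:
            continue
        applies = (a["self"] and scope == obj) or a["children"]
        if not applies:
            continue
        applicable = True
        provides = a["has_default"] or has_value_at(scope, definition_id)
        if provides and not a["allow_override"]:
            sealed_provider = True
    if not applicable:
        raise ValueError("no applicable assignment")
    if sealed_provider:
        raise ValueError("sealed by ancestor")
    return True
```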
De‑Dupe by Collection Using key_slug
Prevent overlapping applicability of different definitions with the same key_slug in the same collection.
create or replace function prism.fn_assert_no_slug_collision()
returns trigger language plpgsql as $$
declare v_collection smallint; v_slug text;
begin
select pt.collection into v_collection
from prism.passports p
join prism.platform_types pt on pt.id = p.platform_type_id
where p.global_id = NEW.scope_global_id;
select ad.key_slug into v_slug
from prism.attribute_definitions ad
where ad.id = NEW.attribute_definition_id;
with new_targets as (
-- self
select NEW.scope_global_id as oid where NEW.applies_to_self
union all
-- children
select oa.descendant_global_id
from prism.object_ancestors oa
where NEW.applies_to_children
and oa.collection_id = v_collection
and oa.ancestor_global_id = NEW.scope_global_id
),
other as (
select aa.*
from prism.attribute_assignments aa
join prism.attribute_definitions ad2 on ad2.id = aa.attribute_definition_id
join prism.passports p2 on p2.global_id = aa.scope_global_id
join prism.platform_types pt2 on pt2.id = p2.platform_type_id
where ad2.key_slug = v_slug
and pt2.collection = v_collection
and aa.id is distinct from NEW.id
and aa.attribute_definition_id <> NEW.attribute_definition_id -- different definition
),
other_targets as (
-- children covered by the other assignment
select a.descendant_global_id as oid
from other o
join prism.object_ancestors a
on a.collection_id = v_collection
and a.ancestor_global_id = o.scope_global_id
where o.applies_to_children
union
-- the other assignment's own scope
select o.scope_global_id
from other o
where o.applies_to_self
)
select 1
from new_targets nt
join other_targets ot using (oid)
limit 1
into v_collection; -- throwaway target; FOUND tells us whether the scopes overlap
if found then
raise exception 'Duplicate key_slug within collection on overlapping scopes: %', v_slug
using errcode = '23514';
end if;
return NEW;
end $$;
create trigger trg_assign_dedup
before insert or update on prism.attribute_assignments
for each row execute function prism.fn_assert_no_slug_collision();
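The overlap test reduces to a set intersection over each assignment's coverage. A Python sketch of the rule (illustrative; names are ours and the trigger is the enforcement point):

```python
# Collection-scoped de-dupe: two DIFFERENT definitions sharing a key_slug must
# not cover any common object (self scope plus descendants when applies_to_children).

def coverage(assignment, descendants_of):
    """Objects an assignment applies to: its scope (if self) plus all descendants."""
    out = set()
    if assignment["self"]:
        out.add(assignment["scope"])
    if assignment["children"]:
        out |= descendants_of(assignment["scope"])
    return out

def slug_collision(new_assignment, existing, descendants_of):
    new_cov = coverage(new_assignment, descendants_of)
    for other in existing:
        if other["slug"] != new_assignment["slug"]:
            continue
        if other["definition"] == new_assignment["definition"]:
            continue  # the same definition may legitimately appear at several scopes
        if new_cov & coverage(other, descendants_of):
            return True
    return False
```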
Cascades (Flexibility and Journal Tables)
-- Assignments die with their definition or scope
alter table prism.attribute_assignments
add constraint fk_assign_def
foreign key (attribute_definition_id) references prism.attribute_definitions(id) on delete cascade,
add constraint fk_assign_scope
foreign key (scope_global_id) references prism.passports(global_id) on delete cascade;
-- Values die with their object; definition delete cascades values too
alter table prism.attribute_values
drop constraint if exists "FK_attribute_values_attribute_definitions_attribute_definition~",
add constraint fk_value_def
foreign key (attribute_definition_id) references prism.attribute_definitions(id) on delete cascade;
alter table prism.attribute_values
drop constraint if exists "FK_attribute_values_passports_associated_object_id",
add constraint fk_value_obj
foreign key (associated_object_id) references prism.passports(global_id) on delete cascade;
-- Options cascade to values (journal captures meaning loss)
alter table prism.attribute_values
add constraint fk_value_option
foreign key (value_option_id) references prism.attribute_options(id) on delete cascade;
Journals: Mirror Tables and Blocking Direct DDL
Audit Schema and Control Flag
create schema if not exists audit;
-- session flag to allow managed DDL
-- (use: set local audit.allow_ddl = 'on' inside wrapper)
Enable Journaling for a Base Table
create or replace function audit.enable_journaling(base_table regclass, pk_cols text[])
returns void language plpgsql as $fn$ -- $fn$ tag, so the template below may mention $$-style bodies safely
declare schm name; base name; jn name; ddl text;
begin
select n.nspname, c.relname into schm, base
from pg_class c join pg_namespace n on n.oid = c.relnamespace
where c.oid = base_table;
jn := base || '_journal';
-- create journal table as LIKE base INCLUDING ALL
execute format('create table if not exists %I.%I (like %I.%I including all)',
schm, jn, schm, base);
-- add journal columns if missing
perform 1
from information_schema.columns
where table_schema = schm and table_name = jn and column_name = 'seq_id';
if not found then
execute format('alter table %I.%I add column seq_id integer not null', schm, jn);
execute format('alter table %I.%I add column valid_from timestamptz not null default now()', schm, jn);
execute format('alter table %I.%I add column valid_to timestamptz not null default ''infinity''::timestamptz', schm, jn);
execute format('alter table %I.%I add column initiated_by_user_id uuid', schm, jn);
execute format('alter table %I.%I add column invalidated_by_user_id uuid', schm, jn);
end if;
-- primary key: (pk_cols..., seq_id)
ddl := format('alter table %I.%I drop constraint if exists %I; ', schm, jn, 'PK_'||jn)
|| format('alter table %I.%I add primary key (%s, seq_id)',
schm, jn, array_to_string(pk_cols, ','));
execute ddl;
-- DML trigger function (TEMPLATE -- not executable as-is). The implementer must
-- generate one concrete function per table, splicing in the pk predicate
-- ("id = NEW.id" / "id = OLD.id") and dollar-quoting the body with a tag other
-- than $fn$:
--
-- create or replace function audit.fn_journal_<schm>_<base>()
-- returns trigger language plpgsql as $t$
-- declare v_seq int;
-- begin
--   if tg_op = 'INSERT' then
--     select coalesce(max(seq_id),0)+1 into v_seq
--     from <schm>.<base>_journal where <pk predicate on NEW>;
--     insert into <schm>.<base>_journal
--     select NEW.*, v_seq, now(), 'infinity'::timestamptz,
--            current_setting('app.user_id', true)::uuid, null;
--     return NEW;
--   elsif tg_op = 'UPDATE' then
--     -- close the previous open row, then write the new one
--     update <schm>.<base>_journal
--     set valid_to = now(),
--         invalidated_by_user_id = current_setting('app.user_id', true)::uuid
--     where <pk predicate on NEW> and valid_to = 'infinity';
--     select coalesce(max(seq_id),0)+1 into v_seq
--     from <schm>.<base>_journal where <pk predicate on NEW>;
--     insert into <schm>.<base>_journal
--     select NEW.*, v_seq, now(), 'infinity'::timestamptz,
--            current_setting('app.user_id', true)::uuid, null;
--     return NEW;
--   elsif tg_op = 'DELETE' then
--     update <schm>.<base>_journal
--     set valid_to = now(),
--         invalidated_by_user_id = current_setting('app.user_id', true)::uuid
--     where <pk predicate on OLD> and valid_to = 'infinity';
--     return OLD;
--   end if;
-- end $t$;
--
-- and attach it:
-- create trigger trg_journal_<schm>_<base>
-- after insert or update or delete on <schm>.<base>
-- for each row execute function audit.fn_journal_<schm>_<base>();
end $fn$;
Important Note: The template above shows the pattern. The dev/agent implementing the solution should implement audit.enable_journaling concretely for the tracked tables (definitions, assignments, values, option_collections, options) where the PK is id (or (id, seq_id) in journals). That keeps the code compact and robust.
Blocking Direct DDL and Providing a Wrapper
-- Block direct ALTER/CREATE/DROP on tracked tables unless bypass is set
create or replace function audit.fn_block_direct_ddl()
returns event_trigger language plpgsql as $$
declare obj record;
begin
if current_setting('audit.allow_ddl', true) = 'on' then
return;
end if;
for obj in select * from pg_event_trigger_ddl_commands() loop
if obj.schema_name = 'prism'
and obj.object_type = 'table'
and right(obj.object_identity, 8) <> '_journal' -- base tables only
and obj.command_tag in ('ALTER TABLE','DROP TABLE') then
raise exception 'Direct DDL on % is blocked. Use audit.sync_ddl(...)', obj.object_identity;
end if;
end loop;
end $$;
drop event trigger if exists trg_block_direct_ddl;
create event trigger trg_block_direct_ddl
on ddl_command_end -- pg_event_trigger_ddl_commands() is only available at ddl_command_end; raising here still aborts the statement
execute function audit.fn_block_direct_ddl();
-- Wrapper to apply the same ALTER to base and journal
create or replace function audit.sync_ddl(base_table regclass, ddl_clause text)
returns void language plpgsql as $$
declare schm name; base name; jn name;
begin
select n.nspname, c.relname into schm, base
from pg_class c join pg_namespace n on n.oid = c.relnamespace
where c.oid = base_table;
jn := base||'_journal';
perform set_config('audit.allow_ddl','on', true);
execute format('alter table %I.%I %s', schm, base, ddl_clause);
execute format('alter table %I.%I %s', schm, jn, ddl_clause);
perform set_config('audit.allow_ddl','off', true);
end $$;
DML journaling triggers
After you create *_journal with audit.enable_journaling, attach the concrete per‑table INSERT/UPDATE/DELETE triggers that copy NEW/OLD into journal and maintain valid_*. (The template above shows the approach; the dev/agent must implement the concrete functions with id as the PK.)
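The valid_from/valid_to semantics those per-table triggers must implement can be sketched in Python (illustrative only; column names follow the spec, an integer clock stands in for timestamps):

```python
# Append-only journal: every write appends a row; UPDATE/DELETE first close the
# currently open row (valid_to = 'infinity' marks the live version).

INF = float("inf")

class Journal:
    def __init__(self):
        self.rows = []   # dicts: pk, seq_id, data, valid_from, valid_to
        self.clock = 0

    def _tick(self):
        self.clock += 1
        return self.clock

    def _close_open(self, pk, now):
        for r in self.rows:
            if r["pk"] == pk and r["valid_to"] == INF:
                r["valid_to"] = now

    def _next_seq(self, pk):
        return max((r["seq_id"] for r in self.rows if r["pk"] == pk), default=0) + 1

    def record_insert(self, pk, data):
        now = self._tick()
        self.rows.append({"pk": pk, "seq_id": self._next_seq(pk),
                          "data": data, "valid_from": now, "valid_to": INF})

    def record_update(self, pk, data):
        now = self._tick()
        self._close_open(pk, now)
        self.rows.append({"pk": pk, "seq_id": self._next_seq(pk),
                          "data": data, "valid_from": now, "valid_to": INF})

    def record_delete(self, pk):
        self._close_open(pk, self._tick())

    def current(self, pk):
        live = [r for r in self.rows if r["pk"] == pk and r["valid_to"] == INF]
        return live[0]["data"] if live else None
```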
Read Path: Effective Attributes
Key idea: fold by attribute_definition_id, not by name, so cross‑collection duplicates both show up.
SQL for Retrieving Candidate Providers
inputs: object_id
with obj as (
select p.global_id as oid, pt.collection
from prism.passports p
join prism.platform_types pt on pt.id = p.platform_type_id
where p.global_id = :object_id
),
anc as (
select oa.ancestor_global_id as aid, oa.distance, o.collection
from obj o
join prism.object_ancestors oa
on oa.collection_id = o.collection
and oa.descendant_global_id = o.oid
),
assigns as (
select aa.*, anc.distance, ad.kind, ad.key_slug, ad.name,
case ad.kind
when 'quantity' then (aa.default_numeric is not null)
when 'option' then (aa.default_option_id is not null or aa.default_option_ids is not null)
when 'text' then (aa.default_text is not null)
when 'bool' then (aa.default_bool is not null)
when 'datetime' then (aa.default_timestamp is not null)
end as has_default
from anc
join prism.attribute_assignments aa on aa.scope_global_id = anc.aid
join prism.attribute_definitions ad on ad.id = aa.attribute_definition_id
where (aa.applies_to_self and aa.scope_global_id = (select oid from obj))
or aa.applies_to_children
),
explicit as (
select v.*
from prism.attribute_values v
where v.associated_object_id in (select aid from anc)
),
-- choose nearest provider per definition (fold is done in app; this is a flat set)
providers as (
select
a.attribute_definition_id,
a.kind, a.key_slug, a.name,
a.scope_global_id, a.distance,
a.allow_override,
-- flags:
a.has_default,
(exists (select 1 from explicit e where e.associated_object_id = a.scope_global_id and e.attribute_definition_id = a.attribute_definition_id)) as has_explicit
from assigns a
)
select * from providers
order by attribute_definition_id, distance asc, allow_override asc;
Evaluator C# Sketch
// fold by definition id; walk providers farthest-first so a sealed ancestor
// wins, and otherwise the nearest provider overrides farther ones
foreach (var defGroup in providers.GroupBy(p => p.DefId)) {
    EffectiveValue? chosen = null;
    foreach (var p in defGroup.OrderByDescending(p => p.Distance)) {
        if (chosen != null && chosen.Sealed) break; // sealed ancestor fixes the subtree
        var hasValue = p.HasExplicit || p.HasDefault;
        if (!hasValue) continue;
        chosen = new EffectiveValue(p, !p.AllowOverride);
    }
    // explicit value on the target object wins unless sealed from above
    var ev = explicitOnObject.TryGet(defGroup.Key);
    if (ev != null && (chosen == null || !chosen.Sealed)) {
        chosen = new EffectiveValueFromExplicit(ev);
    }
    if (chosen != null) yield return chosen;
}
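The fold is also easy to verify as a runnable Python sketch (field names assume the providers query above; we walk farthest-first so a sealed ancestor wins and the nearest unsealed provider otherwise prevails):

```python
# Illustrative Python version of the evaluator fold: per definition, a sealed
# ancestor's value wins for the whole subtree; otherwise the nearest provider
# wins, and an explicit value on the target object wins over inherited defaults.

def fold_effective(providers, explicit_on_object):
    """providers: dicts with def_id, distance, allow_override, has_default,
    has_explicit. explicit_on_object: {def_id: value} for the target object."""
    out = {}
    by_def = {}
    for p in providers:
        by_def.setdefault(p["def_id"], []).append(p)
    for def_id, group in by_def.items():
        chosen = None
        # farthest ancestor first; nearer providers override unless sealed above
        for p in sorted(group, key=lambda p: -p["distance"]):
            if chosen is not None and chosen["sealed"]:
                break
            if not (p["has_explicit"] or p["has_default"]):
                continue
            chosen = {"source": p, "sealed": not p["allow_override"]}
        ev = explicit_on_object.get(def_id)
        if ev is not None and (chosen is None or not chosen["sealed"]):
            chosen = {"source": "explicit", "sealed": False, "value": ev}
        if chosen is not None:
            out[def_id] = chosen
    return out
```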
API Contracts (Minimal Set)
GET /passports/
List of effective attributes (one per attribute_definition_id):
{
definitionId,
keySlug,
name,
kind,
value {
typed
},
source {
scopeGlobalId,
distance,
sealed,
fromDefault
},
required,
collectionId
}
PUT /passports/
Body: typed value (match kind).
Errors: 409 sealed, 400 no-applicable-assignment, 400 kind-mismatch.
DELETE /passports/
Removes explicit value; falls back to inherited/default.
POST /attribute-definitions
{
keySlug,
name,
kind,
quantityId?,
unitOfMeasureId?,
optionsCollectionId?
}
POST /attribute-assignments
{
definitionId,
scopeGlobalId,
appliesToSelf,
appliesToChildren,
allowOverride,
required,
defaults...
}
Errors: 409 duplicate-keySlug-overlap-in-collection.
Acceptance Tests
Sealed ancestor rejects child write
- Parent assignment: default present, allow_override=false.
- Writing explicit at child → 409/DB error.
Unsealed parent allows child write
- Parent default present, allow_override=true.
- Child explicit wins; deleting child explicit restores parent default.
No applicable assignment rejects write
- No assignment in chain → DB error.
De‑dupe by collection
- Create assignment A (def with key_slug=weight) at Type.
- Attempt to create assignment B (different def with same key_slug) at Item under that Type with overlapping applicability → DB error.
- Same key_slug in a different collection → allowed.
Multi‑select
- For an attribute with select_multiple=true, writing value_option_ids works and validates membership.
- Writing value_option_id instead → DB error.
Quantity canonical
- Values/defaults in non‑canonical UoM rejected.
Cascade
- Delete definition → assignments and values cascade; journal entries emitted.
Closure sync
- Insert edge P→C populates closure for ancestors/descendants; delete edge removes only dependent closure rows; cycle insert rejected.
Required
- If an ancestor assignment is required=true and has no provider, creating/updating a child without an explicit value fails in the service (and can be DB‑enforced with a write‑time check for strictness).
EF Core and Roslyn
Add a CI step (SQL) that fails if any column names match %PassportGlobalId% or other legacy cruft. Optionally add a Roslyn rule to flag POCOs or EF configurations that map non‑existent columns or define duplicate keys. (You've got the analyzer infra; this is a good place to enforce naming & key conventions.)
-- CI sanity: orphan legacy columns/indexes
select table_schema, table_name, column_name
from information_schema.columns
where (column_name ilike '%PassportGlobalId%')
and table_schema='prism';
select schemaname, tablename, indexname
from pg_indexes
where schemaname='prism'
and indexdef ilike '%PassportGlobalId%';
Implementation Order
1. Create object_edges, object_ancestors, and the two edge triggers.
2. Modify attribute_definitions (add key_slug, kind; move scope/default columns out).
3. Create attribute_assignments + guards (collection support, shape, de‑dupe).
4. Extend attribute_values with typed columns + shape/applicability/sealing triggers.
5. Add cascades (definitions→assignments/values, passports→assignments/values, options).
6. Add select_multiple and triggers for multi‑select validation.
7. Install journaling for attribute_definitions, attribute_assignments, attribute_values, attribute_option_collections, attribute_options — use audit.enable_journaling('prism.attribute_assignments'::regclass, ARRAY['id']) etc.
8. Add the block‑direct‑DDL event trigger; use audit.sync_ddl for schema changes.
9. Implement the evaluator in the service and wire API endpoints.