| Typical timeline | 12–18 months |
| Typical cost | $1–3M per migration |
| Failure rate | ~60% miss original timeline |
| Auditor acceptance | Often rejected — manual transformations are not defensibly traceable |
| Security stance | Non-negotiable: data cannot leave their network, cannot use public AI, cannot ship schemas to a SaaS vendor |
Big consultancies (Accenture, Deloitte, IBM): slow, expensive, every project rebuilt from scratch — knowledge walks out the door.
Generic ETL tools (Informatica, Talend): don't understand GxP, no audit defensibility, no domain knowledge of quality systems.
Veeva's migration services: Veeva-only, capacity-constrained, expensive. Doesn't help non-Veeva targets.
In-house scripts: fast but unauditable. Regulators reject them. Tribal knowledge that doesn't survive turnover.
| Where does our data go? | Will any record, attachment, or schema leave our network — even temporarily, even to your cloud? |
| Who has access? | Can your engineers see our data? Can your AI vendor see it? Can anyone outside our company see it? |
| What does the AI see? | If you use AI for mapping, does our quality information flow through OpenAI, Anthropic, or any external service? |
| Can we prove what happened? | When a regulator asks why a record was transformed a particular way, can we show them — for every record? |
| What if you go away? | If your company disappears next year, does our migration stop working? Are we locked in? |
| How do we control access? | Can our existing identity system (AD, Okta) control who logs in? Can we revoke access using our normal offboarding process? |
| Where does data go? | Nowhere. CLAIRE app + MSSQL + Redis run as a Docker stack inside their data center. No callback. No upload. No telemetry. |
| Who has access? | Only their staff. Their CLAIRE realm runs on their MSSQL. Our SE configures alongside them, on their network, under their security policy. |
| What does the AI see? | Their own LLM, nothing else. CLAIRE's LLMProviderFactory is configured per realm to point at their Azure OpenAI / Bedrock / self-hosted model. |
| Can we prove what happened? | Yes — automatically. CLAIRE's auditLogs.ts records every mutation; LLMAuditLogger records every LLM call. Per-record evidence pack is generated from these. |
| What if you go away? | Nothing breaks. The realm runs on their hardware. Output (records + audit logs + evidence packs) is theirs forever. Zero ongoing dependency on us. |
| How do we control access? | Their IdP runs the show. CLAIRE's existing SAML auth + withAuth middleware uses AD/Okta. Same offboarding protects MAiGRATE automatically. |
Every piece of MAiGRATE is built from CLAIRE primitives that already exist in production today — running the audit-trail flow at Compliance Group right now.
| MAiGRATE concept | What it actually is in CLAIRE | CLAIRE table / module |
|---|---|---|
| A MAiGRATE install | A CLAIRE deployment plus one Realm provisioned for the customer | core.realms |
| A source/target connector | Custom HTTP Tool + JS Function + Agent | core.tools, core.js_functions, core.agents |
| The migration pipeline | A Flow, executed by the existing FlowExecutionEngine | core.flows, core.flow_nodes |
| Field mapping logic | JS Functions (deterministic) + Skills (LLM templates) + Agent (orchestration) | core.js_functions, core.skills, core.agents |
| Customer-specific behavior | Realm Variables, Knowledge Base entries, per-realm seed data | core.realm_variables, core.knowledge_base |
| The evidence pack | Generated from the existing audit log + LLM audit tables | core.audit_logs, core.llm_audit |
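As a sketch of the last row above, a per-record evidence pack is just a join of the two audit tables on the record identifier. The row shapes below (recordId, action, model, prompt) are illustrative assumptions, not the actual core.audit_logs / core.llm_audit schemas:

```typescript
// Hypothetical shape of rows from the two audit tables. Column names are
// assumptions for illustration; the real CLAIRE schemas may differ.
interface AuditRow { recordId: string; action: string; at: string }
interface LlmAuditRow { recordId: string; model: string; prompt: string }

// Assemble everything that happened to one record: every mutation and every
// LLM call, filtered from the audit tables. This is the evidence-pack idea
// in miniature — no extra bookkeeping, just the logs that already exist.
function buildEvidencePack(
  recordId: string,
  auditLogs: AuditRow[],
  llmAudit: LlmAuditRow[],
) {
  return {
    recordId,
    mutations: auditLogs.filter((r) => r.recordId === recordId),
    llmCalls: llmAudit.filter((r) => r.recordId === recordId),
  };
}
```

Because the pack is derived entirely from append-only logs, it can be regenerated at any time for any record a regulator asks about.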
Layer 1 (deterministic rules): a JS Function in CLAIRE's VM2 sandbox applies field-to-field rules. Fully deterministic. No AI. No data sent anywhere.

Layer 2 (lookup tables): a JS Function reads CSV/markdown from core.knowledge_base and translates source values to target values. Still deterministic. Still no AI.

Layer 3 (AI for the residual): an Agent calls Skills via LLMProviderFactory, using the customer's own LLM. Every call logged by LLMAuditLogger.

Anything that can be mapped deterministically is mapped deterministically. We do not use AI where rules work — it would be slower, more expensive, and harder to defend to a regulator.
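The first two layers can be sketched in a few lines. The field names and lookup values here are invented for illustration; in production the lookup table would be read from core.knowledge_base and the function would run inside the VM2 sandbox:

```typescript
// Illustrative record shapes — not the actual source/target schemas.
type SourceRecord = { severity: string; title: string };
type TargetRecord = { priority: string; summary: string };

// Layer 2: a lookup table. In production this is CSV/markdown loaded from
// core.knowledge_base; the values here are made up.
const severityLookup: Record<string, string> = {
  Minor: "P3",
  Major: "P2",
  Critical: "P1",
};

// Layer 1: pure field-to-field rules. No AI, no network calls, fully
// repeatable — the same input always produces the same output.
function mapRecord(src: SourceRecord): TargetRecord | null {
  const priority = severityLookup[src.severity];
  // Anything the rules cannot map falls through to Layer 3 (the agent).
  if (priority === undefined) return null;
  return { priority, summary: src.title.trim() };
}
```

The null return is the handoff point: only records the deterministic layers cannot resolve ever reach the customer's LLM.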
- Start node
- core.knowledge_base lookups — ~85% done
- LLMProviderFactory for the residual ~15%
- set_state writes per-record evidence from core.audit_logs + core.llm_audit
- End node

No new tables. No new code. Same seed-script pattern as server-ts/database/seeds/0003.maigrate-pipeline.ts — already in production.
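A seed for that pipeline might look roughly like the object below. The node types and config keys are assumptions sketched to show the shape, not the actual CLAIRE flow/node schema:

```typescript
// Hypothetical flow-seed shape; the real seed scripts in
// server-ts/database/seeds may use different types and keys.
interface FlowNodeSeed {
  id: string;
  type: string;
  config?: Record<string, unknown>; // would land in node.data.config
}

const maigratePipeline: { name: string; nodes: FlowNodeSeed[] } = {
  name: "maigrate-pipeline",
  nodes: [
    { id: "start", type: "start" },
    { id: "kb-lookup", type: "js_function", config: { source: "core.knowledge_base" } },
    { id: "llm-residual", type: "agent", config: { provider: "realm-llm" } },
    { id: "evidence", type: "set_state", config: { from: ["core.audit_logs", "core.llm_audit"] } },
    { id: "end", type: "end" },
  ],
};
```

The point of the sketch: the whole migration is data (one seed object per customer realm), not new application code.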
| Configuration item | What it is in CLAIRE | Where it lives |
|---|---|---|
| Source / target endpoints + creds | Realm Variables | core.realm_variables |
| Source extractor + target loader | Custom HTTP Tool + JS Function | core.tools, core.js_functions |
| Field mapping rules (Layer 1) | JS Function | core.js_functions |
| Lookup tables (Layer 2) | CSV / markdown KB entries | core.knowledge_base |
| Validation rules | JS Function | core.js_functions |
| Customer terminology guide | KB markdown injected into Agent's markdownContent | core.knowledge_base, core.agents |
| Mapping Agent + LLM templates | Agent + Skills | core.agents, core.skills |
| Customer's LLM endpoint | Per-realm provider config | core.llm_providers |
| The migration pipeline | Flow + Flow Nodes | core.flows, core.flow_nodes |
| Workflow choices | node.data.config on Flow nodes | core.flow_nodes |
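To make the last row concrete: workflow choices are plain JSON hung on each flow node. parallelRun is the flag used in the Scenario C walkthrough; batchSize is an assumed key added here for illustration:

```typescript
// Workflow choices live in node.data.config — plain JSON, no code changes.
// batchSize is a hypothetical key; parallelRun appears in Scenario C.
const migrateNode = {
  data: {
    config: {
      parallelRun: true, // Scenario C: mirror every source change to the target
      batchSize: 10_000, // batch size used in the Scenario A phased migration
    },
  },
};

// Sketch of how the execution engine could read such a choice.
function isParallelRun(node: { data: { config: Record<string, unknown> } }): boolean {
  return node.data.config.parallelRun === true;
}
```

Changing a customer's workflow means editing this JSON in their realm seed, not shipping a new build.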
Same install pattern. Same delivery shape. Same SE playbook. The next three slides walk through what this looks like for each illustrative scenario.
| Week 1 | Install CLAIRE Docker stack in their data center. Wire SAML to their identity provider. Place credentials in their secrets vault. Provision Customer A's realm in their MSSQL. |
| Week 2 | Field mapping workshop with their QA team. SE drafts the realm seed bundle: realm variables, JS functions, KB lookups, agent system prompt, skill templates, flow definition. |
| Week 3 | Seed the realm. Dry-run on 500 records via FlowExecutionEngine. SMEs review every AI mapping in core.llm_audit. Tighten rules. |
| Weeks 4–6 | Phased migration in 10K-record batches via POST /rest/flow/run. Each batch produces an evidence pack from core.audit_logs + core.llm_audit. Auditors sign off batch by batch. |
| Weeks 1–2 | Same install pattern. Provision Customer B's realm. Configure the MasterControl source connector — Custom HTTP Tool (SOAP-wrapped) + JS Function for binary document handling. |
| Week 3 | Seed realm with version-chain mapping JS Functions, signature-record handling, KB lookups for document types, customer terminology guide injected into the Field Mapper Agent's markdownContent. |
| Weeks 4–5 | Dry-run, mapping refinement, full migration through FlowExecutionEngine, evidence pack delivery, sign-off. |
| Week 1 | Same install. Reuse the TrackWise source connector exactly as configured for Customer A — same rows in core.tools and core.js_functions. |
| Week 2 | Configure ServiceNow GRC target loader — a new Custom HTTP Tool wrapping the ServiceNow REST API. |
| Week 3 | Seed realm with target object model mappings, validation rules, and parallel-run mode enabled in flow node config (node.data.config.parallelRun = true). Configure realm LLM provider to point at their self-hosted model. |
| Weeks 4–6 | Parallel run begins. Every TrackWise change triggers a Flow execution that syncs to ServiceNow within 90 seconds. Runs 90 days, then cutover. |
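The change-triggered sync above reduces to posting one flow-run request per source event. The endpoint path comes from the Scenario A walkthrough; the payload shape is an assumption, since the real /rest/flow/run contract isn't specified here:

```typescript
// Hypothetical payload builder for triggering a flow run from a TrackWise
// change event. The body shape is illustrative, not the documented API.
interface TrackWiseChange { recordId: string; changedAt: string }

function buildFlowRunRequest(flowId: string, change: TrackWiseChange) {
  return {
    url: "/rest/flow/run",
    method: "POST" as const,
    body: {
      flowId,
      input: { recordId: change.recordId, changedAt: change.changedAt },
    },
  };
}
```

Each such request produces its own audit-log rows, so the 90-day parallel run accumulates the same per-record evidence trail as a batch migration.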
| Resource | Min | Recommended |
|---|---|---|
| vCPU | 8 | 16 |
| RAM | 32 GB | 64 GB |
| Disk | 200 GB SSD | 500 GB – 1 TB SSD |
| OS | Linux (Docker host) — RHEL / Ubuntu LTS | |
| Hosting option | Sizing note | Cost |
|---|---|---|
| Customer VMware / on-premise | They already have host capacity | ~$0 marginal |
| Customer Azure / AWS | Equivalent of one mid-size VM + ~500 GB SSD | $300 – $700 / mo |
| Migration window only (3 months) | Scale down or decommission after cutover | $900 – $2,100 total |
The customer's mental anchor is the $1–3M they would otherwise pay a consultancy. Our pricing should land clearly under that.
Numbers below are placeholders calibrated against comparable enterprise software. Sales/finance to validate before any customer conversation.
| MAiGRATE license | $250K – $400K | One-time. Right to run the CLAIRE realm for this migration. |
| Implementation services | $150K – $350K | One-time. Our SE configures the realm, runs dry-runs, supports cutover, hands off. |
| Optional 12-month support | $50K – $80K | Bug fixes, SE hours during stabilization period. |
| Total per customer | $450K – $830K | One-time. Below even the low end of the $1–3M consultancy alternative. |