01 / 20
V2 · Technical
MAiGRATE
A Migration Approach Built on CLAIRE Primitives — While Respecting Customer Data Privacy and Security
How we would deliver MAiGRATE to three illustrative customers, using the CLAIRE platform we already run in production today.
The Problem
02 / 20

Every regulated enterprise has a migration problem they cannot solve safely today.

Typical timeline · 12–18 months
Typical cost · $1–3M per migration
Failure rate · ~60% miss original timeline
Auditor acceptance · Often rejected — manual transformations are not defensibly traceable
Security stance · Non-negotiable: data cannot leave their network, cannot use public AI, cannot ship schemas to a SaaS vendor
The Problem
03 / 20

Every existing option fails them.

Big consultancies

Accenture, Deloitte, IBM. Slow, expensive, every project rebuilt from scratch — knowledge walks out the door.

Generic ETL tools

Informatica, Talend. Don't understand GxP, no audit defensibility, no domain knowledge of quality systems.

Veeva services

Veeva-only, capacity-constrained, expensive. Doesn't help non-Veeva targets.

DIY scripts

Fast but unauditable. Regulators reject them. Tribal knowledge that doesn't survive turnover.

No vendor in this market today ships a productized migration appliance that runs inside the customer's environment, uses AI safely, and produces audit-grade evidence. That gap is what MAiGRATE fills.
Illustrative Scenarios
04 / 20

Three illustrative customer scenarios — different sources, different targets, different cutover strategies.

Customer A
Top-20 pharma · hypothetical
Source
TrackWise CQ
Target
Veeva Vault QMS
Records
160,000 CAPAs, deviations, complaints
Hard part
Inconsistent narrative quality across 8 years; auditor demands traceability
Cutover
Phased migration over 6 weeks
AI environment
Their Azure OpenAI tenant
Customer B
Mid-size biotech · hypothetical
Source
MasterControl Documents
Target
Veeva QualityDocs
Records
12,000 SOPs + version history + signature records + training assignments
Hard part
Bit-for-bit preservation of document version chains and signatures
Cutover
Big-bang cutover; archive source
AI environment
Their AWS Bedrock
Customer C
Specialty manufacturer · hypothetical
Source
TrackWise CQ
Target
ServiceNow GRC
Records
25,000 quality events
Hard part
Source and target use different object models; business must keep running during cutover
Cutover
90-day parallel run, then cutover
AI environment
Self-hosted (no cloud AI)
Illustrative Scenarios
05 / 20
Three different sources. Three different targets. Three different volumes. Three different cutover strategies. Three different AI environments.
Looks like three projects.
It is one product, configured three ways.
Customer Concerns
06 / 20

What every regulated buyer asks before they let any new software near their data.

Where does our data go? Will any record, attachment, or schema leave our network — even temporarily, even to your cloud?
Who has access? Can your engineers see our data? Can your AI vendor see it? Can anyone outside our company see it?
What does the AI see? If you use AI for mapping, does our quality information flow through OpenAI, Anthropic, or any external service?
Can we prove what happened? When a regulator asks why a record was transformed a particular way, can we show them — for every record?
What if you go away? If your company disappears next year, does our migration stop working? Are we locked in?
How do we control access? Can our existing identity system (AD, Okta) control who logs in? Can we revoke access using our normal offboarding process?
How MAiGRATE Answers
07 / 20

Each answer is a structural commitment, not a feature.

Where does data go? Nowhere. CLAIRE app + MSSQL + Redis run as a Docker stack inside their data center. No callback. No upload. No telemetry.
Who has access? Only their staff. Their CLAIRE realm runs on their MSSQL. Our SE configures alongside them, on their network, under their security policy.
What does the AI see? Their own LLM, nothing else. CLAIRE's LLMProviderFactory is configured per realm to point at their Azure OpenAI / Bedrock / self-hosted model.
Can we prove what happened? Yes — automatically. CLAIRE's auditLogs.ts records every mutation; LLMAuditLogger records every LLM call. A per-record evidence pack is generated from these.
What if you go away? Nothing breaks. The realm runs on their hardware. The output (records + audit logs + evidence packs) is theirs forever. Zero ongoing dependency on us.
How do we control access? Their IdP runs the show. CLAIRE's existing SAML auth + withAuth middleware uses AD/Okta. The same offboarding process protects MAiGRATE automatically.
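A minimal sketch of what per-realm provider selection could look like. The provider kinds, config shape, and `resolveEndpoint` helper are illustrative assumptions, not CLAIRE's actual LLMProviderFactory interface; only the idea of per-realm LLM configuration comes from the deck.

```typescript
// Hypothetical per-realm LLM provider selection. All names and shapes here
// are illustrative, not CLAIRE's real API.
type ProviderKind = "azure-openai" | "bedrock" | "self-hosted";

interface RealmLlmConfig {
  realmId: string;
  kind: ProviderKind;
  endpoint: string; // always customer-controlled infrastructure
}

const realmConfigs: RealmLlmConfig[] = [
  { realmId: "customer-a", kind: "azure-openai", endpoint: "https://a.openai.azure.example" },
  { realmId: "customer-b", kind: "bedrock", endpoint: "https://bedrock.us-east-1.amazonaws.example" },
  { realmId: "customer-c", kind: "self-hosted", endpoint: "https://llm.customer-c.internal" },
];

// Every LLM call resolves its endpoint from the realm's own config, so quality
// data only ever flows to infrastructure the customer controls.
function resolveEndpoint(realmId: string): string {
  const cfg = realmConfigs.find((c) => c.realmId === realmId);
  if (!cfg) throw new Error(`no LLM provider configured for realm ${realmId}`);
  return cfg.endpoint;
}
```

The point of the sketch: there is no default external provider to fall back to; an unconfigured realm fails closed rather than routing data anywhere else.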
CLAIRE Mapping
08 / 20

MAiGRATE is not new engineering. It is a packaged CLAIRE deployment.

Every piece of MAiGRATE is built from CLAIRE primitives that already exist in production today — running the audit-trail flow at Compliance Group right now.

MAiGRATE concept · What it actually is in CLAIRE · CLAIRE table / module
A MAiGRATE install · A CLAIRE deployment plus one Realm provisioned for the customer · core.realms
A source/target connector · Custom HTTP Tool + JS Function + Agent · core.tools, core.js_functions, core.agents
The migration pipeline · A Flow, executed by the existing FlowExecutionEngine · core.flows, core.flow_nodes
Field mapping logic · JS Functions (deterministic) + Skills (LLM templates) + Agent (orchestration) · core.js_functions, core.skills, core.agents
Customer-specific behavior · Realm Variables, Knowledge Base entries, per-realm seed data · core.realm_variables, core.knowledge_base
The evidence pack · Generated from the existing audit log + LLM audit tables · core.audit_logs, core.llm_audit
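The evidence-pack generation in the last row could be sketched as a filter over the two audit tables, keyed by record id. The row shapes and the `buildEvidencePack` helper are hypothetical; only the table names core.audit_logs and core.llm_audit come from the deck.

```typescript
// Hypothetical evidence-pack assembly: select both audit trails for one record.
interface AuditRow { recordId: string; action: string; at: string }
interface LlmAuditRow { recordId: string; model: string; prompt: string; response: string }

interface EvidencePack {
  recordId: string;
  mutations: AuditRow[];   // every mutation, from core.audit_logs
  llmCalls: LlmAuditRow[]; // every LLM call, from core.llm_audit
}

function buildEvidencePack(recordId: string, auditLogs: AuditRow[], llmAudit: LlmAuditRow[]): EvidencePack {
  return {
    recordId,
    mutations: auditLogs.filter((r) => r.recordId === recordId),
    llmCalls: llmAudit.filter((r) => r.recordId === recordId),
  };
}

const pack = buildEvidencePack(
  "CAPA-0042",
  [
    { recordId: "CAPA-0042", action: "field_mapped", at: "2025-01-01T00:00:00Z" },
    { recordId: "CAPA-0099", action: "field_mapped", at: "2025-01-01T00:00:01Z" },
  ],
  [{ recordId: "CAPA-0042", model: "customer-llm", prompt: "...", response: "..." }],
);
```

Because the pack is derived from logs that were written at migration time, it is evidence of what actually happened, not a report reconstructed after the fact.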
How Mapping Works
09 / 20

Field mapping happens in three layers. AI handles only ~15%, by design.

LAYER 01
Direct rule mapping
~70%
A JS Function in CLAIRE's VM2 sandbox applies field-to-field rules. Fully deterministic. No AI. No data sent anywhere.
LAYER 02
Lookup table mapping
~15%
A JS Function reads CSV/markdown from core.knowledge_base and translates source values to target values. Still deterministic. Still no AI.
LAYER 03
AI-assisted mapping
~15%
An Agent calls Skills via LLMProviderFactory, using the customer's own LLM. Every call logged by LLMAuditLogger.

Anything that can be mapped deterministically is mapped deterministically. We do not use AI where rules work — it would be slower, more expensive, and harder to defend to a regulator.
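The three layers above can be sketched as follows. Field names, rule tables, and helper signatures are illustrative assumptions for the sketch, not CLAIRE's real JS Function API.

```typescript
// Three-layer field mapping sketch. Everything here is illustrative.
type SourceRecord = Record<string, string>;
type MappedRecord = Record<string, string>;

// Layer 1: deterministic field-to-field rules (~70% of fields). No AI.
const directRules: Record<string, string> = {
  event_id: "record_number",
  event_title: "title",
};

function applyDirectRules(src: SourceRecord): MappedRecord {
  const out: MappedRecord = {};
  for (const [from, to] of Object.entries(directRules)) {
    if (src[from] !== undefined) out[to] = src[from];
  }
  return out;
}

// Layer 2: lookup-table translation (~15%). Still deterministic, still no AI.
const severityLookup: Record<string, string> = {
  "1 - Critical": "critical",
  "2 - Major": "major",
  "3 - Minor": "minor",
};

function applyLookups(src: SourceRecord, out: MappedRecord): string[] {
  const unresolved: string[] = [];
  const mapped = src.severity !== undefined ? severityLookup[src.severity] : undefined;
  if (mapped !== undefined) out.severity = mapped;
  else if (src.severity !== undefined) unresolved.push("severity"); // falls through to Layer 3
  return unresolved;
}

// Layer 3: only the residual, unresolved fields are handed to the AI agent.
function fieldsForAi(src: SourceRecord, unresolved: string[]): SourceRecord {
  return Object.fromEntries(unresolved.map((f) => [f, src[f] ?? ""]));
}

const src: SourceRecord = {
  event_id: "CAPA-0042",
  event_title: "Line 3 deviation",
  severity: "2 - Major",
};
const mapped = applyDirectRules(src);
const residual = applyLookups(src, mapped);
const aiInput = fieldsForAi(src, residual);
```

Note the ordering: the AI never sees a field that Layers 1 and 2 already resolved, which is what keeps the AI share at ~15% by construction rather than by policy.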

How Mapping Works
10 / 20

The mapping flow runs in CLAIRE's existing FlowExecutionEngine.

Start · Standard Start node
JS · Source extractor · JS Function reads from source via Custom HTTP Tool
JS · Layer 1 mapper · JS Function applies direct rules — ~70% of fields done
JS · Layer 2 mapper · JS Function reads core.knowledge_base lookups — ~85% done
AI · Field Mapper Agent · Agent calls Skills via LLMProviderFactory for the residual ~15%
JS · Validator · JS Function checks the assembled record against target schema
HTTP · Target loader · Custom HTTP Tool writes to target system
SET · Record evidence · set_state writes per-record evidence from core.audit_logs + core.llm_audit
End · Standard End node
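The node chain above could be represented roughly like this. The field names (`type`, `config`, `next`) are an assumed shape for illustration, not the actual core.flow_nodes schema.

```typescript
// Illustrative flow definition: a linear chain of typed nodes.
interface FlowNode {
  id: string;
  type: "start" | "js_function" | "agent" | "http_tool" | "set_state" | "end";
  label: string;
  next?: string; // id of the following node; absent on the End node
}

const migrationFlow: FlowNode[] = [
  { id: "start", type: "start", label: "Start", next: "extract" },
  { id: "extract", type: "js_function", label: "Source extractor", next: "layer1" },
  { id: "layer1", type: "js_function", label: "Layer 1 mapper", next: "layer2" },
  { id: "layer2", type: "js_function", label: "Layer 2 mapper", next: "ai_map" },
  { id: "ai_map", type: "agent", label: "Field Mapper Agent", next: "validate" },
  { id: "validate", type: "js_function", label: "Validator", next: "load" },
  { id: "load", type: "http_tool", label: "Target loader", next: "evidence" },
  { id: "evidence", type: "set_state", label: "Record evidence", next: "end" },
  { id: "end", type: "end", label: "End" },
];

// Walk the chain from Start to confirm every node is reachable and the flow terminates.
function walk(nodes: FlowNode[]): string[] {
  const byId = new Map(nodes.map((n) => [n.id, n]));
  const path: string[] = [];
  let cur = byId.get("start");
  while (cur) {
    path.push(cur.id);
    cur = cur.next ? byId.get(cur.next) : undefined;
  }
  return path;
}
```

A linear chain like this is the simplest case; per-customer differences live in node configuration, not in the chain's shape.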
Loading Configuration
11 / 20

"Loading the customer's configuration" means seeding rows into the standard CLAIRE tables.

No new tables. No new code. Same seed-script pattern as server-ts/database/seeds/0003.maigrate-pipeline.ts — already in production.

Configuration item · What it is in CLAIRE · Where it lives
Source / target endpoints + creds · Realm Variables · core.realm_variables
Source extractor + target loader · Custom HTTP Tool + JS Function · core.tools, core.js_functions
Field mapping rules (Layer 1) · JS Function · core.js_functions
Lookup tables (Layer 2) · CSV / markdown KB entries · core.knowledge_base
Validation rules · JS Function · core.js_functions
Customer terminology guide · KB markdown injected into Agent's markdownContent · core.knowledge_base, core.agents
Mapping Agent + LLM templates · Agent + Skills · core.agents, core.skills
Customer's LLM endpoint · Per-realm provider config · core.llm_providers
The migration pipeline · Flow + Flow Nodes · core.flows, core.flow_nodes
Workflow choices · node.data.config on Flow nodes · core.flow_nodes
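A seed bundle for one realm could be sketched as a list of rows targeting the standard tables. The column names and `buildSeedBundle` helper are illustrative assumptions following the seed-script pattern the deck references; only the table names come from the slides.

```typescript
// Hypothetical realm seed bundle: plain rows into existing CLAIRE tables.
interface SeedRow {
  table: string;
  row: Record<string, unknown>;
}

function buildSeedBundle(realmId: string): SeedRow[] {
  return [
    // Source/target endpoints + creds live in realm variables.
    {
      table: "core.realm_variables",
      row: { realm_id: realmId, key: "SOURCE_BASE_URL", value: "https://trackwise.example.internal" },
    },
    // Layer 2 lookup tables are plain CSV knowledge-base entries.
    {
      table: "core.knowledge_base",
      row: { realm_id: realmId, title: "Severity lookup", format: "csv", content: "source,target\n2 - Major,major" },
    },
    // Layer 1 mapping rules ship as a JS Function.
    {
      table: "core.js_functions",
      row: { realm_id: realmId, name: "layer1_mapper", body: "/* direct rules */" },
    },
    // The pipeline itself is a flow row (nodes seeded separately).
    {
      table: "core.flows",
      row: { realm_id: realmId, name: "migration_pipeline" },
    },
  ];
}

const bundle = buildSeedBundle("customer-a-realm");
```

The design point is that the bundle is data, not code: diffing two customers' configurations is diffing two sets of rows.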
Delivery Approach
12 / 20

Every customer engagement follows the same shape.
The differences are realm configuration, not code.

01
Install
02
Configure
03
Dry-run
04
Migrate
05
Hand off

Same install pattern. Same delivery shape. Same SE playbook. The next three slides walk through what this looks like for each illustrative scenario.

Delivery Approach
13 / 20
Week 1 · Install CLAIRE Docker stack in their data center. Wire SAML to their identity provider. Place credentials in their secrets vault. Provision Customer A's realm in their MSSQL.
Week 2 · Field mapping workshop with their QA team. SE drafts the realm seed bundle: realm variables, JS functions, KB lookups, agent system prompt, skill templates, flow definition.
Week 3 · Seed the realm. Dry-run on 500 records via FlowExecutionEngine. SMEs review every AI mapping in core.llm_audit. Tighten rules.
Weeks 4–6 · Phased migration in 10K-record batches via POST /rest/flow/run. Each batch produces an evidence pack from core.audit_logs + core.llm_audit. Auditors sign off batch by batch.
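The batching in Weeks 4–6 can be sketched as a plan of run payloads, one per POST /rest/flow/run call. The payload shape (`flowId`, `input.offset`/`limit`) is an assumption for illustration, not the documented API.

```typescript
// Split a migration into fixed-size batches, each becoming one flow run.
interface BatchRun {
  flowId: string;
  input: { offset: number; limit: number };
}

function planBatches(totalRecords: number, batchSize: number, flowId: string): BatchRun[] {
  const runs: BatchRun[] = [];
  for (let offset = 0; offset < totalRecords; offset += batchSize) {
    runs.push({ flowId, input: { offset, limit: Math.min(batchSize, totalRecords - offset) } });
  }
  return runs;
}

// Customer A: 160,000 records in 10K batches → 16 runs, each producing its own
// evidence pack for batch-by-batch auditor sign-off.
const runs = planBatches(160_000, 10_000, "migration_pipeline");
```

Batching this way is what makes phased sign-off possible: a batch whose evidence pack fails review can be rerun without touching the fifteen others.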
Delivery Approach
14 / 20
Weeks 1–2 · Same install pattern. Provision Customer B's realm. Configure the MasterControl source connector — Custom HTTP Tool (SOAP-wrapped) + JS Function for binary document handling.
Week 3 · Seed realm with version-chain mapping JS Functions, signature-record handling, KB lookups for document types, customer terminology guide injected into the Field Mapper Agent's markdownContent.
Weeks 4–5 · Dry-run, mapping refinement, full migration through FlowExecutionEngine, evidence pack delivery, sign-off.
Delivery Approach
15 / 20
Week 1 · Same install. Reuse the TrackWise source connector exactly as configured for Customer A — same rows in core.tools and core.js_functions.
Week 2 · Configure ServiceNow GRC target loader — a new Custom HTTP Tool wrapping the ServiceNow REST API.
Week 3 · Seed realm with target object model mappings, validation rules, and parallel-run mode enabled in flow node config (node.data.config.parallelRun = true). Configure realm LLM provider to point at their self-hosted model.
Weeks 4–6 · Parallel run begins. Every TrackWise change triggers a Flow execution that syncs to ServiceNow within 90 seconds. Runs 90 days, then cutover.
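The parallel-run behavior can be sketched as a node configuration plus a routing rule. The `node.data.config.parallelRun` flag comes from the slide; the surrounding shape, the `syncDeadlineSeconds` field, and the helper are illustrative assumptions.

```typescript
// Illustrative parallel-run node config for the Customer C scenario.
interface LoaderNode {
  data: { config: { parallelRun: boolean; syncDeadlineSeconds: number } };
}

const loaderNode: LoaderNode = {
  data: { config: { parallelRun: true, syncDeadlineSeconds: 90 } },
};

// While parallelRun is true, the source (TrackWise) stays the system of record
// and every change is synced to the target (ServiceNow); flipping the flag to
// false is the cutover.
function systemOfRecord(config: { parallelRun: boolean }): "source" | "target" {
  return config.parallelRun ? "source" : "target";
}
```

Cutover being a configuration flip rather than a code change is what lets the business keep running through the 90-day window.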
Infrastructure
16 / 20

MAiGRATE runs entirely inside the customer's environment.
Footprint: one VM running the CLAIRE Docker stack.

Source System → CLAIRE Docker Stack (app + MSSQL + Redis) → Target System
Customer's LLM endpoint
1 VM. Docker-based. Network access to source, target, LLM, and identity. That's it.
Resource · Min · Recommended
vCPU · 8 · —
RAM · 32 GB · —
Disk · 200 GB SSD · —
OS · Linux (Docker host) — RHEL / Ubuntu LTS
Infrastructure Cost
17 / 20

Estimated infrastructure cost in the customer environment.

Customer VMware / on-premise · They already have host capacity · ~$0 marginal
Customer Azure / AWS · Equivalent of one mid-size VM + ~500 GB SSD · $300 – $700 / mo
Migration window only (3 months) · Scale down or decommission after cutover · $900 – $2,100 total
Customer-side infrastructure: under $1K / month for the duration of the migration. On our side: zero.
Pricing Model
18 / 20

Three credible pricing structures. We recommend Option A.

The customer's mental anchor is the $1–3M they would otherwise pay a consultancy. Our pricing should land clearly under that.

Option A
One-time license + implementation
One-time MAiGRATE license plus one-time implementation services, with optional first-year support.
Recommended — the clean three-line invoice detailed on the next slide
Option B
Annual subscription
Annual license for as long as the appliance runs, plus implementation services.
Use only when there is real ongoing work · e.g. parallel-run scenarios
Option C
Per-record pricing
Fixed price per record migrated. Implementation included or capped.
Good for very large migrations where volume is the conversation
Pricing Model
19 / 20

Recommended structure — clean three-line invoice.

Numbers below are placeholders calibrated against comparable enterprise software. Sales/finance to validate before any customer conversation.

MAiGRATE license · $250K – $400K · One-time. Right to run the CLAIRE realm for this migration.
Implementation services · $150K – $350K · One-time. Our SE configures the realm, runs dry-runs, supports cutover, hands off.
Optional 12-month support · $50K – $80K · Bug fixes, SE hours during stabilization period.
Total per customer · $450K – $830K · Roughly half the cost of the consultancy alternative ($1–3M).
Operating Principle
20 / 20
Their CLAIRE realm. Their network. Their MSSQL. Their AI provider. Their identity provider.
Their people sign off. Their evidence to keep.
MAiGRATE is not new engineering. Every primitive we use — Realms, Agents, Tools, Skills, JS Functions, Flows, Knowledge Base, FlowExecutionEngine, LLMProviderFactory, LLMAuditLogger, auditLogs.ts — is already in production in CLAIRE today. MAiGRATE is the same machine, packaged for migration use cases, sold to customers who refuse to put their data in our infrastructure.