01 / 12
Continuous GxP Compliance Monitoring
ATR · Audit Trail Review
Automated compliance review across the audit systems you already run — entirely inside your environment.
How CGLabs delivers ATR to three illustrative customers while respecting every data privacy and security constraint a regulated buyer brings to the table.
The Problem
02 / 12

GxP audit preparation is sporadic, manual, and expensive — and still misses what inspectors find.

Typical review cadence: Quarterly — issues compound in between
Typical cost per cycle: $150–500K in internal effort + external audit fees
Event coverage: ~5–10% — humans sample; they cannot review every event
Failure mode: Missed patterns surface during FDA / EMA inspections, when they are expensive to remediate
Security stance: Non-negotiable. Audit events cannot leave their network and cannot use public AI
The Problem
03 / 12

Every existing option fails them.

Internal audit teams

Slow, inconsistent, sample-based. Good at catching what they look for; blind to unknown patterns at scale.

Big-4 auditors

Expensive, sporadic, and every engagement starts from scratch. They leave no operational capability behind.

GRC tools (Archer, MetricStream)

Can store events and workflows — but cannot reason across them. No pattern detection. No AI-grade traceability.

DIY SQL / scripts

Brittle, undocumented, unauditable. When a regulator asks how a finding was reached, nobody can answer.

No vendor today ships a continuous compliance monitoring appliance that runs inside the customer environment, reasons across audit events with AI, and produces regulator-grade evidence for every finding. That gap is what ATR fills.
Illustrative Scenarios
04 / 12

Three illustrative customer scenarios — different sources, different cadences, different AI environments.

Customer A · Top-20 pharma · hypothetical
Audit source: Veeva Vault audit trail
Review cadence: Monthly
Use cases in scope: 9 standard + 2 custom
Hard part: 2 weeks of manual review every month, yet still missing cross-user patterns
AI environment: Their Azure OpenAI tenant

Customer B · Mid-size biotech · hypothetical
Audit source: Jira (change control) + LIMS
Review cadence: Weekly
Use cases in scope: 6 custom — tailored to their SDLC
Hard part: No audit trail defensibility on change control today; regulator flagged it
AI environment: Self-hosted (no cloud AI)

Customer C · Specialty medical device · hypothetical
Audit source: Veeva Vault + TrackWise CQ
Review cadence: Monthly + on-demand
Use cases in scope: 9 standard + 3 device-specific
Hard part: Cross-system anomalies missed because sources are reviewed separately
AI environment: Their AWS Bedrock
Illustrative Scenarios
05 / 12
Three different audit sources. Three different cadences. Three different custom use cases. Three different AI environments.
Looks like three projects.
It is one product, configured three ways.
Customer Concerns & Answers
06 / 12

Six questions every regulated buyer asks — and our structural answers.

Where does our data go? Nowhere. ATR installs as a Docker appliance inside the customer's data center. Audit events never leave their network.
Who has access to it? Only their own staff. Our SE configures alongside them; we never need access to their events to deliver.
What does the AI see? Only their LLM environment: Azure OpenAI, AWS Bedrock, or self-hosted. We never bring an AI vendor into their stack (a configuration sketch follows this list).
Can we prove what happened? Yes — for every finding: original event + cross-reference reasoning + AI rationale + severity. Defensible to any regulator.
What if you go away? Nothing breaks. The appliance runs independently. Historical findings remain in their database forever.
How do we control access? Their identity provider: Azure AD / Okta via SAML. Same offboarding as their other quality systems.
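To make the "what does the AI see" answer concrete, here is a minimal sketch of how an appliance like ATR might route every AI call to the customer's own LLM endpoint based on local configuration. The class, config keys, and provider labels are illustrative assumptions, not ATR's actual code or configuration surface.

```python
# Hypothetical sketch: every AI call targets the customer's own LLM
# environment. Provider names and config keys are assumptions for
# illustration only.
from dataclasses import dataclass

@dataclass
class LLMConfig:
    provider: str   # "azure_openai" | "aws_bedrock" | "self_hosted"
    endpoint: str   # always a customer-controlled URL
    model: str

def complete(cfg: LLMConfig, prompt: str) -> str:
    """Send one prompt to the customer's own LLM endpoint.

    Note what is absent: no vendor-hosted endpoint and no third-party
    AI service. Every branch targets infrastructure the customer owns.
    Transport and auth details (API keys, SigV4) are omitted here.
    """
    known = {"azure_openai", "aws_bedrock", "self_hosted"}
    if cfg.provider not in known:
        raise ValueError(f"unknown provider: {cfg.provider}")
    # A real appliance would POST to cfg.endpoint with the
    # provider-appropriate auth; here we only show the routing decision.
    return f"[{cfg.provider}] {cfg.model} @ {cfg.endpoint} <- {len(prompt)} chars"

# Example: a fully self-hosted setup like Customer B's (names invented).
cfg = LLMConfig("self_hosted", "https://llm.internal.example", "llama-3-70b")
print(complete(cfg, "Review event batch 2024-07..."))
```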
How ATR Works
07 / 12

Nine compliance use cases, evaluated against every audit event, every cycle.

ATR fetches audit events, cross-references each user against the customer's knowledge graph (roles, training, tickets, SOPs), and evaluates each event against nine standard use cases. Every finding is logged with full reasoning. A minimal code sketch of this evaluation loop follows the use-case list below.

UC-01 · Unauthorized User (Critical): User not found in knowledge graph or lacks system authorization
UC-02 · Untrained User (Major): Training gaps for the role this user performed actions in
UC-03 · Uncontrolled Changes (Major): Edits to config or security objects without approved tickets
UC-04 · Undocumented Deletions (Major): Delete actions without change tickets or approvals
UC-05 · Off-Hours Activity (Minor): Actions outside customer's defined business hours policy
UC-06 · Failed Logins (Major): Repeated auth failures or unusual credential patterns
UC-07 · Bulk Operations (Major): Same user + action repeated within a short window
UC-08 · Audit Trail Gaps (Minor): Unexplained gaps during business hours
UC-09 · Regulatory Record Mods (Critical): Edits to approved/locked records — the highest-risk category
Delivery Approach
08 / 12

Every customer engagement follows the same shape. The differences are configuration, not code.

01 · Install: Docker stack in their data center, SAML wired
02 · Seed KG: Users, roles, training, SOPs, tickets loaded
03 · Baseline: First cycle on recent history; establish normal
04 · Calibrate: SMEs review findings, tune thresholds
05 · Operate: Ongoing weekly/monthly review cycles
06 · Hand off: Customer staff run cycles independently

The knowledge graph is the critical asset — it encodes who is authorized, who is trained, which SOPs govern which systems. Keeping it current is a customer operational commitment, not a software install.
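As a concrete illustration of what seeding the knowledge graph (step 02) might involve, the sketch below indexes users, roles, training records, and tickets into a simple in-memory structure. The schema is a hypothetical simplification; a production graph would live in a database and be refreshed continuously from HR, LMS, and ticketing feeds.

```python
# Hypothetical sketch of the knowledge-graph seed. The schema is an
# illustrative simplification, not ATR's actual data model.
def seed_knowledge_graph(users, roles, trainings, tickets):
    """Index who is authorized, who is trained, and which tickets
    cover which users — the cross-references the review cycle needs."""
    kg = {}
    for u in users:
        kg[u["id"]] = {
            "roles": set(u["roles"]),
            "authorized_systems": {s for r in u["roles"]
                                   for s in roles[r]["systems"]},
            "completed_training": set(),
            "tickets": set(),
        }
    for t in trainings:   # e.g. LMS completion records
        kg[t["user"]]["completed_training"].add(t["course"])
    for t in tickets:     # e.g. approved change-control tickets
        kg[t["assignee"]]["tickets"].add(t["id"])
    return kg

# Example seed with one user (all names invented).
kg = seed_knowledge_graph(
    users=[{"id": "jdoe", "roles": ["qa_reviewer"]}],
    roles={"qa_reviewer": {"systems": ["veeva"]}},
    trainings=[{"user": "jdoe", "course": "SOP-012"}],
    tickets=[{"assignee": "jdoe", "id": "CHG-1042"}],
)
print(kg["jdoe"]["authorized_systems"])   # {'veeva'}
```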

Cost & Pricing
09 / 12

Customer-side cost is trivial. Pricing is subscription-native.

Customer Infrastructure
Under $1K / month to host.
Existing on-premise · They already have host capacity · ~$0
Customer Azure / AWS · One VM + 500 GB SSD · $500–700/mo
Our-side hosting · ATR runs in their environment, not ours · $0
Pricing Model
Annual subscription — per audit source.
Platform license (annual) · Right to run the ATR appliance · $150–250K
Per additional audit source · Add Jira, MES, TrackWise, etc. · +$50–100K
Implementation services · One-time: install, seed, calibrate, hand off · $100–200K

Numbers are placeholders. Sales/finance to validate before quoting any customer.

Deliverables Per Cycle
10 / 12

What the customer gets at the end of every review cycle.

1 · Formal compliance review report: Executive summary + severity breakdown + regulatory references (21 CFR Part 11, EU Annex 11)
2 · Findings table with full evidence: Every finding traced back to the originating events, the cross-reference that caught it, and the AI reasoning — defensible to a regulator
3 · Prioritized remediation actions: Severity-based due dates (Critical: 7d · Major: 30d · Minor: 90d) and owner assignments; a worked sketch of the mapping follows this list
4 · Trend analysis vs prior cycles: Findings by severity over time, recurring patterns, closure rates — their compliance posture as a living metric
5 · Audit trail of the review itself: Every fetch, every cross-reference query, every AI call — their regulators can inspect how ATR reached its conclusions
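The severity-to-due-date mapping in item 3 is simple enough to show directly. The SLA values (7/30/90 days) come from the slide above; the function and field names are illustrative.

```python
# Due-date assignment per the severity SLAs above (Critical 7d,
# Major 30d, Minor 90d). Function and field names are illustrative.
from datetime import date, timedelta

SLA_DAYS = {"Critical": 7, "Major": 30, "Minor": 90}

def remediation_due_date(severity: str, found_on: date) -> date:
    """Return the remediation deadline for a finding of this severity."""
    return found_on + timedelta(days=SLA_DAYS[severity])

print(remediation_due_date("Critical", date(2024, 7, 1)))  # 2024-07-08
```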
The Real Advantage
11 / 12
ATR never writes to customer systems. It reads audit events, cross-references the knowledge graph, and produces findings.
Read-only by design.
We cannot corrupt what we cannot modify.
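One way such a guarantee can be enforced structurally rather than by policy is to open every source connection in read-only mode, so the database itself rejects writes. The sketch below shows the idea with SQLite's read-only URI flag; it is an illustration of the principle, not a statement about how ATR connects to Veeva, Jira, or any LIMS.

```python
# Illustrative only: "read-only by design" enforced at the connection
# layer, shown with SQLite's ro URI flag.
import os
import sqlite3
import tempfile

# Create a throwaway events table so the read-only open has a target.
path = os.path.join(tempfile.mkdtemp(), "audit_trail.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE events (id INTEGER)")
rw.commit()
rw.close()

# The appliance's view of the source: read-only at the database layer.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
print(ro.execute("SELECT COUNT(*) FROM events").fetchone())  # reads work
try:
    ro.execute("INSERT INTO events VALUES (1)")              # writes cannot
except sqlite3.OperationalError as exc:
    print("rejected by the database itself:", exc)
```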
Operating Principle
12 / 12
Across all three customers, the working pattern is the same:
Their installation. Their network. Their audit data. Their AI. Their identity. Their people sign off. Their evidence to keep.
That sentence is what makes ATR acceptable to a regulated buyer. It is not a feature list — it is the operating model. Everything else is built on top of that foundation.