This is a personal portfolio project — not a commercial product. No real client data is used.

Portfolio project · Demo available · No real data

I built an AI audit workpaper engine.

AlainIQ automates the most time-consuming part of audit fieldwork — from evidence upload to formatted, multi-tab Excel export — using a structured 5-stage AI pipeline built in TypeScript.

Multi-stage AI pipeline · Excel workpaper export · Vision + structured JSON · Full-stack TypeScript
5-Stage AI Pipeline · Preflight → workpaper compile
Multi-tab Excel Export · ExcelJS with custom formatting
PDF + XLSX Evidence Types · plus images, text, and email files
Validated JSON Outputs · Structured schema per stage
SHA-256 Evidence Integrity · Hash on every upload
Architecture

The 5-stage AI pipeline

Every analysis run passes through the same deterministic sequence. No single-shot prompting — each stage has a defined input schema, structured output, and failure mode. The pipeline can be resumed, retried per stage, and queried for debugging.

Stage 01 · Preflight

Evidence completeness and quality are scored before analysis begins. Poor-quality or irrelevant uploads are flagged early — no wasted AI calls.

Output: Quality score + skip/proceed
Stage 02 · Evidence Extract

GPT-4o vision parses PDFs, spreadsheets, images, and email into validated structured JSON. Each file type has a dedicated extraction schema.

Output: Typed evidence objects
Stage 03 · Step Evaluate

Every test criterion is independently evaluated against extracted evidence. Each step yields a pass/fail, confidence score, and full audit narrative.

Output: Step conclusions + narratives
Stage 04 · QC Review

Cross-step consistency pass. Contradictions between steps, missing evidence references, and unsupported conclusions are automatically flagged.

Output: QC score + exception flags
Stage 05 · Workpaper Compile

ExcelJS assembles the formatted, multi-tab workpaper pack: testing WP, evidence log, exceptions register, remediation plan, and sign-off sheet.

Output: Formatted .xlsx workpaper
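The resumable, per-stage design above can be sketched in TypeScript. This is a minimal illustration, not AlainIQ's actual code — the names `Stage`, `RunState`, and `runPipeline` are hypothetical:

```typescript
// Minimal sketch of a resumable, checkpointed pipeline.
// Each stage takes typed input and produces typed output; completed
// stage outputs are persisted so a failed run can resume mid-pipeline.
type StageName = "preflight" | "extract" | "evaluate" | "qc" | "compile";

interface Stage<In, Out> {
  name: StageName;
  run: (input: In) => Out; // typed input → typed output
}

interface RunState {
  completed: Partial<Record<StageName, unknown>>; // per-stage checkpoints
}

function runPipeline(stages: Stage<any, any>[], state: RunState, seed: unknown): unknown {
  let current = seed;
  for (const stage of stages) {
    if (stage.name in state.completed) {
      current = state.completed[stage.name]; // resume: reuse prior output
      continue;
    }
    current = stage.run(current);
    state.completed[stage.name] = current; // checkpoint after each stage
  }
  return current;
}

// Tiny demo: resume a run where "preflight" already completed,
// so only "extract" executes on this pass.
const stages: Stage<any, any>[] = [
  { name: "preflight", run: (files: string[]) => ({ score: files.length > 0 ? 0.9 : 0 }) },
  { name: "extract", run: (pre: { score: number }) => ({ ok: pre.score > 0.5 }) },
];
const state: RunState = { completed: { preflight: { score: 0.9 } } };
const result = runPipeline(stages, state, ["evidence.pdf"]);
```

Checkpointing each stage's output is also what makes the pipeline "queryable for debugging": any intermediate result can be inspected after the fact.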

Why this architecture?

Structured over conversational. Each stage outputs a validated JSON schema instead of free-form text. This makes the AI output deterministic enough to drive downstream logic — exception flags, QC comparisons, and workpaper cell values all come from typed objects, not parsed prose.

Staged, not monolithic. Splitting the pipeline into stages lets each one fail, be retried, or be skipped independently. The preflight check alone saves significant cost — low-quality evidence uploads are rejected before any expensive model calls.
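Per-stage retry can be sketched as a small wrapper. This is an assumption about shape, not the project's implementation, and it's synchronous for brevity where the real pipeline would be async:

```typescript
// Sketch of per-stage retry: a transient failure retries only that
// stage, not the whole run. maxAttempts is an illustrative default.
function withRetry<T>(stageName: string, fn: () => T, maxAttempts = 3): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err; // in a real pipeline: log, back off, then retry
    }
  }
  throw new Error(`stage ${stageName} failed after ${maxAttempts} attempts: ${String(lastError)}`);
}

// Demo: a stage that fails twice on transient errors, then succeeds.
let calls = 0;
const extracted = withRetry("evidence-extract", () => {
  calls += 1;
  if (calls < 3) throw new Error("transient model error");
  return { ok: true };
});
```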

Excel is harder than it looks. Audit workpapers have specific formatting requirements — merged cells, locked headers, conditional colours, reference formulae. ExcelJS generates the file programmatically, cell by cell, rather than filling a template — which means the output format can be changed in code without maintaining a separate template file.

Evidence integrity as a first-class concern. Every file gets a SHA-256 hash on upload. The hash is stored alongside the evidence record and re-verified at analysis time, so the workpaper can document that evidence used in the analysis matches what was uploaded.
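The hash-on-upload, verify-at-analysis pattern is straightforward with Node's built-in crypto module. Function and field names below are illustrative:

```typescript
import { createHash } from "node:crypto";

// Sketch of the evidence-integrity pattern the text describes:
// hash on upload, store the hash with the record, re-verify later.
function sha256Hex(contents: Buffer): string {
  return createHash("sha256").update(contents).digest("hex");
}

interface EvidenceRecord {
  filename: string;
  sha256: string; // stored alongside the evidence at upload time
}

function storeEvidence(filename: string, contents: Buffer): EvidenceRecord {
  return { filename, sha256: sha256Hex(contents) };
}

// Called at analysis time: the workpaper can document that the file
// analysed is byte-for-byte the file that was uploaded.
function verifyEvidence(record: EvidenceRecord, contents: Buffer): boolean {
  return sha256Hex(contents) === record.sha256;
}

const original = Buffer.from("Q3 access review export");
const record = storeEvidence("access-review.pdf", original);
const tampered = Buffer.from("Q3 access review export (edited)");
```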

Stack

Next.js 14
App Router, server components, API routes
OpenAI GPT-4o
Vision + structured JSON mode, model selectable per user
Prisma + PostgreSQL
Relational data model via Supabase
ExcelJS
Programmatic multi-tab workpaper generation
NextAuth.js
Session-based auth with per-user rate limiting
Zod
Schema validation for all AI output and API boundaries
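The per-user rate limiting mentioned above can be sketched as a sliding-window counter keyed on the session's user id. The window and limit values are assumptions for illustration:

```typescript
// Sketch of per-user rate limiting for the API routes.
// WINDOW_MS and MAX_REQUESTS are illustrative, not the project's values.
const WINDOW_MS = 60_000;
const MAX_REQUESTS = 20;

const requestLog = new Map<string, number[]>(); // userId → request timestamps

function allowRequest(userId: string, now = Date.now()): boolean {
  // Keep only timestamps still inside the window.
  const recent = (requestLog.get(userId) ?? []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= MAX_REQUESTS) {
    requestLog.set(userId, recent);
    return false; // over limit: the API route would respond 429
  }
  recent.push(now);
  requestLog.set(userId, recent);
  return true;
}
```

In a Next.js API route this check would run right after resolving the NextAuth session, before any model call is made.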
What it does

Built like a real audit tool

Every feature was designed around how audit workpapers are actually structured — from engagement setup through to exception documentation and sign-off.

Audit Workflow Engine

Engagements, controls, risk ratings, test steps, and criteria — modelled around how audit workpapers are actually structured and reviewed.

Prisma + PostgreSQL · NIST, COBIT, FFIEC, PCAOB

AI Step Evaluation

Every test criterion evaluated with a pass/fail, confidence score, and supporting narrative written in real audit language — not generic AI output.

GPT-4o structured JSON outputs · per-step confidence

Evidence Handling

PDF, XLSX, images, and email files. SHA-256 hash on every upload. Evidence stored separately from application data with session-authenticated API access.

Multimodal vision · integrity hashing · per-user rate limiting

Workpaper Export

One-click Excel generation: testing WP, evidence log, exceptions register, remediation plan, and sign-off sheet — formatted and ready for review.

ExcelJS · multi-tab · custom cell + column formatting

Review & QC Workflow

QC scoring after analysis, reviewer approval flow, change request loop, and final sign-off. The full audit review lifecycle in a structured pipeline.

ReviewDecision model · staged run status · approval gate

Framework Mapping

Controls map to NIST CSF, COBIT 2019, FFIEC CAT, and PCAOB. Criteria excerpts attached per control and carried through into the exported workpaper.

Per-control framework + criteria storage · configurable per engagement
The project

Why I built this

Audit fieldwork involves an enormous amount of repetitive documentation — downloading evidence, reading through documents, writing workpaper narratives, populating Excel templates. The mechanical parts of that process are well-defined enough that a structured AI pipeline can handle them.

I built AlainIQ to explore whether that was actually true in practice. Not with a generic chatbot, but with a purpose-built pipeline that understands audit methodology: criteria, evidence, step-level conclusions, exceptions, and a formatted output that audit teams can actually use.

The technical challenge wasn't just the AI — it was making the output trustworthy enough to hand to a reviewer. That meant structured JSON schemas, QC validation, evidence hashing, and an Excel export that looks like it was built by someone who has actually filed audit workpapers.

What I focused on technically

Structured pipeline, not prompt hacking

Each stage has a defined JSON output schema validated with Zod. The AI is constrained to produce typed data, not free-form prose.

Domain-specific output format

The AI writes conclusions in audit language — not generic summaries. Prompts are grounded in real audit methodology (criteria, evidence, conclusion, exception).

Real workpaper format

The Excel output matches how audit teams actually lay out testing workpapers — not a report dump. Cell references, column widths, and locked headers are all programmatic.

Evidence integrity as a design constraint

SHA-256 hashing, session-authenticated file API, and evidence isolated from application data — not afterthoughts, but built in from the start.

Product walkthrough

See AlainIQ in action

A quick overview of the full flow from evidence upload to structured AI analysis and formatted Excel workpaper output.

Try it

Explore the demo

Create a demo account and walk through the full workflow — engagement setup, evidence upload, AI analysis, and Excel export. Demo credentials available on the sign-in page.

No real client data · Portfolio environment only

Explore the build

See how it's built

The 5-stage pipeline, Excel generation approach, evidence handling design, and the engineering decisions behind the structured AI output format.