> This is the markdown version of https://www.maniac.ai/. Visit the full page for interactive content.


recursive data analyst · built on your ontology

# Ask the messy question.  
Get the deterministic answer.

Arendil decomposes business questions into hundreds of grounded sub-queries across your warehouse, docs, tickets, and CRM, then reconstructs one cited answer. No SQL. No schema cleanup. No agent demo-ware.


[Ask a question of your data](https://app.maniac.ai/auth/register)[See how the ontology learns](/book-demo)

soc 2 type ii · deploys in your vpc · bring your own model · no schema cleanup

\[ The moat is in the mess \]

## An analyst that reasons recursively. On data that was never clean.

Arendil sits between your business questions and the ugliest schema in your warehouse, and makes the answer deterministic.

-   1,000× sub-queries per question, resolved recursively in one user minute.
-   0 schemas to pre-clean. The ontology forms from your mess, not against it.
-   100+ source systems unified: warehouse, SaaS, docs, tickets, code, comms.
-   15 min average from signup to first answer. Plug in one warehouse and one system of record.

\[ The shift under messy data \]

## Most AI flattens. Arendil digs in.

Clean data is a commodity. Messy data is where every enterprise actually lives. Recursion is how you reason across it.

FLAT AGENT

Ask once. Answer once. Pray the columns line up.

-   One prompt → one answer → one table
-   Breaks on joins, nulls, misspelled columns
-   Gets worse as your data grows
-   Hallucinated insight, no lineage
-   You pre-clean the schema

RECURSIVE ANALYST

Every question becomes a tree. Every leaf is a fact.

-   Decomposed tree of sub-queries across every system
-   Ontology absorbs the mess before the LLM sees it
-   RL fine-tunes on every correction
-   Cited lineage, row-level provenance
-   The ontology forms itself
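
To make the contrast concrete, here is a minimal sketch of the recursive shape, with invented names (`QueryNode`, `resolve`) rather than Arendil's actual internals: each node either resolves as a leaf against a single source system or recurses into sub-questions and synthesizes their grounded facts.

```python
from dataclasses import dataclass, field

@dataclass
class QueryNode:
    """One node in a hypothetical decomposition tree."""
    question: str
    source: str | None = None                  # leaves resolve against one system
    children: list["QueryNode"] = field(default_factory=list)

    def resolve(self) -> dict:
        if not self.children:
            # leaf: a single grounded fact with its provenance
            return {"claim": self.question, "source": self.source}
        # internal node: recurse, then synthesize the child facts
        return {"claim": self.question,
                "supported_by": [c.resolve() for c in self.children]}

tree = QueryNode(
    "Why did enterprise pipeline slip in Q3?",
    children=[
        QueryNode("Stage B → C conversion trend", source="salesforce"),
        QueryNode("Call sentiment trend", source="gong"),
    ],
)
answer = tree.resolve()    # every claim in the answer traces back to a leaf
```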

\[ The ontology layer \]

## Your business has an ontology.  
You just haven’t written it down.

Column names lie. Dashboards contradict. Definitions drift across teams. Arendil learns the true ontology from how your team actually queries, and makes it the contract every agent, app, and analyst shares.

Learned, not configured

Our RL loop catches the joins your docs forgot. Edges appear in amber the first time the system figures them out.

Deterministic under messy input

The same question resolves to the same query, every run. Reviewers can trust the number without re-reading the trace.

Shared by every agent

One source of truth, queried a thousand different ways. The ontology is the contract across the org.
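
One way to read the determinism claim above, sketched with invented names: resolution is a pure function of the normalized question and the ontology version, so the compiled query can be cached and replayed byte-for-byte.

```python
import hashlib
import json

def compile_against_ontology(question: str, version: str) -> str:
    # stand-in for the real planner; assumed deterministic per ontology version
    return f"SELECT ... /* plan for {question!r} @ ontology {version} */"

_plans: dict[str, str] = {}

def resolve(question: str, ontology_version: str) -> str:
    canonical = " ".join(question.lower().split())   # normalize case and whitespace
    key = hashlib.sha256(
        json.dumps([canonical, ontology_version]).encode()
    ).hexdigest()
    if key not in _plans:
        _plans[key] = compile_against_ontology(canonical, ontology_version)
    return _plans[key]   # same question + same ontology version → identical query

assert resolve("Why did pipeline slip?", "v7") == resolve("why did  pipeline slip?", "v7")
```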

amber = the ontology figured it out

ontology · customer concept

sources 4 · edges learned 20

salesforce

-   Id · string
-   Name · string
-   BillingAddress · address
-   OwnerId · fk
-   StageName\_\_c · enum

stripe

-   id
-   name
-   address
-   metadata.owner
-   status

netsuite

-   internalId
-   companyName
-   billAddress1
-   salesRepInternalId
-   entityStatus

hubspot

-   hs\_object\_id
-   name
-   address
-   hubspot\_owner\_id
-   lifecyclestage

concept · customer · 5 fields

-   id · string · from 4 sources
-   name · string · from 4 sources
-   address · address · from 4 sources
-   owner\_id · fk<user> · from 4 sources
-   lifecycle\_stage · enum · from 4 sources

`SELECT * FROM ontology.customer`

Four sources, one concept. Every thread in amber is a mapping the ontology learned, not a join you had to configure.
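
If one of those learned mappings were exported, it might look roughly like this; the structure is assumed for illustration, and only the field names come from the diagram above.

```python
# Hypothetical export of one learned concept field; shape invented for illustration.
customer_owner_id = {
    "concept": "customer.owner_id",
    "type": "fk<user>",
    "sources": [
        {"system": "salesforce", "field": "OwnerId"},
        {"system": "stripe",     "field": "metadata.owner"},
        {"system": "netsuite",   "field": "salesRepInternalId"},
        {"system": "hubspot",    "field": "hubspot_owner_id"},
    ],
    "learned": True,   # amber: inferred from query behavior, never hand-configured
}
```

Querying `ontology.customer` (as in the `SELECT` above) would then fan out across all four systems through edges like this one.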


\[ Watch the analyst think \]

## Every answer is a tree  
you can inspect, replay, and challenge.

One real business question. Decomposed into sub-queries. Each leaf grounded in a source system. A final answer with 14 citations.

arendil · recursive query

\[ Q3 pipeline investigation \]

depth 3 · leaves 8 · sources 6 · cites 14

ask → decompose → resolve

ask

Why did enterprise pipeline slip in Q3?

sub-query 01 · Stage transitions

sub-query 02 · Call engagement

sub-query 03 · Plan realism

sub-query 04 · ICP segments

\[01\] salesforce · Stage B → C conversion · −18% WoW

\[02\] salesforce · Stage C → Won conversion · stable

\[03\] gong · Positive-sentiment calls · −22%

\[04\] zoom · Average meeting length · −6 min

\[05\] confluence · Assumed rep capacity · +12%

\[06\] snowflake · Actual rep capacity · +3%

\[07\] snowflake · Fintech usage · −9%

\[08\] snowflake · Retail usage · +2%

answer · grounded

cites \[01\] \[03\] \[05\] \[07\] · 14 total

The Q3 slip concentrates at Stage B→C (−18%), where call sentiment collapsed (−22%) against a plan that assumed +12% rep capacity versus the +3% we actually shipped. Fintech ICP lagged (−9%) while retail held.
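
If that grounded answer came back over an API, it might carry its citations explicitly, roughly like this sketch (the field names are assumed; the facts come from the tree above):

```python
# Hypothetical answer payload; shape illustrative, facts from the leaves above.
answer = {
    "text": "The Q3 slip concentrates at Stage B→C (−18%), where call sentiment "
            "collapsed (−22%) against a plan that assumed +12% rep capacity.",
    "citations": [
        {"id": "01", "system": "salesforce", "claim": "Stage B → C conversion −18% WoW"},
        {"id": "03", "system": "gong",       "claim": "Positive-sentiment calls −22%"},
        {"id": "05", "system": "confluence", "claim": "Assumed rep capacity +12%"},
        {"id": "07", "system": "snowflake",  "claim": "Fintech usage −9%"},
    ],
}
# The contract: no claim without a citation, no citation without a source row.
```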


\[ Every system becomes substrate \]

## Arendil learns the ontology that ties them.

You don’t wire the joins. They emerge from how your team already queries. Every new system adds substrate, not setup.

amber · learned relationships compound daily

Warehouses · source of truth

Snowflake · BigQuery · Databricks · Redshift · Postgres

Systems of record · transactional ground truth

Salesforce · NetSuite · Workday · SAP · HubSpot

Tickets + knowledge · narrative ground truth

Jira · ServiceNow · Confluence · Notion · Linear

Communication · informal signal

Slack · Teams · Gong · Gmail · Zoom

Code + docs · definitions drift here

GitHub · Drive · SharePoint · Dropbox · Box

\[ Why this compounds \]

## The messier your data,  
the stronger Arendil becomes.

Recursive reasoning plus an RL-trained ontology means accuracy climbs on the schemas your consultants failed on.

day 1

Connect.

Point Arendil at your warehouse and two systems of record. Mapping begins in the background. You don’t clean a single table.

ontology coverage · 15% resolved

week 1

Ontology forms.

~70% of business questions resolve without human schema work. Learned edges light up in amber across the graph. Analysts stop being translators.

ontology coverage · 70% resolved

month 1

RL compounds.

Every correction trains the recursive analyst. Accuracy climbs above human baseline on your hardest schemas — and stays there.

ontology coverage · 100% resolved

\[ Epistemic governance \]

## Audit the reasoning.  
Not just the output.

Every run is a tree you can replay. Every correction becomes a learned edge. Every answer carries its provenance so reviewers can trust the number without re-reading the query.

Replay the tree

Walk every sub-query, see the exact row that produced each claim.

Recompute with context

When an edge is corrected, all affected answers re-resolve against the new ontology.

Learn from challenges

Human disagreement becomes training signal, not a bug report.

Row-level permissions

What the analyst can read is constrained by the same SSO + roles your BI tool uses.

reasoning\_trace · today · 14:00–15:00


1.  14:22:06 · query · sarah.chen@ · cfo office
    Q: "Why did enterprise pipeline slip in Q3?"
    decomposed · 14 sub-queries
2.  14:22:09 · resolve · analyst · recursive
    Walked 6 sources. Produced 14 facts with citations.
    grounded
3.  14:22:41 · correction · marcus.oyelowo@ · rev ops
    Flagged: "Opp.StageHistory feeds PipelineView, not dbt\_pipeline."
    edge learned · ontology updated
4.  14:23:18 · recompute · analyst · recursive
    Re-ran question against updated ontology.
    +2% closer to CFO baseline

Every answer, every correction, every relearned edge. Stored, replayable, exportable.
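
The correction entry is the interesting step: a human flag becomes a learned edge, and every answer whose lineage touched that edge is re-resolved. A minimal sketch of that loop, with every name invented:

```python
def apply_correction(ontology: dict, correction: dict, answers: list[dict]) -> list[dict]:
    """Turn a human flag into a learned edge, then find the answers to recompute."""
    ontology["edges"].append({"from": correction["wrong"],
                              "to": correction["right"],
                              "learned": True})        # disagreement → training signal
    ontology["version"] += 1
    # any answer whose lineage touched the corrected edge is now stale
    return [a for a in answers if correction["wrong"] in a["lineage"]]

stale = apply_correction(
    ontology={"edges": [], "version": 7},
    correction={"wrong": "dbt_pipeline", "right": "Opp.StageHistory → PipelineView"},
    answers=[{"id": "q3-slip", "lineage": ["dbt_pipeline", "gong.calls"]}],
)
# stale == the Q3 answer: it re-runs against ontology v8, as in the trace above
```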

replay · export · rollback

\[ Who it’s for \]

## Loved by the business.  
Trusted by IT and security.

Give every team a faster way to get answers from messy data without bypassing the controls enterprise buyers need.

Data + analytics leaders

Answer the executive question before the dashboard loads. Reclaim your analysts from "can you pull..." tickets.

Rev ops + finance

Reconcile figures across Stripe, NetSuite, and Salesforce without a week of CSV surgery.

IT + platform

One governed surface for the analyst agent. One ontology. One auditable trail. Every team gets it.

\[ Enterprise-ready by default \]

## The controls your procurement, security, and legal teams will ask for.

Built to pass review at financial institutions, healthcare systems, and global services firms. Same controls whether we’re running in our cloud or inside yours.

SSO, SCIM, SOC 2

Okta, Entra, SAML 2.0, OIDC, row-level access inherited from your warehouse.

Bring any model

OpenAI, Anthropic, open-source, or your own endpoint. We never lock the reasoning layer.

Deploy in your VPC

Managed cloud, single-tenant VPC, or on-prem. Your warehouse credentials never leave your boundary.

Row-level lineage

Every claim traces to the exact row. Every corrected edge is stored and replayable.

Reversible by default

Pause, rollback, or quarantine an answer path without taking the system down.

Enterprise assurance

Security pack, red-team reports, and procurement package available on request.

\[ Next step \]

## Ask your first messy question.

Plug in one warehouse and one system of record. See the ontology form in 15 minutes. Get your first cited answer before your next stand-up.
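
What that might look like from code, assuming a hypothetical SDK; the `arendil` package and every method here are invented for illustration.

```python
from arendil import Arendil    # hypothetical package

client = Arendil(api_key="...")

# one warehouse + one system of record; mapping begins in the background
client.connect("snowflake", account="acme", role="ARENDIL_READONLY")
client.connect("salesforce", auth="oauth")

answer = client.ask("Why did enterprise pipeline slip in Q3?")
print(answer.text)
for cite in answer.citations:
    print(cite.id, cite.system, cite.claim)
```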

[Start with your data](https://app.maniac.ai/auth/register)[Book a technical walkthrough](/book-demo)[Talk to our enterprise team](/contact)

accuracy compounds after launch · SOC 2 · VPC · BYO model · recursion you can audit

---

*Arendil: high-throughput background agents. Opus-quality outputs at 1/50 of the cost. Learn more at [maniac.ai](https://www.maniac.ai).*