Zetesis Labs

World Models for Specialized, Adaptable AI that can take Action, Safely

A research-driven venture from ARTPARK @ IISc.

We build systems that make discovery measurable and dependable: measurement-grounded, world-model-based, and verification-aware.


1. The Bet

Market Context: Industry Shift to 'Features'

I. Post-training LLMs and RL infrastructure
II. Neuro-symbolic AI (NeSy)
III. Small language models (SLMs)
IV. Causal AI and scientific reasoning

Industry now seeks specialized intelligence (coding, root-cause analysis, etc.), not generic Q&A. This shift has invited the four main paradigms above, and each of them needs local context: a world model.

The Bottleneck

The "Cold Start" Problem

Specialized answers and verification require structured knowledge: a world model. Lacking one is the terminal constraint on deployment.

Failure Mode: Path A

Theoretical route: a universal world model (a "god model").

f(m) = ∂V/∂x ...

Computationally intractable.

Failure Mode: Path B

Wiring Knowledge from Experience

ŷ = θ₀ + θ₁x ...

Yields a bespoke service per deployment: "consultancy", not a product.

Zetesis Resolution

Principled Calculus of Discovery

URT: a universal process of discovery, enabling automated discovery and verifiable, adaptable construction.

2. The Solution: Intelligence Stack

Layer 1: World Model Building

Construction of the world model G. The internal mechanism is encapsulated.

Using: Universal Representation & Reasoning Theory (URT). View Demo API →
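From the consumer side, a constructed world model G can be pictured as a typed graph of entities and relations that the planning and verification layers query. The sketch below is a hypothetical, minimal interface under that assumption; the entity and relation names are illustrative, and the actual URT construction mechanism is encapsulated and not modeled here.

```python
from collections import defaultdict

class WorldModel:
    """Illustrative consumer-side view of a world model G: a typed graph
    of entities connected by named relations. All names are hypothetical."""

    def __init__(self):
        # (source entity, relation name) -> list of target entities
        self._edges = defaultdict(list)

    def assert_relation(self, src, relation, dst):
        """Record that `src` stands in `relation` to `dst`."""
        self._edges[(src, relation)].append(dst)

    def query(self, src, relation):
        """Return all entities related to `src` by `relation`."""
        return list(self._edges[(src, relation)])

# Usage: downstream layers query structural facts, e.g. during fault isolation.
g = WorldModel()
g.assert_relation("pump_3", "feeds", "reactor_A")
g.assert_relation("pump_3", "monitored_by", "vib_sensor_7")
print(g.query("pump_3", "feeds"))   # ['reactor_A']
```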

Layer 2: Planning & Execution

A neuro-symbolic RCA circuit: it ingests fault context and performs fault isolation.

RCA

Root Cause Analysis with ranked suspects and minimum next checks.

4M Framework

Man: operator context
Machine: telemetry/configs
Method: process recipes
Material: batch provenance

incident → candidate causes → ranked suspects → minimum checks → resolution

Outputs are tethered to evidence, not narrative fluency.
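The pipeline above (incident → candidate causes → ranked suspects → minimum checks) can be sketched as follows. This is a toy illustration, not the production circuit: the 4M cause catalogue, the evidence keys, and the presence-based scoring are all assumed for the example.

```python
from dataclasses import dataclass

@dataclass
class Cause:
    """A candidate cause tagged with its 4M category and the evidence
    keys (telemetry/config/log fields) that bear on it. Hypothetical."""
    name: str
    category: str          # "man" | "machine" | "method" | "material"
    evidence_keys: list

CATALOGUE = [
    Cause("operator_skipped_purge", "man", ["purge_log"]),
    Cause("spindle_bearing_wear", "machine", ["vibration_rms", "temp_c"]),
    Cause("wrong_recipe_version", "method", ["recipe_id"]),
    Cause("contaminated_batch", "material", ["batch_id", "supplier_coa"]),
]

def rank_suspects(incident_evidence: dict) -> list:
    """Score each candidate cause by the fraction of its evidence keys
    observed as anomalous in the incident context; rank descending."""
    ranked = []
    for cause in CATALOGUE:
        hits = sum(1 for k in cause.evidence_keys
                   if incident_evidence.get(k) == "anomalous")
        ranked.append((cause.name, hits / len(cause.evidence_keys)))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

def minimum_next_checks(incident_evidence: dict, top_k: int = 2) -> list:
    """For the top-ranked suspects, list evidence keys not yet observed:
    the minimum checks that would discriminate between them."""
    top = {name for name, _ in rank_suspects(incident_evidence)[:top_k]}
    return sorted({k for c in CATALOGUE if c.name in top
                   for k in c.evidence_keys if k not in incident_evidence})

incident = {"vibration_rms": "anomalous", "temp_c": "normal"}
print(rank_suspects(incident)[0][0])    # spindle_bearing_wear
print(minimum_next_checks(incident))    # ['purge_log']
```

Ranking here is tethered to which evidence keys are actually present and anomalous, so a suspect can only rise by pointing at observed evidence, not by narrative plausibility.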

Calibration

A mapping maintained under changing conditions, not a one-time correction.

Multi-Layer Stack

1. Step tests (controlled lab)
2. Phantom validation (confounders explicit)
3. Field closure (drift accounting)
Bayesian calibration: drift and confounding treated as ignorance objects, not nuisances.

This is where sensing startups die; here it is made transferable, not artisanal.
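A minimal sketch of the "drift as ignorance object" idea: track a sensor offset as a Gaussian belief, where drift appears explicitly as random-walk process noise that widens the belief between reference checks. The variances and the one-dimensional offset model are illustrative assumptions, not the production calibrator.

```python
def calibrate_step(mean, var, reading, reference,
                   obs_var=0.05, drift_var=0.01):
    """One Bayesian update of the offset belief (a 1-D Kalman step).

    Predict: drift_var inflates uncertainty (drift is modeled, not ignored).
    Update: a reference measurement (e.g. a lab step test or phantom) pulls
    the belief toward the observed offset reading - reference."""
    var = var + drift_var                    # predict: account for drift
    observed_offset = reading - reference
    gain = var / (var + obs_var)             # Kalman gain
    mean = mean + gain * (observed_offset - mean)
    var = (1.0 - gain) * var
    return mean, var

# Usage: repeated reference checks keep the mapping maintained over time,
# rather than applying a one-time correction.
mean, var = 0.0, 1.0                         # vague prior over the offset
for reading, reference in [(10.4, 10.0), (10.5, 10.0), (10.45, 10.0)]:
    mean, var = calibrate_step(mean, var, reading, reference)
print(round(mean, 2))                        # 0.45
```

Because drift re-inflates the variance at every step, the belief never freezes: if field conditions shift the true offset, later reference checks pull the estimate along with it.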

Layer 3: Verification

An inductive-reasoning gate that checks claims against ground truth.

import Mathlib

/-- No Free Lunch Theorem: summed uniformly over all possible target
    functions, no learner outperforms any other off the training set. -/

structure Learner (X Y : Type) where
  hypothesis : List (X × Y) → X → Y

variable {X Y : Type} [Fintype X] [DecidableEq X] [Fintype Y] [DecidableEq Y]

/-- Number of off-training-set points on which the learner disagrees with f. -/
def offTrainingError (L : Learner X Y) (f : X → Y)
    (train test : Finset X) : ℕ :=
  ((test \ train).filter
    (fun x => L.hypothesis (train.toList.map fun t => (t, f t)) x ≠ f x)).card

theorem noFreeLunch (L₁ L₂ : Learner X Y) (train test : Finset X) :
    ∑ f : X → Y, offTrainingError L₁ f train test =
    ∑ f : X → Y, offTrainingError L₂ f train test := by
  -- Proof idea: on each off-training point x, as f ranges over all target
  -- functions, every output value occurs equally often, so each learner's
  -- prediction is wrong on the same number of targets.
  sorry

Formal verification in Lean4 ensures claims survive scrutiny.

Ecosystem

Ather · IISc · ARTPARK · DST · Temple