Zetesis Lab

Zetesis is an attempt to make discovery engineerable. We treat ignorance as a first-class object with internal geometry, and inquiry as a controllable operator that changes what can be expressed, tested, and retained.

The lab exists because most real systems fail not for lack of computation or data, but from fragile understanding under drift. Teams can plan once they have a model and verify once they have disciplined measurement, yet they repeatedly stall at the same upstream bottleneck: constructing a living, local world model that stays usable as reality changes.

The Thesis

AI should not output "answers"; it should solve real problems by producing:

1. a local world model,
2. a concrete plan and reproducible execution, and
3. an auditable explanation tethered to observed results.

The point of AI is not to eliminate uncertainty; the point is to keep uncertainty structured so execution remains safe, fast, and revisable.


A Practical Definition of Intelligence

(as we engineer it)

We define general intelligence operationally as a loop over three objects:

1. World-building (abduction)

Construct and maintain an internal world model that explains what is happening and what could happen. This is not "fitting a curve"; it is choosing variables, mechanisms, and representations that make the situation legible.

2. Planning & Execution (interventions)

Use the world model to synthesize actions that either (a) change the world, or (b) collapse uncertainty cheaply.

3. Verification (induction)

Check claims against traces; detect drift; keep outputs auditable and corrigible.
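
As a minimal sketch (every name below, from WorldModel to abduce, is an illustrative placeholder rather than our actual API), the loop looks like this:

```python
# Minimal sketch of the world-building -> intervention -> verification loop.
# All names here (WorldModel, abduce, intervene, verify) are illustrative
# placeholders, not the lab's actual types or API.
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    beliefs: dict[str, str] = field(default_factory=dict)    # what is currently explained
    open_questions: list[str] = field(default_factory=list)  # explicit, structured ignorance

def abduce(traces: list[str]) -> WorldModel:
    """World-building: turn raw traces into beliefs plus named unknowns."""
    model = WorldModel()
    for t in traces:
        model.open_questions.append(f"what mechanism produced {t!r}?")
    return model

def intervene(question: str) -> str:
    """Planning & execution: run the cheapest probe that attacks one question."""
    return f"probe result for: {question}"  # stand-in for acting on the world

def verify(model: WorldModel, question: str, observation: str) -> WorldModel:
    """Verification: check the claim against the fresh trace and retain it."""
    model.beliefs[question] = observation
    model.open_questions.remove(question)
    return model

# Each pass converts one named unknown into an auditable belief.
model = abduce(["latency spike", "retry storm"])
while model.open_questions:
    q = model.open_questions[0]
    model = verify(model, q, intervene(q))
print(model.beliefs)
```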

Key motivation: Humans do not merely adapt their internal model to the world; they repeatedly try to reshape the world to match internal models. That capability is what produces disproportionate impact—and also disproportionate failure.

The Object: A Local World Model

The primary object is a local theory-state—what the system currently believes is going on, what it is holding fixed, what it can express, and what it has actually observed. Internally we represent a theory-state in a compact, typed format (URS), and we track not just "uncertainty," but structured ignorance.
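
As a rough illustration only (the field names are assumptions; URS itself is not specified here), a theory-state bundles those four things plus typed unknowns:

```python
# Hypothetical sketch of a theory-state record in the spirit of URS.
# Field names and types are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class TheoryState:
    believed: dict[str, str] = field(default_factory=dict)   # what we think is going on
    held_fixed: list[str] = field(default_factory=list)      # assumptions not under test
    expressible: list[str] = field(default_factory=list)     # what the current language can state
    observed: list[str] = field(default_factory=list)        # traces actually seen, with provenance
    unknowns: dict[str, str] = field(default_factory=dict)   # structured ignorance: question -> kind
```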

The Core Design Reversal

"Knowledge is what remains after inquiry. Ignorance is not a residual error term; it is the generative substrate. If you can represent the geometry of ignorance, you can engineer inquiry."

The Calculus of Discovery

[epistemology]

The calculus of discovery is the agent-independent part of our work: a disciplined way to go from traces to mechanisms and back again, without collapsing into either storytelling (pure narrative) or brute learning (fit-first). It treats inquiry as a sequence of explicit operators:

Imagine

Propose new variables and mechanisms; move unarticulated regularities into articulated questions.

Experiment

Design minimum interventions that collapse uncertainty cheaply.

Describe

Retype traces into stable representations; keep language consistent with data and provenance.

Communicate

Import external knowledge as traces/representations without forcing it into the model as unquestioned axioms.
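
Read as code, with the theory-state modeled as a plain dict (all names, signatures, and toy bodies below are illustrative assumptions, not the actual calculus), each operator is an explicit transform:

```python
# Sketch of the four inquiry operators as explicit transforms on a
# theory-state, here modeled as a plain dict. Names, signatures, and the
# toy bodies are illustrative assumptions.

def imagine(state: dict) -> dict:
    """Propose a new variable or mechanism: an unarticulated regularity
    becomes an explicit, answerable question."""
    state["questions"].append("does an unmodeled variable mediate the effect?")
    return state

def experiment(state: dict) -> dict:
    """Design the minimum intervention that collapses some uncertainty."""
    q = state["questions"].pop(0)
    state["traces"].append(f"probe outcome for: {q}")
    return state

def describe(state: dict) -> dict:
    """Retype raw traces into stable representations, keeping provenance."""
    state["representations"] = [{"claim": t, "source": "probe"} for t in state["traces"]]
    return state

def communicate(state: dict, report: str) -> dict:
    """Import external knowledge as a trace, not as an unquestioned axiom."""
    state["traces"].append(f"external report: {report}")
    return state

# Inquiry is a sequence of these operators applied to one evolving state.
state = {"questions": [], "traces": [], "representations": []}
state = communicate(state, "prior study result")
state = imagine(state)
state = experiment(state)
state = describe(state)
print(state["representations"])
```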

Measurement Classes

Plausibility: Could a claim be made consistent with the current structure?
Verification: Is there evidence or proof for the claim against the current traces and representations?
Invariance: How robust is the claim under future coherent challenges (threat sets, stress tests)?
Efficiency: Validated novelty per unit resource; how much externally checkable knowledge is obtained per budget.
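
To make the separation concrete, here is a hypothetical scorecard that keeps the four classes as distinct fields instead of collapsing them into one number (names and signatures are assumptions):

```python
# Hypothetical scorecard keeping the four measurement classes as distinct
# fields, never collapsed into one scalar. Names and signatures are
# assumptions for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scorecard:
    plausibility: bool   # consistent with the current structure?
    verification: bool   # evidenced against current traces?
    invariance: float    # fraction of coherent challenges survived
    efficiency: float    # validated novelty per unit resource

def assess(claim: str,
           fits_structure: Callable[[str], bool],
           traces: list[str],
           challenges: list[Callable[[str], bool]],
           validated_novelty: float,
           budget: float) -> Scorecard:
    survived = sum(1 for challenge in challenges if challenge(claim))
    return Scorecard(
        plausibility=fits_structure(claim),
        verification=any(claim in t for t in traces),  # toy evidence check
        invariance=survived / max(len(challenges), 1),
        efficiency=validated_novelty / max(budget, 1e-9),
    )
```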

The goal is not "truth." The goal is a world model with explicit ignorance that tells you what to do next, what would change your mind, and which parts of the model are stable under stress.


Cognitive Architecture

[noology]

The agent-dependent part: not only "what is known," but the structure of the agent that is doing the knowing—representation biases, inductive priors, and the capacity to invent new representations when existing ones cannot compress the situation.

Central Claim

Some structure is learned, but which structure gets learned depends primarily on the agent, not on the task. Two agents can face the same traces and diverge wildly because they factor the world differently. An agent whose internal structure is compatible with a task performs disproportionately well; an agent that can discover new structure (new variables, new grammars, new decompositions) is what we would call "genius" in practice.

Learning

Accumulates bedrock content. Tends to constrain you to what is repeatedly experienced and safely generalized.

Imagination

Ability to posit a structure cheaply from thin traces and then act to make it real. Allows a local world model to be created quickly—even if imperfect—and then enforced on reality.


Geometry of Ignorance

Ignorance is not a scalar. It has types and boundaries.

I. What is explicitly known and supported.

II. What is explicitly known to be unknown.

III. What is implicitly present in traces and regularities but not yet structured into the current representation.

IV. What is outside the current language and measurement system.

A discovery system fails when it hides these distinctions behind a single score. Our approach keeps them explicit so the system can choose the minimum next question.
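
A sketch of that choice, with the four regions as a typed enum (the names and the priority rule are illustrative assumptions):

```python
# Sketch: the four regions of ignorance as a typed enum, driving the
# choice of the minimum next question. Names and the priority rule are
# illustrative assumptions.
from enum import Enum, auto
from typing import Optional

class Ignorance(Enum):
    KNOWN = auto()           # I.   explicitly known and supported
    KNOWN_UNKNOWN = auto()   # II.  explicitly known to be unknown
    UNSTRUCTURED = auto()    # III. in the traces, but not yet represented
    INEXPRESSIBLE = auto()   # IV.  outside the current language/measurements

def next_question(questions: dict[str, Ignorance]) -> Optional[str]:
    """Prefer named unknowns (cheap to probe) over unstructured
    regularities (which need new variables) over the inexpressible
    (which needs a new language before it can even be asked)."""
    for kind in (Ignorance.KNOWN_UNKNOWN, Ignorance.UNSTRUCTURED,
                 Ignorance.INEXPRESSIBLE):
        for question, region in questions.items():
            if region == kind:
                return question
    return None
```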


Case Studies

(rendered as diagrams, not essays)

Case 1 — When everyone agrees and nothing moves

polite agreement signals → micro-delays and deferrals → multiple plausible constraints → small probes → one stable constraint → governance shift → execution

Takeaway: Stalled action usually signals a missing constraint; small probes surface it, and then decisions become obvious.

Case 2 — A doctor with the next patient

sparse presentation traces → candidate explanations → single discriminating test → collapse of dangerous branch → constrained action → follow-up verification

Takeaway: Diagnosis begins as a working explanation; tests are chosen to eliminate possibilities with minimum cost, not to maximize confidence.
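
One way to read that policy, as a toy sketch with invented tests and costs: rank candidate tests by live hypotheses eliminated per unit cost.

```python
# Toy sketch of Case 2's policy: choose the test that eliminates the most
# live hypotheses per unit cost. Tests, costs, and hypothesis names are
# invented for illustration.

def pick_test(live: set[str],
              tests: dict[str, tuple[set[str], float]]) -> str:
    """tests maps a test name to (hypotheses it would rule out, its cost)."""
    def eliminations_per_cost(name: str) -> float:
        ruled_out, cost = tests[name]
        return len(ruled_out & live) / cost
    return max(tests, key=eliminations_per_cost)

live = {"viral", "bacterial", "cardiac"}
tests = {
    "ecg":         ({"cardiac"}, 1.0),             # cheap; rules out the dangerous branch
    "blood_panel": ({"viral", "bacterial"}, 5.0),  # broader but costlier
}
print(pick_test(live, tests))  # -> "ecg": 1.0 elimination per unit cost beats 0.4
```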

Case 3 — A prehistoric hunter with one attempt

thin traces (partial, unreliable) → possible worlds → committed plan → world-shaping action (tools/coordination) → outcome → model update

Takeaway: When failure is too expensive for trial-and-error, thin traces steer a world model; the plan reshapes the environment.

Case 4 — Periodic table: structure makes facts generative

shared catalog of facts → imposed organizing structure → blank slots → predicted properties → directed search → verification

Takeaway: Breakthroughs come from changing representation, not collecting more examples; structure choice controls what you can see next.


What We Want You to Walk Away With

1. Discovery can be engineered: it is a sequence of operators under explicit ignorance, not a vibe.

2. Robustness is a stronger primitive than confidence: what survives future coherent challenges is what you can safely build on.

3. Representation is not a formatting choice: it is a generative grammar that controls what mechanisms can be discovered.

4. A serious system earns the right to plan: it outputs minimum next actions tethered to evidence, and it knows what would change its mind.


Invitation

If you want to understand what we do, do not ask "what model do you use?" Ask what the system currently believes, what it is holding fixed, what would change its mind, and which parts of its model are stable under stress.

Zetesis is built for domains where failure has a cost, drift is real, and justification chains matter.