Zetesis is an attempt to make discovery engineerable. We treat ignorance as a first-class object with internal geometry, and inquiry as a controllable operator that changes what can be expressed, tested, and retained.
The lab exists because most real systems fail not for lack of computation or data, but from fragile understanding under drift. Teams can plan once they have a model, and they can verify once they have disciplined measurement, but they repeatedly stall at the same upstream bottleneck: constructing a living, local world model that stays usable as reality changes.
AI should not output "answers"; it should solve real problems:
The point of AI is not to eliminate uncertainty; the point is to keep uncertainty structured so execution remains safe, fast, and revisable.
(as we engineer it)
We define general intelligence operationally as a loop over three objects:
Construct and maintain an internal world model that explains what is happening and what could happen. This is not "fitting a curve"; it is choosing variables, mechanisms, and representations that make the situation legible.
Use the world model to synthesize actions that either (a) change the world, or (b) collapse uncertainty cheaply.
Check claims against traces; detect drift; keep outputs auditable and corrigible.
The primary object is a local theory-state—what the system currently believes is going on, what it is holding fixed, what it can express, and what it has actually observed. Internally we represent a theory-state in a compact, typed format (URS), and we track not just "uncertainty," but structured ignorance.
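The theory-state can be pictured as a typed record. URS is Zetesis's internal format and its schema is not public, so every field name below is an illustrative assumption, not the real representation; the sketch only shows the four ingredients named above (beliefs, what is held fixed, what is expressible, what has been observed).

```python
from dataclasses import dataclass, field

@dataclass
class TheoryState:
    """Sketch of a local theory-state. Field names are assumptions,
    not the actual URS schema."""
    beliefs: dict[str, str] = field(default_factory=dict)   # mechanism -> current claim
    held_fixed: set[str] = field(default_factory=set)       # assumptions currently frozen
    vocabulary: set[str] = field(default_factory=set)       # variables the model can express
    observations: list[dict] = field(default_factory=list)  # traces actually seen

    def can_express(self, variable: str) -> bool:
        """A question about a variable is only articulable if the
        current representation contains that variable."""
        return variable in self.vocabulary
```

The `can_express` check is the point of the sketch: what the system can even ask is bounded by its current vocabulary, which is why structured ignorance has to be tracked alongside beliefs.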
"Knowledge is what remains after inquiry. Ignorance is not a residual error term; it is the generative substrate. If you can represent the geometry of ignorance, you can engineer inquiry."
[epistemology]
The calculus of discovery is the agent-independent part of our work: a disciplined way to go from traces to mechanisms and back again, without collapsing into either storytelling (pure narrative) or brute learning (fit-first). It treats inquiry as a sequence of explicit operators:
Propose new variables and mechanisms; move unarticulated regularities into articulated questions.
Design minimum interventions that collapse uncertainty cheaply.
Retype traces into stable representations; keep language consistent with data and provenance.
Import external knowledge as traces/representations without forcing it into the model as unquestioned axioms.
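The operator view above can be made concrete: inquiry is a sequence of explicit, named transformations of a theory-state. The operator names below are paraphrases of the four bullets, not an official API, and the state here just logs the sequence where a real system would transform variables, traces, and representations.

```python
from enum import Enum, auto

class Operator(Enum):
    PROPOSE = auto()  # posit new variables/mechanisms; articulate a question
    PROBE = auto()    # minimum intervention that collapses uncertainty cheaply
    RETYPE = auto()   # restructure traces into stable representations
    IMPORT = auto()   # external knowledge enters as traces, never as axioms

def apply(op: Operator, state: dict) -> dict:
    """Inquiry as a sequence of operator applications. This stub only
    records the sequence; a real implementation would rewrite the
    theory-state's variables, traces, and representations."""
    return {**state, "history": state.get("history", []) + [op.name]}

state: dict = {}
for op in (Operator.PROPOSE, Operator.PROBE, Operator.RETYPE):
    state = apply(op, state)
# state["history"] == ["PROPOSE", "PROBE", "RETYPE"]
```

Making the operators explicit is what separates this from storytelling (no operators, just narrative) and from fit-first learning (one implicit operator applied blindly).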
| Class | Definition |
|---|---|
| Plausibility | Could a claim be made consistent with current structure? |
| Verification | Is there evidence/proof for this claim against the current traces and representations? |
| Invariance | How robust is a claim under future coherent challenges (threat sets, stress tests)? |
| Efficiency | Validated novelty per unit resource—how much externally checkable knowledge obtained per budget. |
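The four classes in the table are deliberately kept as separate axes rather than collapsed into one score. A minimal sketch, assuming (for illustration only) that each axis is normalized to [0, 1]:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ClaimAssessment:
    """The four evaluation axes, kept separate on purpose: a single
    aggregate hides which kind of support a claim actually has.
    The [0, 1] ranges are an assumption for illustration."""
    plausibility: float  # consistent with current structure?
    verification: float  # evidenced against current traces?
    invariance: float    # survives coherent future challenges?
    efficiency: float    # validated novelty per unit budget

    def weakest_axis(self) -> str:
        """The minimum next action targets the weakest epistemic axis,
        not the aggregate score."""
        axes = {
            "plausibility": self.plausibility,
            "verification": self.verification,
            "invariance": self.invariance,
        }
        return min(axes, key=axes.get)
```

A claim that is plausible and verified but low-invariance, for instance, points the system toward stress tests rather than more evidence gathering.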
The goal is not "truth." The goal is a world model with explicit ignorance that tells you what to do next, what would change your mind, and which parts of the model are stable under stress.
[noology]
The agent-dependent part: not only "what is known," but the structure of the agent that is doing the knowing—representation biases, inductive priors, and the capacity to invent new representations when existing ones cannot compress the situation.
Some structure is learned, but what gets learned depends primarily on the agent, not on the task. Two agents can face the same traces and diverge wildly because they factor the world differently. An agent with a compatible internal structure for a task performs disproportionately well; an agent that can discover new structure (new variables, new grammars, new decompositions) is what we would call "genius" in practice.
Accumulates bedrock content. Tends to constrain you to what is repeatedly experienced and safely generalized.
Ability to posit a structure cheaply from thin traces and then act to make it real. Allows a local world model to be created quickly—even if imperfect—and then enforced on reality.
Ignorance is not a scalar. It has types and boundaries.
What is explicitly known and supported.
What is explicitly known to be unknown.
What is implicitly present in traces and regularities but not yet structured into the current representation.
What is outside the current language and measurement system.
A discovery system fails when it hides these distinctions behind a single score. Our approach keeps them explicit so the system can choose the minimum next question.
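The four regions above, and the "minimum next question" they license, can be sketched as follows. The region names and the probe costs are paraphrases and illustrative numbers, not Zetesis's official terminology.

```python
from enum import Enum, auto

class Ignorance(Enum):
    """The four regions described above (names are paraphrases)."""
    KNOWN = auto()            # explicitly known and supported
    KNOWN_UNKNOWN = auto()    # explicitly known to be unknown
    LATENT = auto()           # present in traces but not yet structured
    UNREPRESENTABLE = auto()  # outside the current language/measurement

# Rough resolution costs, purely illustrative: known unknowns can be
# probed directly; latent structure needs retyping; unrepresentable
# regions require inventing new language or instruments.
PROBE_COST = {
    Ignorance.KNOWN_UNKNOWN: 1,
    Ignorance.LATENT: 3,
    Ignorance.UNREPRESENTABLE: 10,
}

def next_question(open_regions: set[Ignorance]) -> Ignorance:
    """Choose the minimum next question: the cheapest open region
    to collapse. Collapsing everything into one score would make
    this choice impossible."""
    return min(open_regions & PROBE_COST.keys(), key=PROBE_COST.get)
```

Keeping the regions distinct is exactly what lets the system prefer a cheap probe of a known unknown over an expensive attack on something it cannot yet represent.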
(rendered as diagrams, not essays)
Takeaway: Stalled action is usually a missing constraint; small probes surface it, and then decisions become obvious.
Takeaway: Diagnosis begins as a working explanation; tests are chosen to eliminate possibilities with minimum cost, not to maximize confidence.
Takeaway: When failure is too expensive for trial-and-error, thin traces steer a world model; the plan reshapes the environment.
Takeaway: Breakthroughs come from changing representation, not collecting more examples; structure choice controls what you can see next.
Discovery can be engineered: it is a sequence of operators under explicit ignorance, not a vibe.
Robustness is a stronger primitive than confidence: what survives future coherent challenges is what you can safely build on.
Representation is not a formatting choice: it is a generative grammar that controls what mechanisms can be discovered.
A serious system earns the right to plan: it outputs minimum next actions tethered to evidence, and it knows what would change its mind.
If you want to understand what we do, do not ask "what model do you use?" Ask:
Zetesis is built for domains where failure has a cost, drift is real, and justification chains matter.