Intelligent Micro Agents through Validated Autonomous Learning

NPCs that think,
not just react.

Tiny neural networks that give game characters autonomous, goal-driven behavior. No behavior trees. No scripting. Learned intelligence that generalizes to situations it has never seen before.

1M
Parameters
<0.1ms
Per Decision
100%
Valid Output
10K+
Simultaneous NPCs

Game AI hasn't evolved in 20 years.

Every NPC in every major game runs on hand-authored behavior trees or utility systems. Designers manually script every possible decision. The result: predictable, brittle characters that break the moment they encounter a situation the designer didn't anticipate.

Players notice. They learn the patterns. The illusion of intelligence collapses after a few hours. Studios spend thousands of engineering hours maintaining AI systems that still feel robotic.

Traditional

Behavior Trees & Utility AI

Every decision path manually authored. Combinatorial explosion as game complexity grows. Breaks on novel situations. Months of designer iteration. No generalization — if it wasn't coded, it can't happen.

Ima Val

Learned Neural Policy

Train once on behavioral examples. Model learns patterns, not rules. Generalizes to novel states automatically. 45 minutes to train. Characters develop emergent behavior the designer never explicitly programmed.

Three steps to intelligent NPCs.

01

Define Your World

Map your game's locations, stats, actions, and goals to compact tokens. The vocabulary is your game's language — locations, verbs, needs, intentions. A 20-token input captures everything an NPC needs to make a decision.
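For illustration, here is a minimal sketch of what a vocabulary and fixed-length encoding might look like, written in TypeScript. The token names (loc:forest, need:hunger_high, and so on) are hypothetical stand-ins, not Ima Val's actual vocabulary.

```typescript
// Hypothetical vocabulary: every location, need, goal, and verb in the game
// maps to a small integer token ID.
const VOCAB: Record<string, number> = {
  "<pad>": 0,
  "loc:forest": 1, "loc:town": 2, "loc:mine": 3,
  "need:hunger_low": 4, "need:hunger_high": 5,
  "threat:none": 6, "threat:nearby": 7,
  "goal:gather": 8, "goal:sell": 9, "goal:rest": 10,
  "act:move": 11, "act:harvest": 12, "act:trade": 13,
};

// Encode one NPC's observable situation as a fixed-length 20-token input.
function encodeState(tokens: string[], maxLen = 20): number[] {
  const ids = tokens.map(t => VOCAB[t] ?? VOCAB["<pad>"]);
  while (ids.length < maxLen) ids.push(VOCAB["<pad>"]); // pad to fixed length
  return ids.slice(0, maxLen);
}

// Example: a hungry NPC in the forest, no threat nearby, currently gathering.
const input = encodeState(["loc:forest", "need:hunger_high", "threat:none", "goal:gather"]);
```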

02

Generate Training Data

Describe what good NPC behavior looks like through example trajectories. Our corpus generator creates tens of thousands of validated behavioral sequences. The data is the curriculum — no reward functions, no RL instability.
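As a rough sketch of what one behavioral example could look like in data form: a short trajectory of state, goal, and action tokens, plus a toy validity check standing in for the corpus generator's rule filter. All field and token names here are illustrative, not the real schema.

```typescript
// Hypothetical shape of one training example: a state snapshot paired with
// the goal the NPC holds and the action a well-behaved NPC should take next.
interface TrajectoryStep {
  state: string[];   // observation tokens, as in Step 01
  goal: string;      // the intent held at this step
  action: string;    // the target output the model learns to imitate
}

// A short "gather, then go sell" trajectory. A corpus generator would emit
// tens of thousands of sequences like this, rejecting any step that breaks
// the game's rules before it ever reaches training.
const trajectory: TrajectoryStep[] = [
  { state: ["loc:forest", "inv:empty"], goal: "goal:gather", action: "act:harvest" },
  { state: ["loc:forest", "inv:full"],  goal: "goal:sell",   action: "act:move" },
  { state: ["loc:town",   "inv:full"],  goal: "goal:sell",   action: "act:trade" },
];

// Toy rule check: trading is only legal in town.
function isValid(step: TrajectoryStep): boolean {
  return !(step.action === "act:trade" && !step.state.includes("loc:town"));
}

const corpus = [trajectory].filter(t => t.every(isValid));
```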

03

Train & Deploy

Train in the browser in under an hour. Export a single portable JSON file. Integrate with any engine — Unity, Unreal, Godot, custom. Pure matrix math, no ML frameworks at runtime. GPU-batched inference under 0.1ms per NPC.
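To show what "pure matrix math, no ML frameworks" can mean at runtime, here is a deliberately simplified decision function over a hypothetical exported JSON schema. This is not Ima Val's actual network; it only demonstrates that a token-in, token-out policy reduces to a few array multiplications any engine can run.

```typescript
// Hypothetical schema for the exported model file: just arrays of numbers.
interface ExportedModel {
  embedding: number[][]; // [vocabSize][dim]
  w1: number[][];        // [dim][hidden]
  w2: number[][];        // [hidden][numActions]
}

// Plain vector-matrix multiply; no framework involved.
function matmulVec(x: number[], w: number[][]): number[] {
  const out = new Array(w[0].length).fill(0);
  for (let i = 0; i < w.length; i++)
    for (let j = 0; j < w[0].length; j++) out[j] += x[i] * w[i][j];
  return out;
}

// One decision, simplified to a single output token: embed the 20 input
// tokens, average-pool, apply two layers, pick the highest-scoring action.
function decide(model: ExportedModel, inputIds: number[]): number {
  const dim = model.embedding[0].length;
  const pooled = new Array(dim).fill(0);
  for (const id of inputIds)
    for (let d = 0; d < dim; d++) pooled[d] += model.embedding[id][d] / inputIds.length;
  const hidden = matmulVec(pooled, model.w1).map(v => Math.max(0, v)); // ReLU
  const logits = matmulVec(hidden, model.w2);
  return logits.indexOf(Math.max(...logits)); // token ID of the chosen action
}

// const model: ExportedModel = JSON.parse(exportedJsonText); // loaded once, shared by all NPCs
```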

Not a concept. A working system.

Ima Val has been prototyped and tested in a live simulation environment with a full economic system, threat dynamics, day-night cycles, and inventory management.

100%

Valid Decisions

309 consecutive actions without a single illegal output. The model learned game rules from data alone — no hardcoded constraints in the network.

10 Goals

Autonomous Intent

The model reads AND writes its own goal token. It decides what it wants, then acts on that intent — creating multi-step plans like gather→travel→sell→buy without scripting.
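A sketch of that feedback loop, assuming a hypothetical policy signature: the goal token the model writes on one tick is appended to its own observation on the next, which is what lets multi-step plans persist without scripting.

```typescript
// Hypothetical policy signature: given observation tokens plus the NPC's
// current goal token, the network returns a (possibly new) goal and an action.
type Policy = (tokens: string[]) => { goal: string; action: string };

interface NPC {
  goal: string;        // intent carried between ticks
  observe(): string[]; // observation tokens from the game world
}

// Intent feedback loop: the model rewrites its own goal, the engine executes
// only the action, and the new goal becomes part of next tick's input.
function stepNPC(npc: NPC, policy: Policy): string {
  const out = policy([...npc.observe(), npc.goal]);
  npc.goal = out.goal;
  return out.action;
}
```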

<0.1ms

Inference Speed

Each decision takes under a tenth of a millisecond per NPC on GPU. No ML framework at runtime. Just batched matrix multiplications running in parallel across thousands of cores.

~2 MB

Total Footprint

The entire trained model ships as a single JSON file. Weights shared across all NPCs. Memory per character is roughly 100 bytes of state.
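The per-character cost stays small because only token IDs and counters are unique to each NPC; the weights load once and are shared. A toy illustration with hypothetical field names:

```typescript
// Hypothetical per-NPC state: a handful of small integers per character,
// while the trained weights live in one shared model object.
interface NpcState {
  location: number;   // token ID
  goal: number;       // token ID, rewritten by the model itself
  inventory: number;  // carried item count
  lastAction: number; // token ID of the previous decision
}

// Packed as 16-bit integers, 10,000 NPCs of this shape fit in roughly 80 KB,
// comfortably inside the ~100-bytes-per-character budget quoted above.
const NPC_COUNT = 10_000;
const FIELDS = 4;
const pool = new Int16Array(NPC_COUNT * FIELDS);
```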

Tens of thousands. Simultaneously.

Traditional AI systems collapse with scale. Each behavior tree is a CPU thread. Neural policy inference is a matrix operation — and GPUs are built for exactly that.

RimWorld Colony: 20 NPCs
MMO Town: 500 NPCs
City Simulation: 5,000 NPCs
Open World: 10,000 NPCs
Ima Val (GPU): 40,000+ NPCs
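The scaling numbers above follow from batching: every NPC's state vector can be stacked into one matrix and scored in a single pass. A CPU sketch of the idea follows; on a GPU the same operation becomes one batched kernel dispatch rather than a loop.

```typescript
// Batching sketch: states is [numNPCs][dim], w is [dim][numActions].
// Conceptually this is one matrix-matrix multiply followed by a per-row argmax.
function decideBatch(states: number[][], w: number[][]): number[] {
  const numActions = w[0].length;
  return states.map(row => {
    const logits = new Array(numActions).fill(0);
    for (let i = 0; i < w.length; i++)
      for (let j = 0; j < numActions; j++) logits[j] += row[i] * w[i][j];
    return logits.indexOf(Math.max(...logits)); // one chosen action per NPC
  });
}
```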

A brain factory for game developers.

Train custom neural NPC intelligence for any game. Define behavior through examples, not code. Deploy a 2MB file that gives every character in your world the ability to think for itself.

🧠

Any Genre

RPG companions, RTS units, survival NPCs, city citizens, enemy tactics

🔧

Any Engine

Unity, Unreal, Godot, or custom

🎭

Personality

Different training data produces a different character. Personality comes from the learned weights, not from hand-tuned parameters

📈

Scales With Hardware

More GPU cores = more NPCs. The architecture is future-proof by design

The future of game AI is learned, not scripted.

We're looking for partners who want to ship it first.

Get in Touch