Zubot

Motivation

Zubot started as a way for me to understand agent systems at a deeper level than prompt chaining. I had already used tools like Claude Code and looked closely at projects like OpenClaw, but I wanted to build the runtime myself so I could learn the full stack end to end.

The main goal was control and safety: I wanted to decide exactly which tools can run, how tasks get scheduled, how runs are resumed, and how memory is persisted. That made this project both a practical automation system and a long-term systems design exercise.

Project Overview

Zubot is an agent runtime that orchestrates scheduled and interactive LLM workflows. It combines a run queue, task scheduling, tool abstraction, and structured memory into one framework so automations can run repeatedly without turning into ad-hoc scripts.

Instead of treating an agent as a single chat loop, I built Zubot as a control layer between models and tools. That separation makes behavior more predictable, keeps execution state durable, and allows approval gates for actions that should never run silently.

Architecture

At the center is a SQLite-backed service that owns the run queue and task slots. Runs are queued, claimed, and tracked through explicit states, with a no-overlap guarantee for runs of the same task profile. Interactive runs support pause/resume lifecycles, so human input can be gated into the workflow without breaking run integrity.
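
As a rough illustration (not the actual Zubot schema; table, column, and state names are assumptions), the no-overlap rule can be expressed as a partial unique index, and claiming can be a single UPDATE ... RETURNING statement so the queue never hands the same run out twice:

import sqlite3

conn = sqlite3.connect("zubot.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS runs (
    id           INTEGER PRIMARY KEY,
    task_profile TEXT NOT NULL,
    state        TEXT NOT NULL CHECK (state IN
                 ('queued', 'claimed', 'running', 'paused', 'done', 'failed')),
    created_at   TEXT NOT NULL DEFAULT (datetime('now'))
);
-- No-overlap guarantee: at most one non-terminal run per task profile.
CREATE UNIQUE INDEX IF NOT EXISTS one_active_run_per_profile
    ON runs (task_profile)
    WHERE state IN ('queued', 'claimed', 'running', 'paused');
""")

def claim_next_run(conn: sqlite3.Connection):
    """Atomically move the oldest queued run to 'claimed' and return its id."""
    with conn:  # wraps the claim in a transaction, committed on success
        row = conn.execute(
            "UPDATE runs SET state = 'claimed' "
            "WHERE id = (SELECT id FROM runs WHERE state = 'queued' "
            "            ORDER BY id LIMIT 1) "
            "RETURNING id"  # RETURNING needs SQLite 3.35+
        ).fetchone()
    return row[0] if row else None

With a constraint like this, inserting a second active run for the same profile fails at the database level instead of relying on application-side checks.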

A heartbeat scheduler decides what should run and when. It evaluates task definitions, applies misfire behavior, and enqueues due runs deterministically, so clock drift or process restarts never trigger duplicate execution.
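
A hedged sketch of what one heartbeat tick could look like; TaskDef, the misfire constants, and the enqueue callback are illustrative names, not Zubot's real API:

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MISFIRE_SKIP = "skip"          # drop fires that are long overdue
MISFIRE_RUN_ONCE = "run_once"  # coalesce all missed fires into one run

@dataclass
class TaskDef:
    profile: str
    interval: timedelta
    next_due: datetime
    misfire: str = MISFIRE_RUN_ONCE
    misfire_grace: timedelta = timedelta(minutes=5)

def tick(defs: list[TaskDef], enqueue, now: datetime | None = None) -> None:
    """Enqueue each due task at most once and advance next_due deterministically."""
    now = now or datetime.now(timezone.utc)
    for d in defs:
        if d.next_due > now:
            continue
        overdue = now - d.next_due
        if d.misfire == MISFIRE_SKIP and overdue > d.misfire_grace:
            pass  # missed by too much: skip this fire entirely
        else:
            enqueue(d.profile)  # the queue itself rejects overlapping runs
        # Always advance past `now`, so a restart or a slow clock never
        # re-fires the same slot twice.
        while d.next_due <= now:
            d.next_due += d.interval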

Tools are exposed through a registry rather than called directly from agent logic. That layer handles filesystem policies, provider-level serialization for rate-limited integrations, and approval-gated control requests for high-impact actions.
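
The registry idea, sketched with assumed names (ToolRegistry, the request_approval callback, and the per-provider locks are illustrations, not the actual interface):

import threading
from typing import Callable

class ToolRegistry:
    def __init__(self, request_approval: Callable[[str, dict], bool]):
        self._tools: dict[str, dict] = {}
        self._provider_locks: dict[str, threading.Lock] = {}
        self._request_approval = request_approval  # e.g. ask a human to confirm

    def register(self, name: str, fn: Callable, *, provider: str | None = None,
                 needs_approval: bool = False) -> None:
        self._tools[name] = {"fn": fn, "provider": provider,
                             "needs_approval": needs_approval}
        if provider:
            self._provider_locks.setdefault(provider, threading.Lock())

    def call(self, name: str, **kwargs):
        tool = self._tools[name]
        if tool["needs_approval"] and not self._request_approval(name, kwargs):
            raise PermissionError(f"approval denied for tool {name!r}")
        if tool["provider"]:
            # Serialize calls per provider so rate-limited APIs see one request at a time.
            with self._provider_locks[tool["provider"]]:
                return tool["fn"](**kwargs)
        return tool["fn"](**kwargs)

Agent logic only ever goes through the registry's call path, so policy checks cannot be bypassed by invoking a tool function directly.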

Memory is modeled as structured state, not just transcript replay. Events are persisted, summarized by day, and retrieved in bounded form so prompts stay focused while still carrying useful long-term context.
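
For example, bounded context assembly might look something like this (table and column names are assumptions): a fixed number of daily summaries plus a capped tail of recent raw events.

import sqlite3

def assemble_context(conn: sqlite3.Connection,
                     summary_days: int = 7, recent_events: int = 20) -> str:
    """Build a prompt context that stays bounded regardless of total history size."""
    summaries = conn.execute(
        "SELECT day, summary FROM daily_summaries ORDER BY day DESC LIMIT ?",
        (summary_days,),
    ).fetchall()
    events = conn.execute(
        "SELECT created_at, kind, payload FROM events ORDER BY id DESC LIMIT ?",
        (recent_events,),
    ).fetchall()
    parts = [f"[{day}] {summary}" for day, summary in reversed(summaries)]
    parts += [f"{ts} {kind}: {payload}" for ts, kind, payload in reversed(events)]
    return "\n".join(parts)  # at most summary_days + recent_events lines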

Design Choices & Tradeoffs

I chose a single-process central service on purpose. Distributed workers are powerful, but at this stage they add coordination complexity and opportunities for race conditions that are easy to get wrong. Keeping one authority process made queue semantics and concurrency limits easier to reason about.

Explicit run queues and fixed task slots were another deliberate choice. They prevent runaway parallel execution and make the runtime behavior inspectable. That also makes retries, timeouts, and cancellation rules easier to apply consistently.
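
Conceptually, the slot limit behaves like a bounded semaphore. The snippet below illustrates the idea only; the value of MAX_SLOTS and the run_in_slot helper are assumptions, not the runtime's actual scheduler code.

import threading

MAX_SLOTS = 2  # assumed value; the real limit is a runtime setting
_slots = threading.BoundedSemaphore(MAX_SLOTS)

def run_in_slot(run_fn) -> bool:
    """Execute a claimed run only if a slot is free; otherwise leave it queued."""
    if not _slots.acquire(blocking=False):
        return False  # no free slot: the run waits for the next scheduling pass
    try:
        run_fn()
        return True
    finally:
        _slots.release()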

I also avoided raw memory blobs where possible and leaned into structured state tables and daily summarization jobs. The tradeoff is more schema work up front, but the payoff is better idempotency, clearer debugging, and bounded context assembly during longer-running automations.
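
A sketch of what an idempotent daily summarization job could look like, assuming a daily_summaries table with a unique day column and a summarize callback (for example, an LLM call); none of these names are the real schema:

import sqlite3

def summarize_day(conn: sqlite3.Connection, day: str, summarize) -> None:
    """Summarize one day's events; re-running the same day overwrites, not appends."""
    events = conn.execute(
        "SELECT payload FROM events WHERE date(created_at) = ? ORDER BY id",
        (day,),
    ).fetchall()
    summary = summarize([payload for (payload,) in events])
    with conn:
        conn.execute(
            "INSERT INTO daily_summaries (day, summary) VALUES (?, ?) "
            "ON CONFLICT(day) DO UPDATE SET summary = excluded.summary",
            (day, summary),
        )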

Planned Evolution

The next major step is a remote control plane backed by Postgres and deployed on Railway, covering metadata, thread storage, and remote control APIs. The current plan keeps execution on the local host and uses a reverse-polling model for safer remote invocation.
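
The reverse-polling idea, sketched (the endpoint URL, payload shape, and handle_command callback are placeholders, since this part is still planned): the local host pulls pending commands from the control plane instead of exposing an inbound port.

import json
import time
import urllib.request

CONTROL_PLANE = "https://example.invalid/api/pending-commands"  # placeholder URL

def poll_loop(handle_command, interval_s: float = 5.0) -> None:
    """Poll the remote control plane and execute commands locally, under local policy."""
    while True:
        try:
            with urllib.request.urlopen(CONTROL_PLANE, timeout=10) as resp:
                for cmd in json.load(resp):
                    handle_command(cmd)
        except OSError:
            pass  # control plane unreachable; local execution is unaffected
        time.sleep(interval_s)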

I also plan to add vector-backed long-term memory using pgvector so Zubot can do better cross-session recall and assemble context from semantically relevant historical artifacts instead of relying mostly on recent-day summaries.
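
As a sketch of the planned recall path (the table, column, and embedding source are assumptions; conn is a psycopg2 connection to a Postgres instance with the pgvector extension installed), ranking stored artifacts by cosine distance to a query embedding could look like:

def recall(conn, query_embedding, k: int = 5) -> list[str]:
    """Return the k stored artifacts nearest to the query embedding."""
    vec = "[" + ",".join(f"{x:g}" for x in query_embedding) + "]"
    with conn.cursor() as cur:
        # <=> is pgvector's cosine-distance operator.
        cur.execute(
            "SELECT content FROM memory_artifacts "
            "ORDER BY embedding <=> %s::vector LIMIT %s",
            (vec, k),
        )
        return [row[0] for row in cur.fetchall()]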

Why This Project Matters

Zubot reflects a shift from isolated AI features to runtime engineering around AI systems. The project is where I experiment with determinism, tool governance, memory design, and automation infrastructure in a way that is practical enough to use day to day.

Source

GitHub: zubinjha/Zubot

Status: Actively evolving as a long-term platform for reliable AI automation workflows.