
Zenii

v0.0.19 · MIT · Rust 2024

20 megabytes. AI everywhere.

Install one binary. Now your scripts have AI memory. Your cron jobs reason. Your Telegram bot thinks. A private AI backend for everything on your machine — with a native desktop app, plugins in any language, and an API your curl can call. Just Rust.

terminal
$ curl http://localhost:18981/health
{"status": "ok"}
$ curl -X POST http://localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id": "abc", "prompt": "What files changed today?"}'
18 AI providers — or bring your own via Ollama
A full API your curl can call
Plugins in any language — Python, Go, JS, whatever you write
An AI that remembers what you told it last month
Your data never leaves your machine
A real desktop app — not a browser wearing a disguise
Security that's on by default, not an afterthought
20 MB. Just Rust.
MIT licensed

ChatGPT is a tab you open. Zenii is a capability your machine gains.

96 API Routes
18 AI Providers
<20 MB Binary Size

What Zenii is NOT

  • Not a chatbot wrapper: it's a full API backend
  • Not Electron: native Tauri 2, under 20 MB
  • Not a framework you learn: it's infrastructure you call via curl
  • Not cloud-dependent: runs fully offline with Ollama
  • Not opinionated about your stack: any language, any tool, JSON over HTTP

Why Zenii?

5 tools to do one job

The status quo is duct tape.

  • Conversations disappear after each session
  • No API — just a chat interface
  • Locked to one language ecosystem
  • Your data trains someone else's model

1 binary. 96 routes.

Everything you need, nothing you don't.

  • Semantic memory persists across restarts
  • 96 REST + WebSocket routes
  • Plugins in any language via JSON-RPC
  • 100% local. Zero telemetry.

Single Rust binary. Zero telemetry. Your data stays local.

Download Zenii v0.0.19

Choose the right installer for your platform.

Full GUI application with native window

Your pain. Our fix.

Your pain | How Zenii fixes it
Context resets every AI session | Semantic memory persists across sessions and survives restarts
AI can't do things, only talk | 16 built-in tools: web search, file ops, shell, scheduling
Locked into one AI provider | 18 providers, switch with one config change
AI tools are cloud-only | 100% local, zero telemetry, OS keyring for secrets
"Works on my machine" for AI | Same binary on macOS, Linux, Windows — desktop, CLI, or daemon

Built for real desktop AI work

Not a chatbot. An API server.

96 routes. curl localhost:18981. Your scripts, cron jobs, and browser extensions all get AI — no SDK required.

A real desktop app. Not Electron.

Tauri 2 + Svelte 5. Under 20 MB binary, under 50 MB idle RAM. A native app that respects your machine.

Write plugins in Python, Go, JS — or anything.

JSON-RPC 2.0 over stdio. Any language that reads stdin and writes stdout works. Plugins are first-class citizens.
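As a concrete illustration, here is a minimal plugin sketch in Python. The transport (JSON-RPC 2.0, one request per stdin line, one response per stdout line) follows the description above; the `greet` method, its params, and the dispatch shape are illustrative assumptions, not Zenii's actual tool contract or manifest format.

```python
#!/usr/bin/env python3
"""Sketch of a stdio plugin: JSON-RPC 2.0 in on stdin, out on stdout."""
import json
import sys


def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a (hypothetical) tool implementation."""
    if request.get("method") == "greet":
        name = request.get("params", {}).get("name", "world")
        return {"jsonrpc": "2.0", "id": request.get("id"),
                "result": f"hello, {name}"}
    # Standard JSON-RPC "method not found" error for anything else.
    return {"jsonrpc": "2.0", "id": request.get("id"),
            "error": {"code": -32601, "message": "method not found"}}


if __name__ == "__main__":
    # One JSON-RPC request per line in, one response per line out.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)
```

The same loop works in any language that can read stdin and write stdout, which is the whole point: no SDK, just line-delimited JSON.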

It remembers. Across sessions, across restarts.

SQLite FTS5 + vector search. Conversations, tool results, and context survive restarts. Your AI gets better over time.
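To see the full-text half of that mechanism in action, the sketch below uses SQLite's FTS5 directly from Python. The table and column names are invented for the demo and are not Zenii's real schema; Zenii also persists to disk (not `:memory:`), which is what lets recall survive restarts.

```python
import sqlite3

# Demo of the FTS5 building block behind full-text recall.
# Schema here is made up; Zenii's actual tables differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(session, content)")
db.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [("abc", "we chose tokio for the async runtime"),
     ("abc", "the deploy script lives in scripts/deploy.sh"),
     ("xyz", "grocery list: eggs, rice")],
)
# Rank-ordered full-text recall of a past conversation detail.
rows = db.execute(
    "SELECT content FROM notes WHERE notes MATCH ? ORDER BY rank",
    ("async runtime",),
).fetchall()
print(rows[0][0])  # → we chose tokio for the async runtime
```

Vector search adds the semantic half on top of this: instead of matching tokens, embeddings let "which runtime did we pick?" find the same row.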

Gets smarter over time. Asks before changing.

Self-evolving agent capabilities with human-approved proposals. The AI learns your preferences and grows its skills — with your permission.

Security is architecture, not a checkbox.

6 layers active by default: OS keyring, autonomy controls, filesystem sandboxing, injection detection, rate limiting, and full audit trail.

Code examples

Integrate from any language. No SDK required.

# Health check
curl localhost:18981/health
# → {"status": "ok"}
 
# Create a chat session
SESSION=$(curl -s -X POST localhost:18981/sessions \
  -H "Content-Type: application/json" \
  -d '{"title": "my-project"}' | jq -r '.id')
 
# Send a message
curl -X POST localhost:18981/sessions/$SESSION/messages \
  -H "Content-Type: application/json" \
  -d '{"role": "user", "content": "What tools do you have available?"}'
 
# Chat with the agent (non-streaming)
curl -X POST localhost:18981/chat \
  -H "Content-Type: application/json" \
  -d '{"session_id": "'$SESSION'", "prompt": "Search the web for Rust async patterns"}'

Where Zenii fits

Feature | Zenii | OpenClaw | ZeroClaw
Category | AI backend | Chat agent | Minimal daemon
Language | Rust | TypeScript | Rust
Binary | <20 MB (w/ GUI) | ~100 MB+ | ~3.4 MB
Desktop GUI | Native (Tauri 2) | |
API Routes | 96 REST+WS | Chat endpoint | Daemon endpoint
Plugins | Any language | JS only | Rust only
Memory | FTS5 + vectors | File-based | Basic
Self-Evolution | Human-approved | Autonomous |
Scheduling | Cron + one-shot | Cron |
Security | 6 layers default | Optional sandbox | Privacy claims
License | MIT | Open source | Open source

6-layer security defense

  • Credentials: OS keyring with zeroize memory protection. No plaintext secrets on disk.
  • Autonomy: three modes (Supervised, Autonomous, Strict), configurable per-session.
  • Filesystem: allowlist/blocklist path rules. Agents cannot access paths outside configured boundaries.
  • Injection: prompt injection detection heuristics applied before every LLM call.
  • Rate Limiting: per-provider and per-tool rate limits prevent runaway costs and abuse.
  • Audit Trail: every tool invocation, LLM call, and file access is logged to a local audit database.

Autonomy Modes

Strict

Agent operates within tightly constrained boundaries. Minimal autonomy, maximum oversight.

Supervised

Agent proposes actions. User confirms or rejects each one before execution.

Autonomous

Agent executes autonomously within configured boundaries. Use with caution.

All data stays on your machine. No telemetry, no cloud sync, no account required. Zenii is v0.0.19, actively developed.

Built for how you actually work

Personal Knowledge Assistant

An AI that remembers what you told it last month. FTS5 + vector search means past conversations inform future ones. Your data never leaves your machine.

DevOps Automation Hub

A full API your curl can call — the AI equivalent of a well-documented Unix tool. Built-in cron scheduler — no external orchestration needed.
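As a sketch of what a scheduled job looks like, the script below builds the same `POST /chat` request shown in the code examples above, using only the Python standard library. The endpoint, port, and payload fields come from this page; the session id, prompt, and the idea of running it nightly are illustrative, and the actual network send is left commented out so the script stands alone.

```python
import json
import urllib.request

ZENII = "http://localhost:18981"  # default port from the docs


def chat_request(session_id: str, prompt: str) -> urllib.request.Request:
    """Build the POST /chat request a cron job would send to Zenii."""
    body = json.dumps({"session_id": session_id, "prompt": prompt}).encode()
    return urllib.request.Request(
        f"{ZENII}/chat", data=body,
        headers={"Content-Type": "application/json"}, method="POST")


req = chat_request("nightly", "Summarize today's system logs")
print(req.full_url)  # http://localhost:18981/chat
# A real job would finish with:
#   with urllib.request.urlopen(req) as resp:
#       print(resp.read().decode())
```

Because it's plain HTTP with JSON, the same request works equally well from curl in a crontab line or from Zenii's own built-in scheduler.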

Private Coding Assistant

Ollama integration for 100% offline operation. An AI that remembers project structure, conventions, and past discussions across restarts.

Multi-Provider AI Router

18 AI providers managed through one gateway — or bring your own via Ollama. Switch models per-task without code changes. OS keyring stores all credentials securely.

Plugin Developer Platform

Plugins in any language — Python, Go, JS, whatever you write. JSON-RPC 2.0 over stdio. Their tools appear identically to built-in ones.

Get started in minutes

  1. Go to the download section above and grab the installer for your platform.
  2. Run the installer (.dmg for macOS, .msi for Windows, .deb/.rpm for Linux).
  3. Launch Zenii from your applications menu.
  4. Configure your first LLM provider in Settings.

Frequently asked questions

About Zenii

Zenii (pronounced "ZEN-ee-eye", /ˈzɛn.iː.aɪ/) is a portmanteau of Zen — the Japanese philosophy of calm mastery and elegant simplicity — and genii, the Latin plural of genius, meaning guardian spirits or innate intelligence. Together, it captures what the project is: calm, minimal, powerful AI that quietly runs on your machine. The double-i ending also nods to AI itself — artificial intelligence baked into the name. 20 MB of serene genius.

Zenii is at v0.0.19 under active development with comprehensive test coverage, zero clippy warnings, and 6-layer security. It's stable for personal and development use. Check the GitHub roadmap for enterprise readiness milestones.

Using Zenii

Yes. Zenii runs 100% on your machine with no cloud dependency. If you use a local model (Ollama, llama.cpp, LM Studio), nothing ever leaves your network. Cloud providers like OpenAI or Anthropic are optional — you connect them with your own API key only if you want to.

18 providers out of the box: OpenAI, Anthropic, Google Gemini, Mistral, Cohere, Groq, Together AI, Fireworks AI, Perplexity, DeepSeek, xAI Grok, Ollama, llama.cpp, LM Studio, OpenRouter, Azure OpenAI, AWS Bedrock, and any OpenAI-compatible endpoint. Bring your own API key — no Zenii account required.

Everything lives in a single folder on your machine — SQLite database, vector embeddings, configuration, and logs. Nothing is synced to the cloud. You can back it up, move it, or delete it like any other folder. On macOS: ~/Library/Application Support/Zenii, on Linux: ~/.local/share/zenii, on Windows: %APPDATA%\\Zenii.
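For scripting a backup or migration, those per-OS locations can be collected into a small lookup, using the paths exactly as documented above (the helper function itself is just an illustrative convenience):

```python
import platform

# Per-OS Zenii data locations, as documented (paths left unexpanded).
ZENII_DATA_DIRS = {
    "Darwin": "~/Library/Application Support/Zenii",
    "Linux": "~/.local/share/zenii",
    "Windows": r"%APPDATA%\Zenii",
}


def zenii_data_dir() -> str:
    """Return the documented Zenii data folder for the current OS."""
    return ZENII_DATA_DIRS[platform.system()]
```

Archiving that one folder captures the SQLite database, embeddings, configuration, and logs in a single step.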

No. Zenii is MIT-licensed and completely free. There is no account, no sign-up, no telemetry, and no subscription. You just download it and run it. The only cost is whatever your chosen LLM provider charges for API usage — and if you use a local model, even that is zero.

Plugins communicate over JSON-RPC via stdio — write them in any language that can read stdin and write stdout. A plugin declares its tools in a manifest, Zenii launches it as a subprocess, and your conversations or API calls can invoke those tools. No SDK required, no language lock-in. Ship a Python script, a Go binary, or a shell script.

Comparisons

OpenClaw is for chatting with AI — 50+ messaging integrations and self-extending skills. Zenii is for building with AI — a local backend your scripts, bots, and cron jobs call. Different goals. They can even work together.

ZeroClaw is a minimalist daemon — same Rust DNA, same privacy values. Zenii trades 3.4 MB for a desktop GUI, vector memory, a plugin system in any language, and scheduled automation. Think of ZeroClaw as the engine, Zenii as the car.