The terminal is the UI
No web app, no electron, no daemon. Just `./secorizon` and you're talking to your model. Bracketed paste, raw-mode TTY, arrow-key history — the things you'd expect.
A terminal-native AI shell with shell access, methodology playbooks, and zero patience for cloud-AI condescension about whether you're authorized. Single binary, local model via Ollama, no telemetry, no rate-limited API.
The need was already obvious to anyone who's stared at a cloud LLM saying "I can't help with that" mid-engagement. SecorizonAI is what happens when you build the agent the security industry actually needs.
Commands from the AI's tool-use loop run on your machine, in your shell, with your privileges. The agent does the work — it doesn't just tell you what to type.
Plain markdown files define identity, rules, and workflow. Pentest playbooks for recon, web, code review, exploit dev. Edit, restart, redeploy in seconds.
All inference happens on your hardware via Ollama. No cloud round-trip, no telemetry, no rate-limited API. Your engagement data never leaves the box.
Every model response is a small JSON object — text for you, an optional shell command, a status flag. The shell parses it, runs the command, feeds the output back as the next user turn, and the loop continues. Simple, transparent, and trivially extensible.
```json
{
  "text": "Checking what's listening on the target's edge...",
  "command": "curl -sI https://target.example",
  "search": "",
  "status": "continue"
}
```

Cert transparency, DNS, HTTP fingerprinting, takeover candidates, exposed admin surfaces — chained reasoning, not a pipeline.
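The loop behind that JSON is easy to reconstruct. Here is a minimal sketch in Go (the language the binary is built in); `Turn`, `runLoop`, and the canned `callModel` closure are illustrative names, not the actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Turn mirrors the JSON schema shown above; field names match the example.
type Turn struct {
	Text    string `json:"text"`
	Command string `json:"command"`
	Search  string `json:"search"`
	Status  string `json:"status"`
}

// runLoop drives one session: parse the model's JSON, show the text,
// run the command in the user's shell, feed the output back as the
// next user turn, repeat until status != "continue". callModel stands
// in for the real Ollama call.
func runLoop(callModel func(history []string) string, prompt string) []string {
	history := []string{prompt}
	for {
		var turn Turn
		if err := json.Unmarshal([]byte(callModel(history)), &turn); err != nil {
			fmt.Println("malformed model reply:", err)
			return history
		}
		fmt.Println(turn.Text) // the human-facing text
		if turn.Command != "" {
			// your machine, your shell, your privileges
			out, _ := exec.Command("sh", "-c", turn.Command).CombinedOutput()
			history = append(history, string(out)) // fed back as the next turn
		}
		if turn.Status != "continue" {
			return history
		}
	}
}

func main() {
	// Canned model: one command turn, then done.
	calls := 0
	fake := func(history []string) string {
		calls++
		if calls == 1 {
			return `{"text":"Checking...","command":"echo hello","search":"","status":"continue"}`
		}
		return `{"text":"All done.","command":"","search":"","status":"done"}`
	}
	runLoop(fake, "map the target")
}
```

Swapping the canned closure for an HTTP call to Ollama's chat endpoint is all it takes to make this a real agent, which is what "trivially extensible" means in practice.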
Multi-file review with attacker mindset. Spots logic flaws, race conditions, deserialization sinks, auth bypasses that linters miss.
Stealthy network asset mapping, IPv6, NBT-NS / LLMNR / MDNS poisoning on selected targets, hash cracking. Chains into NTLM relay, ESC, and beyond.
Crash triage, ROP chain reasoning, primitive identification, PoC drafting. Pairs naturally with gdb/pwndbg sessions.
Subdomain → tech → vuln → PoC → H1 writeup, all in one shell. The agent drafts the report; you sign off.
Plain-markdown system prompt + guides means you can retarget the same chassis at legal research, financial analysis, or whatever your work demands.
```sh
# 1. Install Ollama (https://ollama.com)
curl -fsSL https://ollama.com/install.sh | sh

# 2. Pull a JSON-mode-friendly instruct model
ollama pull <your-model>:tag

# 3. Build the chat binary
go build -o secorizon ./src/chat.go

# 4. Drop in your system prompt
mkdir -p ~/.secorizon
$EDITOR ~/.secorizon/SECORIZON.md

# 5. Run it
SECORIZON_MODEL=<your-model>:tag ./secorizon
```
The system prompt is plain markdown — identity, rules, workflow protocol. Methodology playbooks live in ~/.secorizon/guides/ and load on demand via slash commands.
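As a purely hypothetical illustration of that layout (the headings and wording below are invented, not the shipped prompt), a minimal ~/.secorizon/SECORIZON.md might look like:

```markdown
# Identity
You are Secorizon, a terminal-native security agent running on the
operator's own machine.

# Rules
- Reply with a single JSON object: "text", "command", "search", "status".
- Set "status" to "continue" only when you need another tool turn.

# Workflow
1. Confirm scope before touching a target.
2. Enumerate, verify, then escalate.
3. Load a playbook from guides/ when the engagement type is clear.
```

Because it's plain markdown, retargeting the agent is an edit-and-restart cycle, not a code change.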
License the heart of SecorizonAI — the system prompt and methodology guides — and run it on your own infrastructure. Or have us apply it to specific targets on a pay-per-asset basis.
We run SecorizonAI against your scope and deliver a findings report with PoCs and remediation. No retainer, no minimum.
Own the brain of SecorizonAI — the system prompt and methodology guides. Run it on your own infrastructure, your own model, your own engagements.
Larger scopes, retained engagements, or bespoke methodologies — get in touch.
Tell us your scope. We'll come back with a quote, a timeline, and a clear answer on which plan fits your situation.