April 28, 2026 • qnib • 7 min read
OpenSpec: Starting Config
100% human-written — no AI tools are used to write these posts.
In the last blog post (Path towards Comprehension Debt) I teased that I am using OpenSpec to run my development. This blog post is going to dive a bit deeper into my workflow and how I use OpenCode.
OpenCode is an open-source GUI (and TUI) for running a coding agent on your computer. It’s similar to Claude Code, Cursor, etc. - but it is not tied to any LLM or service provider. That’s what actually won me over. At the end of the day I’d like to use my own models to run my coding tasks. I have a 128GB StrixHalo box in my homelab that should eventually run the LLMs needed.
My setup has evolved over the last couple of weeks and will keep evolving - but I hope that the concepts around security, session isolation, and the initial tooling will last longer than summer 2026. We’ll judge at the end of 2026.
Git Worktrees
Something I started using recently is git worktrees. They have been around for a while, but I never felt the urge to use them. Until now, with agentic workloads.
I have four repos for Metahub (lib, cli, api, ui - the mapping is obvious) and I might run two different workstreams across them (b/c the LLM is not at 1000 tokens/sec yet):
- UI improvements
- CLI feature
I need a way of running both (what I call) sessions at the same time. Usually I would have a single place each git repo is cloned to, and that’s it.
What I am doing now is creating a git worktree within a session directory (all of that is automated, of course, using Taskfiles).
Within that session directory I run my OpenCode TUI. This way I can have two OpenCode processes with two distinct sessions going.
.
├── sessions
│ └── ui-and-manipulation
│ ├── AGENTS.md
│ ├── CONTEXT.md
│ ├── metahub-api <- API
│ ├── metahub-lib <- LIB
│ ├── metahub-ui <- UI
│ ├── metahub-cli <- CLI
│ ├── opencode.json
│ ├── opencode.sh
│ └── Taskfile.yml
With that I can jump into the ui-and-manipulation session, start opencode, and off I go.
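My Taskfile boils down to a few `git worktree` calls. Here is a minimal sketch of the idea - the paths, the stand-in repo, and the `session/` branch-naming scheme are illustrative, and the snippet sets up a throwaway repo in a temp dir so it runs self-contained:

```shell
#!/usr/bin/env bash
set -euo pipefail
cd "$(mktemp -d)"

# Stand-in for one of the upstream clones (normally metahub-lib etc.).
mkdir -p repos
git init -q repos/metahub-lib
git -C repos/metahub-lib -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -q -m "initial"

SESSION=ui-and-manipulation
mkdir -p "sessions/${SESSION}"

# One worktree per repo, each on its own session branch; the clone in
# repos/ stays untouched, and a second session gets its own worktrees.
git -C repos/metahub-lib worktree add \
    -b "session/${SESSION}" \
    "../../sessions/${SESSION}/metahub-lib"
```

Running the same loop for all four repos gives every session its own set of checkouts, while all of them share the object store of the original clones - cheap on disk and safe against two agents stepping on each other’s files.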
Context
An LLM in its pure form is stateless. For it to “know” things, it keeps a blob of text called the context. In its simplest form the context might simply hold all the messages - the user’s prompts and the agent’s replies.
If you ask a question like What do you know about me?, the context might hold the previous messages like this:
## Context
The last messages were:
- My name is Christian
- I live in Berlin
## User Prompt
What do you know about me?
This blob of text allows the LLM to mimic human conversation and appear smart.
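As a toy illustration (this is not OpenCode’s actual wire format, just the concept), assembling such a context is little more than string concatenation - the “memory” is earlier messages pasted in front of the new prompt:

```shell
#!/usr/bin/env bash
# Illustrative only: build a context blob from prior messages + new prompt.
set -euo pipefail

history=$'- My name is Christian\n- I live in Berlin'
prompt='What do you know about me?'

# Everything the model "knows" travels inside this one string.
context_blob="## Context
The last messages were:
${history}

## User Prompt
${prompt}"

printf '%s\n' "${context_blob}"
```

Every turn, the whole blob is sent again - which is exactly why context size (and what you let into it) matters so much.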
Agents.md
To create a baseline context, agents read a file called AGENTS.md (for Claude Code it’s CLAUDE.md). It holds information about the repository or the intent of the user, and it is put into the context of each agent and sub-agent.
My agent quickly took control of my AGENTS.md, tbh, but it started with something like:
The goal of the development is to work on four repos at the same time and keep them in lock-step for the duration of the session:
- repo/lib: common repository with the bulk of the business logic of the tool
- repo/api: api repo which uses repo/lib extensively
- repo/cli: command line interface for admin jobs
- repo/ui: frontend of the tool
All of the repos are mapped into the session directory. Only work on these git worktrees and not the upstream repos.
### Tools
Use the MCP servers to do inspection instead of trying it with bash
- neo4j-main: metahub-dev instance which is tied to the GoCD lifecycle
- neo4j-local: compose neo4j instance for fast local iterations
- jaeger-local: compose jaeger instance to check traces of metahub-api interactions.
I tried to start small; as of writing, the AGENTS.md has 160 lines.
OpenCode Config
So far I have used Anthropic models through the opencode-anthropic-auth plugin. The latest models work well for me and the costs are bearable for what I get. Anthropic (and the rest of the bunch as well) tries to box you into their tooling - so far I have been able to overcome it every time.
While I understand that a company can make token usage much more efficient by controlling the application the user uses, I do not like being boxed into one app. Call me old-fashioned.
My config file looks like this:
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "obsidian": {
      "type": "local",
      "command": [
        "node",
        "${HOME}/.npm/_npx/XYZ/node_modules/@bitbonsai/mcpvault/dist/server.js",
        "${HOME}/Library/Mobile Documents/iCloud~md~obsidian/Documents/QnibObsidian"
      ],
      "enabled": true
    },
    "context7": {
      "type": "remote",
      "url": "https://mcp.context7.com/mcp",
      "enabled": true,
      "headers": {
        "CONTEXT7_API_KEY": "api-key"
      }
    },
    "sonarqube": {
      "type": "local",
      "command": [
        "docker", "run", .....
      ],
      "enabled": true
    },
    "neo4j-main": {
      "type": "local",
      "command": [
        "neo4j-mcp", "--neo4j-uri", ....
      ]
    },
    "gocd": {
      "type": "remote",
      "url": "http://localhost:3003/mcp"
    }
  },
  "plugin": [
    "@ex-machina/opencode-anthropic-auth@1.7.5"
  ]
}
It would take too long to dive into all the MCPs, but here’s a brief overview: MCP (Model Context Protocol) servers are little helper services for different aspects of the workflow. With MCPs, your context is not filled with the intermediate steps of fetching information from another API - each server exposes a set of tools instead. A tool (like `show the coverage report in SonarQube for project XYZ and branch ABC`) will go off, gather the information, and return a single result.
I don’t think I am using them to their full extent (yet), but it’s a good start:
- obsidian: My latest addition, but (I think) one of the most important. Obsidian is a personal wiki that organizes information in Markdown files and folders. You can link notes, create todos, diagrams - endless possibilities. I use the Obsidian Web Clipper to capture websites offline in Obsidian, to be picked up by something (I guess OpenClaw this week). Within OpenCode I am using it to create (and update!) documentation about what is going on. That helps me keep track of changes and helps the next session in OpenCode, as I can just point to Obsidian for context.
- context7: My first MCP that stuck. I see the agent fetch documentation from context7 every so often. It seems to help - even though I’d like to add OpenViking eventually.
- sonarqube, gocd, neo4j: development on Metahub involves a code-quality bench (SonarQube), CI/CD (GoCD), and a graph database (Neo4j) - instead of enduring multiple `curl` commands from the agent(s), I try to point them to the MCP servers to be more (token-)efficient and faster.
The plugin at the bottom is what keeps me out of Claude Code.
Security/Isolation
It’s April 2026 and I have not yet tried the OpenClaw madness, which caught fire in early 2026. It’s on my list for this week, since I have a talk at AgentCon in Berlin in two weeks. Until now, I was too afraid.
The idea of some agent going off on its own and doing what it wants scares me a bit. That’s why I box my OpenCode agent into safehouse. It uses macOS primitives to sandbox a process so that it is not able to access everything.
$ safehouse touch ~/temp/test123
touch: /Users/christian.kniep/temp/test123: Operation not permitted
You need to explicitly define which files (beyond the current working directory) the process is allowed to access.
$ safehouse --add-dirs=~/temp/ -- touch ~/temp/test123
To box in OpenCode I am using a script similar to the following:
#!/bin/bash
safehouse --enable=keychain --enable=docker --enable=kubectl \
  --add-dirs-ro="${HOME}/.config/opencode" \
  --add-dirs="${HOME}/.local/share/opencode" \
  --add-dirs="${HOME}/Library/Mobile Documents/iCloud~md~obsidian/Documents/QnibObsidian" \
  --add-dirs="${HOME}/.npm" \
  opencode "$@"
The process should be able to read the global OpenCode config (first --add-dirs line) and write to OpenCode’s session storage (second line). The other two lines grant access to the Obsidian vault on my Mac and to the MCP server code under ~/.npm.
I’ll need to improve on that step by step. It’s a constant learning exercise to stay up to date (or at least not too far behind).
Conclusion
Conclusion might be too powerful a word - I try to incrementally advance my setup to keep myself at the edge of my comfort zone.
The goal is to create a setup which I can use for my coding and research tasks and which is:
- secure enough not to blow up in my face (I think safehouse and local files in Time Machine are good starting points)
- flexible enough that I can try out new MCPs, plugins, and workflows
- aiming towards a memory system which lets me persist ideas and cross-reference them.
Let’s see how that evolves over 2026. It’s somewhat striking that by the end of April you already feel that a lot has happened (OpenClaw, open-weight models that become more powerful every month, how coding has changed - let alone political “challenges”).