About krowdev
A developer knowledge base for agentic coding patterns, built and documented by krow.
Background
I'm krow. My background is in physics and scientific computing — Python, Fortran, Makefiles, conda environments. I bring the same systematic methodology to software as I do to research: define constraints, test hypotheses, document results.
This site was built entirely with AI coding agents, specifically Claude Code. Not as an experiment in generating content — as a disciplined engineering process. Every component was specified, reviewed, and tested. The result: 22 entries, full-text search, and a custom theme, shipped to production in the first week, and growing.
What This Site Is
A practitioner's knowledge base. The agentic coding entries document tested patterns for working with AI coding agents — from project setup to prompt architecture to code review methodology. The domain infrastructure series covers DNS, WHOIS/RDAP, TLS fingerprinting, and bot detection with wire-level detail. The Astro entries cover the framework from first principles, written for developers coming from scientific computing rather than the JavaScript ecosystem.
Every entry is structured for both human readers and AI agents. Append .md to any URL for clean markdown. The llms.txt file indexes the full site with machine-readable instructions.
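The ".md endpoint" convention above can be sketched as a small helper. This is illustrative only: the function name is invented here, and example.com stands in for the real domain, which this page does not state.

```typescript
// Sketch of the ".md endpoint" convention: take any entry URL and
// produce its clean-markdown counterpart. Hypothetical helper, not
// part of the site's actual codebase.
function toMarkdownUrl(entryUrl: string): string {
  // Drop a trailing slash so the suffix attaches to the slug itself.
  const trimmed = entryUrl.endsWith("/") ? entryUrl.slice(0, -1) : entryUrl;
  return `${trimmed}.md`;
}
```

An agent that already has a page URL can derive the markdown endpoint without a second lookup, which is the point of keeping the convention mechanical.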
Start Here
If you want the shortest path to understanding the site, start with the entries that best represent how the work is done here. Getting Started with Agentic Coding explains the operating model. Writing an Effective CLAUDE.md shows how instruction files become executable process, not vague project prose. Reviewing AI-Generated Code is the review layer that keeps agent output from silently degrading the codebase.
For the more infrastructure-heavy side of the site, DNS Resolution: The Full Picture is the best representative guide, and How Websites Detect Bots in 2026 is the clearest example of the long-form technical analysis standard I want the articles to hit. If you prefer the project meta layer first, What I Learned Building krowdev with AI Agents is the honest retrospective: what the agent workflow made faster, what it made messier, and where the editorial control actually has to live.
- Getting Started with Agentic Coding — the baseline mental model.
- Writing an Effective CLAUDE.md — concrete repo-instruction patterns.
- Reviewing AI-Generated Code — the quality gate.
- DNS Resolution: The Full Picture — full-stack systems depth.
- How Websites Detect Bots in 2026 — high-signal article example.
How To Use This Site
The site is meant to be navigated in layers. If you want a fast answer, start in the snippets and notes. They are the short-form references: one idea, one pattern, one implementation detail. If you need a more complete model, move up to the guides. Those entries are where I try to connect the local rule to the wider system around it: why the tradeoff exists, what breaks if you ignore it, and what a durable implementation looks like in a real repo. The articles are the long-form analysis layer, where the point is less "copy this" and more "understand the system well enough to make your own call."
The structure is deliberate. Every entry has related links, tags, and a machine-readable markdown endpoint because I want the site to work as both a human knowledge base and an agent-readable corpus. That is also why the Explore page clusters material by topic and series instead of acting like a flat reverse-chronological archive. On a small site, internal linking is not decoration. It is the map that tells both readers and crawlers which pages are foundational, which ones are supporting references, and which entries should be read together. If a page here matters, it should be reachable from more than one path.
The intended audience is not "everyone interested in AI." It is developers who care about how work is actually done: engineers adopting coding agents, people coming from scientific computing into the web stack, and technically curious readers who want the implementation details rather than the marketing layer. That bias shapes the tone. I would rather publish a page that is narrower but genuinely useful than a generic overview written to satisfy a keyword cluster. If something here reads opinionated, it usually is. The point is to record tested patterns, not to pretend every tool and architecture decision is interchangeable.
Editorial Process
Every entry follows the same pipeline: research corpus, source map extraction, draft generation, automated lint scan, manual review, and publication. AI-drafted entries are generated from project research and reviewed for technical accuracy before publishing. AI-assisted entries start from human notes and are expanded or refined with AI help.
Found an error? Entries are versioned in git. Report issues via email or GitHub and corrections ship with the next build. All entries show their last-updated date.
About the Agent
AI-drafted entries on this site are written by an AI agent. Not as a ghostwriter — as the actual writer, credited as such. The patron didn't paint the Sistine Chapel. Claiming AI output as human work is self-deception. Here, the attribution is honest: agent writes, human directs.
The agent handles the research synthesis, the structural writing, and the code examples. krow handles the editorial direction: what to write, what's wrong, what stays. The origin label on every entry tells you exactly which role each played.
Authorship Transparency
Every entry is labeled with its origin: human-written, AI-assisted, or AI-drafted. You can filter by authorship on the explore page or toggle AI content on and off site-wide.
AI-drafted content tends to be generically correct but misses project-specific nuance and lived-experience edge cases. Human direction supplies the structural intent; AI execution delivers the coverage. Labeling the origin lets readers calibrate trust.
The agent view toggle on each entry shows the structured metadata that AI agents see: kind, maturity, confidence, origin, related entries, and a direct markdown endpoint. This is what radical transparency looks like in practice.
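The metadata fields listed above can be sketched as a type. The field names come from this page; the type definitions and example values are illustrative assumptions, not the site's actual schema.

```typescript
// Hypothetical shape of the per-entry metadata the agent view exposes.
// Field names are from the page; types and values are illustrative.
interface EntryMeta {
  kind: string;       // e.g. a guide, article, snippet, or note
  maturity: string;   // how settled the entry is
  confidence: string; // how tested the pattern is
  origin: "human-written" | "ai-assisted" | "ai-drafted";
  related: string[];  // slugs of related entries
  markdown: string;   // direct markdown endpoint for the entry
}

// An invented example entry showing the shape in use.
const example: EntryMeta = {
  kind: "guide",
  maturity: "stable",
  confidence: "high",
  origin: "ai-drafted",
  related: ["writing-an-effective-claude-md"],
  markdown: "/entries/getting-started-with-agentic-coding.md",
};
```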
The Stack
Astro 6 with Svelte 5 interactive islands. Catppuccin Mocha and Latte themes. Inter and JetBrains Mono typography. Pagefind search. Cloudflare Pages. Zero client-side JavaScript by default.
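A stack like this wires together in a few lines of Astro config. This is a minimal sketch assuming the official @astrojs/svelte integration, not the site's actual configuration file.

```typescript
// astro.config.ts — minimal sketch of an Astro site with Svelte islands.
// Assumes the official @astrojs/svelte integration; illustrative only.
import { defineConfig } from "astro/config";
import svelte from "@astrojs/svelte";

export default defineConfig({
  // Svelte components become interactive islands only when given a
  // client:* directive; everything else ships as static HTML, which is
  // how a site stays zero client-side JavaScript by default.
  integrations: [svelte()],
});
```

Search (Pagefind) and deployment (Cloudflare Pages) sit outside this file: Pagefind typically indexes the built output as a post-build step, and Pages runs the build on push.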