At Fiberplane we were early adopters of Claude Code, and like most teams we went through a learning curve. The first sessions were promising but messy: the agent would produce code that worked but didn’t fit the codebase, handle errors inconsistently, or quietly introduce patterns we didn’t want. We spent review time correcting things that felt like they should have been obvious from context.

The quality of what the agent produces is largely a function of how explicit the codebase is. When the code itself carries information (where errors come from, what a function depends on, which patterns are allowed), the agent follows those patterns and can self-correct when it drifts. Putting this information in a CLAUDE.md helps, but in a long enough session agents start to drift from written instructions. The information needs to live in the code itself, enforced by tooling, so the agent gets corrected just in time rather than gradually ignoring the rules. This post shares the changes we made and what we learned along the way.

Making TypeScript More Explicit with Effect

Standard TypeScript has a particular kind of implicitness hidden inside implementations. Agents work with function signatures. They read types to understand what something does and what can go wrong. When that information isn’t in the type, the agent has to infer it from the implementation. Effect makes TypeScript code more explicit when it comes to control flow, dependencies and errors.

Explicit Control Flow

Normal TypeScript uses throw and try/catch, which creates a parallel, non-sequential control flow. Functions can suddenly jump to a completely different place when an error is thrown. Effect forces all errors to be represented as results in the return type, so the agent (and humans) can always read code top-to-bottom and understand every possible path.

Effect’s Data.TaggedError makes this concrete: errors have identity, a name, and typed properties:

export class UserNotFoundError extends Data.TaggedError("UserNotFoundError")<{
  readonly userId: string;
}> {
  get message() {
    return `User ${this.userId} not found`;
  }
}

The agent can handle it specifically with Effect.catchTag, instead of catching everything and hoping for the best:

yield* fetchUser(id).pipe(
  Effect.catchTag("UserNotFoundError", (err) =>
    Effect.logWarning("User not found, returning default", {
      userId: err.userId,
    }).pipe(Effect.map(() => defaultUser)),
  ),
);

When a test fails or a runtime error surfaces, the agent has enough information to diagnose and fix it. It can point at the error type, find where it’s defined, understand what went wrong, and recover or report correctly.

Explicit Signatures

Every Effect<A, E, R> carries three pieces of information in its type: the success value, the typed error, and the services it requires.

// typescript: only the happy path is visible
async function fetchUser(id: string): Promise<User> { ... }

// Effect: success, failure, and dependencies are all visible
const fetchUser = (id: string): Effect.Effect<User, UserNotFoundError | NetworkError, DatabaseService> => ...

The TypeScript version’s signature tells you only that it returns a User when everything goes well. The possible failure modes and the function’s dependencies are hidden in the implementation. The Effect version surfaces all of it: an agent reading this type knows what can go wrong and which dependencies are needed without reading the implementation.

This matters beyond agents too. When you’re reviewing a 2000+ line PR, you’re not reading line by line. You’re trying to understand the moving parts. Explicit signatures make that possible without diving into every implementation detail.

The Reference Trick

Effect’s API and docs are large. One thing worth doing: clone the Effect source into a references/ folder so the agent can read the actual API and source directly, rather than relying on what it was trained on. For us the folder is gitignored and excluded from linting.

Enforcing Patterns with ast-grep

Here’s the thing about CLAUDE.md and documented conventions: in a long enough session, agents start to ignore them. They start optimizing locally rather than following the global patterns. You’ll get a try/catch block, or a console.log, or a new Error() that compiles fine but breaks the philosophy you’ve set up with Effect earlier.

The only robust solution is a just-in-time gate. When the agent tries to commit or finishes a file, something checks the code against the patterns and fails loudly if they’re violated.

We use ast-grep, a structural scanner that matches patterns against the syntax tree of the code (using tree-sitter under the hood). You define patterns in a YAML file, and ast-grep matches them against the codebase. Critically, it runs as part of the CI pipeline and is configured to exit with an error when a banned pattern is found.

And they have to be errors, not warnings. We learned this the hard way: warnings get completely ignored, while an error blocks progress. The agent reads the message, understands what it should have done, and fixes it. ast-grep lets the codebase enforce taste and architectural decisions automatically, so a human doesn’t have to keep correcting the agent mid-session.

We have rules for the most common Effect anti-patterns:

| Rule | What it catches |
| --- | --- |
| no-try-catch | try/catch in Effect code — use Effect.try or Effect.catchTag |
| no-bare-new-error | new Error(...) — use Data.TaggedError |
| no-console-log | console.* — use Effect.log, which integrates with tracing |
| no-silent-catch | Effect.catchAll without logging — always log before recovering |
| no-runpromise-in-effect | Effect.runPromise inside Effect code — Effect.runPromise only belongs at entry points |
| no-throw-in-effect | throw inside Effect.gen — use Effect.fail |
| no-drift-fs | Direct node:fs imports — use Effect's FileSystem service |
| tagged-error-location | Data.TaggedError outside errors.ts — keep errors co-located |

The no-silent-catch rule came from a repeating pattern we saw: the agent would catch an error, not crash the app, but silently swallow it. That’s a failure state that becomes invisible. With structured logging enforced at the catch site, you can point the agent at the log output and ask it to find what went wrong.

The tagged-error-location rule is about navigability. When all errors live in errors.ts, there’s one place to look for everything that can go wrong in a module. The agent always knows where to define them and where to find them.

If you find a pattern you don’t like, have the agent write an ast-grep rule to ban it immediately. That’s the workflow: you see a bad pattern, ast-grep prevents it from coming back.
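A full rule file for the no-bare-new-error case looks roughly like this; the exact pattern and wording are a sketch, not our verbatim rule:

```yaml
# rules/no-bare-new-error.yml
id: no-bare-new-error
language: TypeScript
severity: error
rule:
  pattern: new Error($$$ARGS)
message: "Avoid new Error(...) - use Data.TaggedError instead"
note: |
  Define a tagged error in the module's errors.ts and fail with it:
  // ❌ Bad
  return Effect.fail(new Error("user not found"));
  // ✅ Good
  return Effect.fail(new UserNotFoundError({ userId }));
```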

The key is writing the error message as an instruction, not a description. The message field is what the agent reads when it hits the violation. Pair it with a note block that shows exactly what to do instead:

message: "Avoid try-catch in Effect code - use Effect.try or Effect.catchTag instead"
note: |
  Effect code should avoid try-catch blocks. Use Effect's error handling:
  - Effect.try() for wrapping code that might throw
  - Effect.catchTag() for handling tagged errors

  // ❌ Bad
  try { const result = riskyOperation(); } catch (e) { ... }

  // ✅ Good
  const result = yield* Effect.try({
    try: () => riskyOperation(),
    catch: (e) => new OperationError({ cause: e })
  });

When the rule fires, the terminal output looks like this:

error[no-try-catch-in-effect]: Avoid try-catch in Effect code - use Effect.try or Effect.catchTag instead
  --> apps/api/src/users/service.ts:42:5
   |
42 |     try {
   |     ^^^
   = note: Effect code should avoid try-catch blocks. Use Effect's error handling:
           - Effect.try() for wrapping code that might throw
           - Effect.catchTag() for handling tagged errors
           ...

The agent reads the violation, reads the note, and fixes it without you having to intervene. The error message is doing the work a code reviewer would otherwise do.

Keeping Docs Current with Drift

Here’s a problem that gets worse with agents: documentation goes stale. You write the initial pass, the agent makes changes along the way, and the docs gradually drift from reality. Unlike types, where a change in one place will make the other yell, docs are plain text with no binding to the code they describe.

Drift solves this. It lets you anchor a markdown doc to a specific file or symbol in the codebase. When the anchored code changes, drift lint fails and tells you the doc is stale.

# frontmatter in a doc file
anchors:
  - file: src/users/service.ts
    symbol: UserService
This makes the staleness visible at lint time, the same just-in-time gate idea as ast-grep. The agent runs bun run check, sees a drift failure, goes back to the doc it should have updated, and updates it.

One thing worth knowing: drift has a drift link command that re-stamps the anchor. We added an explicit rule in our agent instructions: never re-link without reviewing. Otherwise the agent will just silently update the anchor without actually reviewing the doc, which defeats the purpose.

The CLAUDE.md File

Everything — the Effect conventions, the boundary pattern, the ast-grep rules, the check commands, the references folder — lives in a CLAUDE.md file at the root of the repo (or AGENTS.md if you’re not using Claude Code). Claude Code reads this on startup and loads it as instructions for the session.

This is the mechanism that ties it together. You don’t explain the architecture in every conversation.

The CLAUDE.md is a first-class part of the codebase. It’s committed, reviewed, and kept up to date as conventions evolve. When a new ast-grep rule gets added, it goes in the rule table in CLAUDE.md at the same time.
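To give a feel for what those entries look like, here is a hypothetical excerpt, not our verbatim file:

```markdown
## Error handling
- All errors are Data.TaggedError subclasses, defined in the module's errors.ts.
- Never use try/catch, console.*, or new Error() in Effect code; ast-grep enforces this.

## Checks
- Run bun run check before committing; it fails on banned patterns and stale docs.

## References
- The Effect source is cloned under references/; read it instead of guessing APIs.
```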

What This Changed in Practice

After this setup, reviewing AI-generated code feels like reading output from someone who already knows the conventions. The mechanical stuff is handled by the time you see it. Code review is about logic and edge cases.

The agent isn’t smarter. The codebase is more explicit, the rules are enforced at commit time, and the documentation stays current. The codebase steers the agent instead of you doing it manually.

AI agents do not get better because you prompt them better. They get better when the codebase becomes explicit enough to constrain them, correct them, and keep them grounded in reality.

Next in part 2, The Self-Driven Codebase: Full Agent Automation with Otter, we take the same foundations further with autonomous agent loops, fp issue tracking, and a monorepo template built for hands-off development.