So someone in Ogrodje, this really cool Slovenian tech Discord, asked how Claude Code handles tests in a 50k-line codebase.
Honestly, I was surprised they were surprised it works well. But then I realized - my setup isn't the default. I've been training Claude on my repos the same way I'd onboard a junior dev.

The thing is, most people just open Claude Code and start prompting. That works for small stuff. But in order to get it to actually understand a large codebase, you have to give it context. Documentation it can reference. Conventions it can follow. A source of truth it can update. 

---

The Short Version

---

Here's my workflow in a nutshell:

1. `cd` into the repo, run `claude`
2. Tell it what I want to do (write tests, refactor X, etc.)
3. Point it to my `CLAUDE.md` and `docs/` folder
4. "If anything's unclear, just ask"
5. "Don't forget to update the docs when you're done"

That last part is key. The docs aren't just for Claude - they're for future-me who forgot what past-me was thinking. 
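
In practice, steps 2-5 usually collapse into one opening prompt. Something like this - the task and the component path here are just examples, not a magic incantation:

```text
Write unit tests for @src/components/Cart. Check CLAUDE.md and docs/testing
for conventions. If anything's unclear, ask before writing code. When you're
done, update the docs with anything new you learned.
```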

 

The Setup (First Time)


When I start working with a new repo, I spend 10-15 minutes setting up the documentation structure. It pays off immediately.

Step 1: Create a CLAUDE.md

1. `cd` into your repo folder in the CLI -> start Claude Code (CC)
2. Start your prompt: "Write a simple README.md. Add the following: [bullet points about what this repo does]"
3. Continue your prompt (if necessary): "Follow this structure/add these conventions/[enter your additional instructions]"
4. After you've defined your README.md, continue with generating your CLAUDE.md

This gives Claude (and me) a quick reference. I keep it short - what the project is, key directories, any weird conventions. Later on I add new information, updating both files as needed.

README.md gives me context, CLAUDE.md gives it to Claude.
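
For a sense of scale, here's roughly the shape mine takes. The stack and directory names below are made up for illustration - yours come out of your own repo and your own answers:

```markdown
# CLAUDE.md

## What this is
Web shop frontend. React + TypeScript. (Illustrative - describe your own stack.)

## Key directories
- src/components - UI components, one folder per component
- src/utils - pure helpers, no side effects
- docs/ - source of truth; update after every change

## Conventions
- Tests live next to the code they test
- Ask before adding new dependencies
```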

 

Step 2: Set Up a Docs Folder

5. Prompt for your docs folder: "Set up a docs folder to house folders/subfolders/files as the source of truth. Ask additional questions to refine understanding; define naming conventions, purpose, and so on."
6. Continue to answer questions -> expand wherever necessary.
7. If you'd like, continue using additional (visual) formatting prompts for outputs
- Examples: "Be short, concise / Display answers as nested bullet points / Display in a simple table / Display an ASCII diagram."

Claude asks clarifying questions. I answer them. We build out the folder structure together. It's like pair programming the documentation. The key is making Claude ask questions - that's where the context gets captured. Being explicit matters just as much.
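
Here's the kind of structure one of my repos ended up with. The folder names are illustrative - the whole point is that Claude's questions shape it to your project:

```text
docs/
├── architecture/      # how the pieces fit together
│   └── overview.md
├── conventions/       # naming, structure, style decisions
│   └── naming.md
└── testing/           # test conventions, process & workflows
    └── conventions.md
```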

Visual formatting isn't so much for Claude as it is for us. When you're coding and prompting, a barrage of information comes through at once, and you need to filter it down and get the gist quickly.

 

Writing Tests (My Most Common Use Case)

This is where it gets practical. I use Claude Code mostly for writing tests - it's repetitive, it requires understanding the codebase, and it's exactly the kind of thing an AI assistant should handle.

Step 1: Define Testing Conventions 

8. Prompt CC: "Go to docs/testing - we're going to define testing conventions, process & workflows. Ask additional questions to refine understanding."
9. Continue to answer questions -> expand wherever necessary.
10. If you'd like, continue using additional (visual) formatting prompts for outputs
- Examples: "Display as simple table. Include following columns: purpose, success/fail criteria."

We iterate on this document until it's tight. What frameworks we use, how tests are structured, what counts as passing.
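
A condensed example of what a docs/testing/conventions.md can end up containing - Vitest here is just a stand-in for whatever framework you actually use:

```markdown
## Unit test conventions

| Area          | Convention                              |
|---------------|-----------------------------------------|
| Framework     | Vitest (stand-in - use your own)        |
| File location | Next to source, *.test.ts               |
| Naming        | describe = unit, it = expected behavior |
| Passing       | All assertions green, no skipped tests  |
```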

Step 2: Start With a High-Level Review 

11. Start with reviewing: "Let's start with unit tests - review the current folder @src/components and give me a high-level outline. Display as a simple table." 

I ask for tables because they're scannable. I can quickly see what exists and what's missing.
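
The outline comes back as something like this (contents invented for illustration):

| Component | Existing tests           | Gaps            |
|-----------|--------------------------|-----------------|
| Button    | rendering, click handler | keyboard events |
| Cart      | none                     | everything      |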

Step 3: Let It Work (But Watch)

12. Prompt for suggestions before executing: "Based on that table, suggest how to approach testing. See docs folder for conventions."

Then I let it run. I keep a separate terminal open to run tests as Claude writes them. 
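
To make that concrete: assuming a Vitest setup and a hypothetical `formatPrice` helper (neither is prescribed by this workflow - it's whatever your conventions doc says), the tests that come out of this step look like:

```typescript
// src/utils/formatPrice.test.ts
// formatPrice is a hypothetical helper; Vitest is an assumed framework.
import { describe, it, expect } from 'vitest';
import { formatPrice } from './formatPrice';

describe('formatPrice', () => {
  it('formats whole euros without decimals', () => {
    expect(formatPrice(5)).toBe('5 €');
  });

  it('rounds fractional prices to two decimals', () => {
    expect(formatPrice(4.999)).toBe('5.00 €');
  });
});
```

In that second terminal I keep something like `npx vitest --watch` running (or your framework's equivalent), so failures show up while Claude is still working.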

The Review Prompt (Use This)

When Claude produces something - docs, code, tests, whatever - I run this review before accepting:

Take [document/file]: 
- review for assumptions
- review for missing/badly defined functionalities
- review for missing/badly defined areas of influence
- review for missing/badly defined characteristics
- review for missing/badly defined elements & sub-elements

Display as a simple table.

This catches the gaps. Claude sometimes makes assumptions that aren't in the codebase. This prompt surfaces them.
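
A made-up example of the kind of rows it produces:

| Finding                          | Type         | Where             |
|----------------------------------|--------------|-------------------|
| Assumes all prices are EUR       | assumption   | formatPrice tests |
| Empty-cart state never exercised | missing area | Cart              |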

Why This Works

It's like onboarding a new team member. You wouldn't just throw someone at a 50k-line codebase and say "figure it out." You'd give them docs, conventions, context. You'd tell them who to ask when they're stuck. Claude needs the same things a human would. The docs folder is the onboarding. The CLAUDE.md is the cheat sheet. The "ask questions" instruction is permission to not guess.

---

What I'm Still Figuring Out

This workflow is solid for repos I own. But what about jumping into an unfamiliar codebase? I'm experimenting with having Claude generate the initial docs by exploring the repo first, then I refine them. If you've got a workflow for unfamiliar codebases, I'd love to hear it.