Eight rules that govern every page, every deploy, every line of production code. They exist because "it works in the terminal" is not the same as "a visitor sees real data." BSI holds itself to this standard publicly — so you can hold us to it too.
Sessions Audited
Claude Code sessions reviewed in the March 2026 accountability audit.
Repair Commits
Commits that fixed code a previous session had broken.
Frustration Markers
Explicit correction events logged across the audit period.
A 200 status code is not "done."
A page is done only when the deployed browser-rendered page shows real, non-empty user-visible data. If a visitor opens the URL and sees blank tables or empty grids, the work is not complete — regardless of what the terminal says.
Mock, fallback, placeholder, snapshot, hardcoded — banned.
Test fixtures belong in test files. Production UI shows real data from real endpoints, or it shows a truthful empty state. There is no middle ground. A visually complete component with fake data is worth zero.
No hardcoded timestamps or "live" claims.
"Live," "updated," timestamps, and refresh intervals must come from real response metadata — or be removed entirely. If the system can't prove when data was last fetched, it doesn't get to claim freshness.
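One way to make this rule mechanical is to derive the label from metadata and return nothing when that metadata is missing. The function name and shape below are illustrative assumptions, not BSI's actual code:

```typescript
// A "last updated" label is computed only from real fetch metadata.
// If the metadata is absent or unparseable, no freshness claim renders.
function freshnessLabel(lastFetched: string | null): string | null {
  if (lastFetched === null) return null; // no metadata, no claim
  const ms = Date.parse(lastFetched);
  if (Number.isNaN(ms)) return null; // unprovable timestamp, no claim
  return `Updated ${new Date(ms).toISOString()}`;
}
```

The design choice is that the honest default is silence: the UI layer renders the label only when the function returns a string, so a missing timestamp can never decay into a hardcoded "updated today."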
Terminal output is not proof.
Every data-bearing page must be verified at the rendered-page level: the deployed URL loads, real entity names appear, non-empty rows and cards render, loading states work, error states work, and empty states are truthful — not decorative.
Build inside the real app structure.
Every page uses shared routing, components, and data utilities. No standalone HTML files masquerading as product routes. If it's on a live route, it's built with the same architecture as everything else.
Understand first. Edit second. Verify third.
Before editing any file: read the existing route, endpoint, and current production behavior. If the code works and the data flows, don't touch it. 63 out of 363 commits were repair jobs fixing code that was already working.
Automated gates block mock data patterns.
The pre-commit hook blocks: Math.random() in data contexts, mockData/mockGames/mockScores/mockStandings/mockTeams arrays, sampleData, faker library usage, hardcoded player arrays, hardcoded standings tables, and hardcoded "updated today" strings in production routes.
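A gate like this can be sketched as a line scan over staged files. The banned names come straight from the rule above; the regexes and the function itself are assumptions about how such a hook might be implemented:

```typescript
// Banned mock-data patterns, per the pre-commit rule.
const BANNED: RegExp[] = [
  /Math\.random\(\)/, // random values posing as data
  /\b(mockData|mockGames|mockScores|mockStandings|mockTeams)\b/,
  /\bsampleData\b/,
  /\bfaker\b/, // faker library usage
  /updated today/i, // hardcoded freshness claim
];

// Returns one message per offending line; an empty array means the file passes.
function scanForMockPatterns(source: string): string[] {
  return source.split("\n").flatMap((line, i) =>
    BANNED.filter((re) => re.test(line)).map(
      (re) => `line ${i + 1}: banned pattern ${re.source}`,
    ),
  );
}
```

In a real hook this would run against each staged production file, with a non-empty result rejecting the commit.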
No PR ships without evidence.
Pull requests touching /scores, /college-baseball, /mlb, /intel, or /about must include: a deployed URL, a screenshot or DOM assertion of rendered data, the endpoint used, and a timestamp of verification. No exceptions.
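An evidence gate like this could run as a CI step that parses the PR description and fails unless every proof field is present. The field labels below are hypothetical placeholders, not BSI's actual template:

```typescript
// Required proof fields a data-route PR description must contain
// (labels are illustrative assumptions).
const REQUIRED_EVIDENCE = [
  "Deployed URL:",
  "Rendered-data proof:",
  "Endpoint:",
  "Verified at:",
];

// Returns the fields the PR body is missing; empty means the PR may merge.
function missingEvidence(prBody: string): string[] {
  return REQUIRED_EVIDENCE.filter((field) => !prBody.includes(field));
}
```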
Every commit is scanned for banned mock-data patterns before it can land. If mockGames, sampleData, faker, Math.random(), or hardcoded player arrays appear in production code, the commit is rejected.
After every deploy, production URLs are checked for real rendered data. Empty tables, blank grids, or zero-content pages trigger an immediate investigation — not a follow-up ticket.
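The core of such a smoke check is deciding whether fetched HTML actually contains rendered data. The heuristic below — more than a lone header row, or at least one data card — is a simplified assumption; a real check would also assert on known entity names:

```typescript
// Returns true only if the page body appears to contain real rendered data,
// not just an empty table shell. Thresholds are illustrative.
function hasRenderedData(html: string): boolean {
  const rows = (html.match(/<tr\b/g) ?? []).length;
  const cards = (html.match(/data-card/g) ?? []).length;
  // One <tr> is just a header; real data means additional rows or cards.
  return rows > 1 || cards > 0;
}
```

In practice this would run against each production URL's response after deploy, with a `false` result paging a human rather than opening a ticket.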
Pull requests touching data-bearing routes must include a deployed URL, rendered-data proof, the endpoint used, and a verification timestamp. No proof, no merge.
Before modifying any file, the existing implementation and live endpoint are checked first. If the current code works and returns data, it stays untouched.
Most platforms keep their quality standards behind closed doors. BSI publishes them. If we claim every game gets covered with the same depth, you should be able to see the rules that enforce that promise.
These eight rules were written after a 7-hour audit of over 3,000 development sessions. They exist because internal good intentions weren't enough. The only standard that matters is the one you can point to when something breaks.
Born to Blaze the Path Beaten Less