Low-Effort Vibe Coding an Entire Full-Stack Website With Claude. Here's How

The Story
Over a series of low-effort Claude Code sessions, I went from a blank directory to a fully deployed, production-grade web application that scrapes Wikipedia every 5 minutes, stores 1,250+ incidents in MongoDB, renders them in a brutalist-designed infinite-scroll timeline, and serves them from Vercel with SEO-optimized individual pages for every single record. The entire codebase uses Next.js 15, Tailwind CSS v4, Mongoose models, Cheerio scrapers, Docker containers, PM2 configs, and API routes, all generated through conversation.
This is what low-effort vibe coding actually looks like.
I didn't write this website. Not really. I directed it.
The Setup: One File to Rule Them All
Everything started with a single file: `CLAUDE.md`.
If you're not familiar, `CLAUDE.md` is a project-level instruction file that Claude Code reads at the start of every session. It's your persistent context, the thing that tells the AI what you're building, how you want it built, and what constraints to follow. Think of it as the architectural brief that never expires.
Here's what mine looked like:
I want to remake my schoolshootingstats.com website, with a timeline view that is discoverable, scrollable, and viewable on every type of device
Design System: Should be of the 2026/2027 design trend of brutalist design, yet professional and serious.
Architecture: Should use Tailwind with NextJS and MongoDB
Backend: It will need to scrape the Wiki for the first time and store it in MongoDB, and then every 5 minutes, check via a PM2 script on my VPS.
Visual: Should be a timeline view with subnavigation to navigate around and view each card with the information and tagging for common datapoints. Each card should have a link to the incident, too.
Marketing Strategies: Make the website SEO, GEO, and AEO-friendly.

That's it. No wireframes. No component specs. No database schema. Just the intent, the vibe, and the constraints. Claude took it from there.
What Got Built
Let me walk through what actually materialized from those few paragraphs.
The Frontend
A Next.js 15 application using the App Router with server components. The main timeline page renders the first 50 incidents server-side for instant load and SEO crawlability, then an `IntersectionObserver`-powered infinite scroll loads batches of 50 as you browse. Every incident from 1840 to the present gets its own detail page with JSON-LD structured data.
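The batching half of that infinite scroll can be sketched as a couple of pure helpers. This is an illustrative sketch, not the actual component code; names like `nextBatch` and the `Incident` shape are my assumptions.

```typescript
// Hypothetical sketch: the server renders the first batch, and each
// IntersectionObserver trigger requests the next BATCH_SIZE incidents.
interface Incident {
  year: number;
  school: string;
}

const BATCH_SIZE = 50;

// Slice the next batch out of the full, chronologically sorted list.
function nextBatch(all: Incident[], offset: number): Incident[] {
  return all.slice(offset, offset + BATCH_SIZE);
}

// Group a batch by year so the timeline can render year headers.
function groupByYear(batch: Incident[]): Map<number, Incident[]> {
  const groups = new Map<number, Incident[]>();
  for (const incident of batch) {
    const bucket = groups.get(incident.year) ?? [];
    bucket.push(incident);
    groups.set(incident.year, bucket);
  }
  return groups;
}
```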
The design system is built on Tailwind CSS v4's new `@theme` directive with custom design tokens like `brutal-black` (#0A0A0A), `brutal-white` (#F5F0E8), `brutal-red` (#CC0000) for death counts. Three fonts: Space Grotesk for display, JetBrains Mono for data, and Inter for body text. Thick borders, offset shadows, raw typography. Brutalism that takes the subject matter seriously.
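In Tailwind v4, tokens like these live directly in CSS rather than in a JS config. A minimal sketch of how the tokens above might be declared (the three hex colors are from this article; the font stacks and the shadow token are assumptions):

```css
@import "tailwindcss";

@theme {
  /* color tokens: generate utilities like bg-brutal-black, text-brutal-red */
  --color-brutal-black: #0A0A0A;
  --color-brutal-white: #F5F0E8;
  --color-brutal-red: #CC0000;

  /* font tokens: generate font-display, font-mono, font-body utilities */
  --font-display: "Space Grotesk", sans-serif;
  --font-mono: "JetBrains Mono", monospace;
  --font-body: "Inter", sans-serif;

  /* an offset, un-blurred shadow for the brutalist look (illustrative) */
  --shadow-brutal: 4px 4px 0 0 var(--color-brutal-black);
}
```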
A sticky decade navigation bar spans the 1840s through the 2020s. Filters for state, school level, incident type, and death count range. A statistics banner showing totals. Everything URL-driven through search params (no client state means every filtered view is shareable and bookmarkable).
The Scraper
This was the most technically challenging piece. Three Wikipedia pages needed to be scraped, parsed, deduplicated, and merged:
- List of school shootings before 2000
- List of school shootings 2000-present
- List by death toll
Wikipedia tables are messy. Column orders differ between pages. Rows have rowspans. Summary rows sneak in. Sub-headers appear mid-table. The parser had to detect column layout dynamically from headers, filter out summary rows (patterns like "2025 | 68 | 36 | 109"), skip rowspan sub-rows with shifted column indices, and handle date formats ranging from "April 20, 1999" to just "1840".
The death-toll page is treated as supplementary; it can add source tags to existing incidents, but never creates new entries that don't appear in the chronological pages.
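Two of those parser steps can be sketched as pure functions over rows of cell text (already extracted with something like Cheerio). The heuristics and names below are my assumptions, not the production code:

```typescript
// Detect column positions from the header row instead of hardcoding them,
// since column order differs between the Wikipedia pages.
function detectColumns(headers: string[]): Record<string, number> {
  const layout: Record<string, number> = {};
  headers.forEach((h, i) => {
    const key = h.trim().toLowerCase();
    if (key.includes("date")) layout.date = i;
    else if (key.includes("location")) layout.location = i;
    else if (key.includes("death")) layout.deaths = i;
    else if (key.includes("injur")) layout.injuries = i;
    else if (key.includes("description")) layout.description = i;
  });
  return layout;
}

// Summary rows look like "2025 | 68 | 36 | 109": every cell is numeric.
function isSummaryRow(cells: string[]): boolean {
  return cells.length > 1 && cells.every((c) => /^\d[\d,]*$/.test(c.trim()));
}
```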
The Database
MongoDB with Mongoose. Compound indexes for the most common filter patterns (`year + deaths`, `stateCode + year`, `decade + schoolLevel`). A weighted text search index across school, city, state, and description fields. Atomic upserts with `$set` for data and `$addToSet` for source arrays so concurrent scrapes don't clobber each other.
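The upsert pattern can be sketched as a helper that builds the update document passed to a Mongoose call like `Model.updateOne({ wikiId }, update, { upsert: true })`. Field names here are illustrative assumptions:

```typescript
interface ScrapedIncident {
  wikiId: string;
  year: number;
  deaths: number;
  source: string; // which Wikipedia page this row came from
}

function buildUpsert(incident: ScrapedIncident) {
  const { source, ...fields } = incident;
  return {
    // $set overwrites scalar fields with the latest scraped values...
    $set: fields,
    // ...while $addToSet appends the source tag only if it isn't already
    // present, so concurrent scrapes don't duplicate or clobber sources.
    $addToSet: { sources: source },
  };
}
```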
The Infrastructure
The website deploys to Vercel via Git push. The scraper runs separately on a VPS, either as a Docker container (`node:22-alpine`, installing only `mongoose`, `cheerio`, `dotenv`, and `tsx`) or via PM2. It connects directly to MongoDB and runs every 5 minutes.
Docker Compose config, Dockerfile, ecosystem.config.js for PM2, `.env.local` templates, dynamic sitemap generation, robots.txt, all generated through conversation.
Here's how the development actually flowed:
Session 1: Scaffolding
The first session read `CLAUDE.md` and built everything from scratch. The Next.js project, the Tailwind config, the Mongoose models, the scraper, the API routes, the components, and the design system. One conversation, one continuous flow.
I didn't dictate component names or file structures. I described what I wanted to see and what it should do. Claude made the architectural decisions, such as the App Router over Pages, server components for the timeline, a shared query builder for the API and the page, and, lastly, a lazy connection singleton for Mongoose.
Session 2: Data Accuracy
This is where the real work happened. After the initial seed, I audited the data against Wikipedia's own summary tables. The counts didn't match. Not because the scraper was wrong; it turned out that Wikipedia’s summary tables are maintained separately from the actual row data and contradict each other.
But we still had real bugs:
- Rowspan sub-rows in Wikipedia tables were being parsed with shifted column indices, reading wrong data
- "St. Louis" and "St Louis" generated different wikiIds, creating duplicates
- "Oxford Township" and "Oxford" had the same problem
- The death-toll page was creating standalone entries that inflated counts
Each of these got identified and fixed through conversation. I'd paste discrepancies, Claude would trace the root cause through the parser, and we'd iterate. The final audit showed pre-2000 data nearly perfect (13 of 16 decades exact match) and post-2000 significantly improved.
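The "St. Louis" vs. "St Louis" class of bug comes down to ID normalization: punctuation has to be stripped before the slug is built so both spellings map to the same wikiId. A purely illustrative sketch (the real ID scheme may differ, and alias cases like "Oxford Township" vs. "Oxford" need an explicit mapping on top of this):

```typescript
// Build a stable wikiId from school, city, and year, normalizing
// punctuation so "St. Louis" and "St Louis" collide as intended.
function makeWikiId(school: string, city: string, year: number): string {
  const slug = (s: string) =>
    s
      .toLowerCase()
      .replace(/[.'’]/g, "")       // drop punctuation: "St. Louis" -> "st louis"
      .replace(/[^a-z0-9]+/g, "-") // everything else collapses to a dash
      .replace(/^-|-$/g, "");      // trim stray leading/trailing dashes
  return `${slug(school)}-${slug(city)}-${year}`;
}
```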
Session 3: Features and Polish
Lazy loading. The initial page loaded all 1,250 incidents, which is, of course, too heavy. One prompt:

Add lazy loading after x number of items that load in at the time.

Claude created `TimelineInfiniteScroll.tsx` with `IntersectionObserver`, a 600px root margin for preloading, and automatic grouping by year.
That broke the decade navigation; the buttons tried to scroll to anchors that didn't exist yet because they hadn't been lazily loaded. I reported the bug. Claude rewrote `DecadeNav` from a scroll-to-anchor model to URL-based filtering through `router.push` and search params. Clean, bookmarkable, SSR-compatible.
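The URL-based model boils down to building a new query string and pushing it. A framework-free sketch of that helper (the function name and wiring are mine; in the component it would feed Next.js's `router.push` from `next/navigation`):

```typescript
// Build the href for a decade button: keep existing filters in the query
// string, replace only the decade, and return a pushable URL.
function decadeHref(
  pathname: string,
  currentQuery: string, // e.g. "state=CO&decade=1990s"
  decade: string
): string {
  const params = new URLSearchParams(currentQuery);
  params.set("decade", decade); // replaces any existing decade filter
  return `${pathname}?${params.toString()}`;
}

// In the component, roughly:
//   const router = useRouter(); // from "next/navigation"
//   router.push(decadeHref(pathname, searchParams.toString(), "1990s"));
```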
The Numbers
| Metric | Value |
|---|---|
| Total source files | 45+ |
| Lines of code (approx.) | 3,000+ |
| Database records | ~1,250 incidents |
| Data span | 1840 to present |
| Wikipedia sources scraped | 3 pages |
| API endpoints | 3 |
| Database indexes | 8+ (single + compound + text) |
| Design tokens | 13 colors, 3 fonts, 4 shadow variants |
| Scrape interval | Every 5 minutes |
| Docker image base | node:22-alpine |
| Docker memory limit | 512MB |
| Time from zero to deployed | A few sessions |
Lessons for Vibe Coders
- Write a CLAUDE.md. Seriously. It's the difference between starting every conversation with "so I'm building this thing..." and having Claude already know the entire project context. Put your stack choices, design philosophy, data sources, and deployment strategy in there.
- Validate, don't trust. The scraper worked on the first try, kind of. The parsed data had subtle bugs that only showed up when I compared against known totals. Vibe coding doesn't mean skipping QA. It means you can do QA through conversation, too.
- Show, don't tell. Paste logs. Paste error messages. Paste the actual data that looks wrong. Claude is remarkably good at pattern recognition when given raw output. "It's broken" is a bad prompt. A 50-line log dump is a great prompt.
- Scope your sessions. Don't try to build everything in one conversation. Build the foundation, validate it, then layer on features. Each session can read `CLAUDE.md` and pick up where the last one left off.
- Let the AI make architectural decisions. I didn't specify "use IntersectionObserver with 600px rootMargin." I said, "Add lazy loading." The implementation details were Claude's call, and they were good calls. Fight the urge to micromanage.
- Infrastructure is fair game. Dockerfiles, Docker Compose, PM2 configs, CI/CD pipelines: these are all things Claude can generate from a one-line prompt. Don't hand-write the boilerplate.
The Takeaway
This isn't a toy project. It's a real website that scrapes real data, stores it in a real database, and serves it to real users with real SEO. The design system has a real typographic hierarchy. The scraper handles real-world edge cases in Wikipedia's messy HTML. The Docker container runs on a real VPS. I wrote maybe 20 actual prompts to build the entire thing.
Vibe coding, whether low-effort or not, isn't about generating code and hoping it works. It's about having a conversation with an AI that understands software architecture, and steering it toward the product you see in your head. The `CLAUDE.md` file is your steering wheel. The prompts are your turn signals. The AI is the engine. The code is just exhaust.
Built with [Claude Code](https://claude.com/claude-code). Deployed on [Vercel](https://vercel.com). Data from [Wikipedia](https://en.wikipedia.org/wiki/Lists_of_school_shootings_in_the_United_States).
Visit the live site: [schoolshootingstats.com](https://schoolshootingstats.com)