Every founder has a list of ideas sitting in a notes app that never got built. Not because the ideas were bad, but because the implementation cost was too high relative to the uncertainty. You don't know if a niche site will get traction, so you don't invest months building it. So it sits there.
I've been running that same calculus for years. You evaluate an idea, mentally price out the development effort, and if it doesn't clear the bar, you move on.
Most don't clear the bar.
Today, that calculus has changed dramatically. But it's worth being precise about which part of it changed. The cost of infrastructure collapsed decades ago. Hosting went to dollars. Domains got cheap. Bandwidth got cheap. That part was solved. What didn't get solved was the cost of development. Taking an idea from concept to a live, functional product still required either significant technical skill or significant money to hire it. That second cost is what's collapsing now. Going from zero to a live, fully functional site is no longer a development-budget problem. And that shift is making ideas viable that weren't viable before.
I was always technical enough to be dangerous. I could read code, break things, go live, and give our lead developer legitimate heartburn on a regular basis. I wasn't sitting on the sidelines. But there's a fundamental difference between fiddling and building, between tweaking something that already exists and taking an idea from zero to a live, functioning product. That's a full-stack skill set I never had. Fiddling is easy. Cradle to grave is a different discipline entirely. And even the work I could do carried an enormous mental and time tax. Every technical decision required context-switching out of strategy mode and into implementation mode, and I'm not fast there. What should have taken an afternoon took a weekend. What should have been a one-line fix became a four-hour debugging session with endless tabs open trying to make sense of examples. I could do it, but the cost of doing it meant I was constantly choosing between moving fast on the business and moving fast on the product. I was always choosing.
What I actually am, what I've always been, is a product architect and problem strategist. I see the problem clearly, I can map the solution, I know what it needs to do and why. That part was never the bottleneck. The bottleneck was the gap between that clarity and a functioning product. And there was always a second gap: communicating the desired end state to someone else, then hoping their translation matched your intent. Those gaps are gone.
I didn't become a better developer. The tax just disappeared.
In this post I want to share my current approach: how I use a reusable static file template to migrate existing sites and spin up new ones, and how AI fits into that workflow at every step. Everything below is based on a real build from this past weekend.
TL;DR: I maintain a vanilla PHP site template (flat PHP, Bootstrap 5, no build tools, no database) that I copy for every new site. AI handles 90% of the implementation work. The result: a 41-page interactive study site (ESBGuide.com) went from idea to live in 5 hours. No agency. No freelancer. No CMS. Just a template, an AI, and direction.
The Template
After migrating CleanBrowsing, NOC.org, and PerezBox off WordPress, I ended up with a repeatable pattern that I formalized into a reusable template. Same stack, same conventions, same file structure — every time. I keep it at projects/site-template/, and it's where my AI starts from, whether I'm migrating an existing site or creating a new one.
The stack is deliberately boring:
- Flat PHP with includes: config.php, header.php, footer.php, schema.php
- Bootstrap 5 via CDN: no npm, no Sass, no build step
- Bootstrap Icons via CDN
- Google Fonts for typography
- Vanilla JavaScript: no jQuery, no frameworks
- Apache .htaccess: clean URLs, security headers, caching rules, redirects
- router.php for local dev with PHP's built-in server
No database. No Composer. No npm. No build tools. The entire thing deploys to any PHP-capable server via FTP or rsync. That's the point.
Every page follows the same pattern:
```php
<?php
$include_base = ''; // '../' for subdirectory pages
require_once $include_base . 'includes/config.php';

$page_title = 'Page Title';
$page_description = 'SEO description for this page.';
$canonical_url = SITE_URL . '/page-slug';

include $include_base . 'includes/header.php';
?>

<!-- Page content here -->

<?php include $include_base . 'includes/footer.php'; ?>
```
The header handles the doctype, meta tags, canonical URL, Open Graph tags, and navigation. The footer handles scripts and closing tags. Every page is standalone, a flat file that can be deployed, cached, or served by a CDN without any server-side magic.
The ancillary files are part of the template too. Every site ships with:
- sitemap.xml: hand-managed, always current
- ai-index.json: structured content index for AI discovery
- includes/schema.php: auto-generates JSON-LD structured data by URL pattern
- .htaccess: clean URLs, redirects, security headers, browser caching
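The post doesn't show what ai-index.json looks like internally, but a plausible minimal shape — purely illustrative, every field name here is an assumption — might be:

```json
{
  "site": "https://example.com",
  "pages": [
    {
      "url": "/page-slug",
      "title": "Page Title",
      "description": "SEO description for this page.",
      "category": "guide"
    }
  ]
}
```

The point is less the exact schema than that it's a flat, human-readable file the AI can update alongside the sitemap on every change.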
The Workflow
The way I work with AI on these builds isn't what most people imagine. It's not "write me a website." It's a collaborative back-and-forth where I control direction and the AI handles implementation. As I used to tell my teams, "I want the baby, not the labor."
Below is an example based on a recent application I shipped for some of the work I'm doing in the Army Reserves: a guide to help soldiers better understand the requirements for the Expert Soldier Badge (ESB).
Here's what that looks like in practice:
1. Set the context. I give the AI enough context to make good decisions. Not a product spec. For ESBGuide, the easiest way to do that was to hand it the TRADOC pamphlet that explained the badge, the tasks, and the evaluation criteria across 111 pages. I let it form its own opinion on content structure. What I focused on was what I wanted soldiers to be able to do with the site, based on the raw content that was already available.
2. Describe the outcome, not the implementation. I don't write tickets. I describe the scenario. For ESBGuide the layout brief was: "imagine me sitting next to a building for hours in a hurry-up-and-wait scenario and all I have is my phone. I need to be able to clearly understand each section, see the key criteria, and have resources to help me understand how to achieve them." That's the whole brief. The AI figures out what that means technically. I review whether it does what I described.
3. Let the AI handle cross-file consistency. WordPress handled this through plugins and a database — until one broke on an update, conflicted with something else, or silently stopped working. With a static site those failure modes disappear, but the coordination work doesn't. Every new page still needs to land in the sitemap, the search index, the AI index. The difference is I wrote those rules once, in plain language, in a file the AI reads every session. It executes them every time. Nothing to break, nothing to configure, nothing to remember. More on this below.
4. Review, don't rubber-stamp. I look at what gets generated. If something is wrong (wrong heading level, wrong schema type, content that drifted from what I intended) I correct it and the AI adjusts. The quality of the output is a function of how clearly I can describe what I want and how carefully I review what I get.
ESBGuide: The Case Study
ESBGuide.com is an interactive study platform for the Expert Soldier Badge, one of the U.S. Army's most demanding awards for non-infantry soldiers. Soldiers have to demonstrate proficiency across 31 tasks, plus a physical fitness component. Finding organized, usable study material was harder than it needed to be.
The source material existed: TRADOC Pamphlet 672-9, TC 3-21.76 Ranger Handbook, an E3B Candidate Handbook. But it was spread across PDFs, and none of it was formatted for the way soldiers actually study: checking off tasks, drilling performance measures, reviewing key steps under time pressure.
The idea was clear. The content was available. What needed to happen was structure and implementation: 31 task pages, a lane index structure, a glossary, a quick reference, a scoring guide, schema markup, search indexing, and everything else that makes a site functional and findable. All of it.
Here's what ended up live:
- 41 total pages
- 31 task detail pages
- 108 glossary terms
- 22+ embedded DVIDS videos
- 0 plugins used
- 0 databases
How the Build Actually Worked
We started with a template that had been built and refined through my previous site migrations. Rather than starting from scratch, the AI used it as the scaffolding for the new property. The structure, the conventions, the ancillary files — all already there, already understood.
Then I described the design direction to the AI: military professional, high contrast, readable outdoors on a mobile screen, Army color palette. For fonts I said: follow what you know about military structure and typography. The AI chose Bebas Neue for headings, Inter for body, JetBrains Mono for task codes and timer displays, and explained the rationale for each. Updated the CSS custom properties and font imports in one pass. I didn't pick a single font.
For the task pages, the structure was consistent across all 31: a two-column layout with a sticky sidebar showing performance measures on the right, the main task content on the left. A collapsible task header. Timed drill mode. A GO/NO-GO progress tracker using localStorage so soldiers can track their status across sessions without a login.
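The GO/NO-GO tracker is a good example of how little code this stack needs. A minimal sketch in vanilla JavaScript, assuming state lives under a single localStorage key — the key name and task IDs are my own placeholders, not the site's actual code:

```javascript
// Fall back to an in-memory store when localStorage is unavailable
// (e.g. running outside a browser), same getItem/setItem surface.
const storage = (typeof localStorage !== 'undefined')
  ? localStorage
  : (() => {
      const m = new Map();
      return {
        getItem: k => (m.has(k) ? m.get(k) : null),
        setItem: (k, v) => m.set(k, String(v))
      };
    })();

const KEY = 'esbProgress'; // hypothetical storage key

function loadProgress() {
  // Stored value is a JSON object: { taskId: 'GO' | 'NO-GO' }
  try { return JSON.parse(storage.getItem(KEY)) || {}; }
  catch { return {}; }
}

function setStatus(taskId, status) {
  const progress = loadProgress();
  progress[taskId] = status; // 'GO' or 'NO-GO'
  storage.setItem(KEY, JSON.stringify(progress));
  return progress;
}

function summary() {
  // Tally GO vs NO-GO across all tracked tasks.
  const counts = { GO: 0, 'NO-GO': 0 };
  for (const s of Object.values(loadProgress())) {
    counts[s] = (counts[s] || 0) + 1;
  }
  return counts;
}
```

Because the state is entirely client-side, there is no login, no database, and nothing for the server to do.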
I described the layout once. The AI built the first page. I reviewed it — does it do what I asked? A couple of small adjustments, then confirmed. The AI applied that same pattern to all remaining 30 task pages with the correct content for each task pulled from the source documents.
The content work was the most time-intensive part, but it was substantive work: reviewing accuracy, verifying performance measures against the source documents, making sure the Ranger Handbook references were correct. That's the work that should take time. The scaffolding work (creating the files, setting up page structure, writing meta tags, adding schema markup, updating sitemap.xml) was handled by the AI.
What AI Did That Used to Be Slow
Here's where I want to be specific, because "AI helped me build this" is a useless statement without context. Here's what that actually looked like:
Search engine visibility across 31 pages. I needed each task page to be findable when a soldier searches for it by name. That's a structured data problem: every page needs to tell search engines exactly what it is, what it covers, and how to categorize it. I didn't spec out the technical approach. I said: "I want soldiers to be able to find each task when they search for it." The AI figured out the schema structure, generated it for all 31 pages, and wired it up so new pages get it automatically. I didn't touch a schema file once.
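The post doesn't show the generated markup, but for a task page the JSON-LD plausibly looks something like the fragment below. The @type, name, and URL here are illustrative assumptions, not the site's actual output:

```json
{
  "@context": "https://schema.org",
  "@type": "HowTo",
  "name": "Request Medical Evacuation (9-Line MEDEVAC)",
  "description": "Performance measures and key steps for this ESB task.",
  "url": "https://esbguide.com/request-medical-evacuation"
}
```

Generating this per-page from a URL pattern is exactly the kind of mechanical, consistency-sensitive work the AI absorbs.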
A naming decision that would have been a nightmare to undo. Thirty pages in I realized: the URLs weren't descriptive. If someone searches "how to perform a 9-line MEDEVAC request," I want the URL to say that, not /esb4. I wanted to rename everything to match what soldiers would actually search for. Under the old model, renaming 30 pages meant touching 30 files plus every index, every redirect, every reference. A spreadsheet job that would have taken most of a day and introduced errors. I said: "rename all the task pages to descriptive slugs, make sure nothing breaks, nothing disappears from search." One instruction. Done.
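A rename like that only stays safe if every old URL permanently redirects to its new slug. In this template that's one .htaccess line per page; something like the following, where the slugs are illustrative, not the site's actual paths:

```apache
# Preserve indexed URLs after the rename: old slug -> descriptive slug
Redirect 301 /esb4 /request-medical-evacuation
```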
The glossary. My instruction was: "I need a glossary, too many acronyms." That's it. The AI extracted 108 terms from the source documents, organized them into categories (medical, tactical, equipment, administrative) added definitions, schema markup, a search filter, and category tabs. Vanilla JavaScript, no library, no dependency. I reviewed the output. It just works.
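The filtering behind a glossary like that fits in a few lines of dependency-free JavaScript. This is a sketch of the idea, not the site's actual code, and the field names are my own assumptions:

```javascript
// Filter glossary entries by free-text query and category tab.
// terms: [{ term, definition, category }], category 'all' means no tab filter.
function filterGlossary(terms, query, category = 'all') {
  const q = query.trim().toLowerCase();
  return terms.filter(t =>
    (category === 'all' || t.category === category) &&
    (q === '' ||
      t.term.toLowerCase().includes(q) ||
      t.definition.toLowerCase().includes(q)));
}
```

Wire that to an input's `input` event and a set of tab buttons and the whole feature works offline, with no library to load or break.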
DVIDS video integration. My instruction was: "we need videos, find resources that expand on the guidance in each section." I didn't know where those videos would come from. The AI spun up a web search agent, identified DVIDS (the Defense Visual Information Distribution Service) as the most authoritative public source for military task demonstration footage, found the relevant video series, matched videos to tasks, and embedded them across 22+ pages. I didn't find a single video. I didn't know DVIDS was the right source. I said we need resources, and the AI figured out what that meant and went and got them.
Mobile. I said: "I need to be able to open this on a small phone and read it easily when I'm hurrying up and waiting, sitting against a wall outside the armory for two hours before an evaluation." That was the brief. Not a pixel spec. Not a breakpoint list. A real scenario the AI could optimize for. It made every tap target large enough to hit with cold fingers, kept contrast high enough to read in direct sunlight, and made the layout collapse cleanly on small screens. I tested it on my phone. It worked the way I needed it to work.
The Pattern That Emerged
After doing this across multiple sites (CleanBrowsing, NOC.org, PerezBox, ESBGuide) the pattern is consistent enough to describe explicitly:
- Copy the template. Five minutes. Done.
- Set config and design direction. Colors, fonts, nav structure. One pass with the AI. Twenty minutes.
- Describe content structure once. The AI builds the first page, you review, adjust, confirm the pattern.
- Scale the pattern. The AI applies the confirmed pattern to every subsequent page. You review for accuracy, not structure.
- AI handles ancillary file maintenance. Sitemap, search index, ai-index.json, .htaccess redirects, updated as pages are added, never manually.
- You do the substantive work. Is the content accurate? Does the page do what it needs to do? That's the work that requires judgment and domain knowledge. That's your job.
What you're not doing: setting up a development environment, configuring a CMS, installing plugins, debugging plugin conflicts, managing database migrations, writing boilerplate, or maintaining cross-file consistency manually. None of the work that consumes build time without contributing to the product. For me, this is key: it is literally the thing I argued about for years. No one wants to be a webmaster; we just want to get our content on the web. Why is the simplest solution still so technically demanding?
Why Flat PHP and Not Something Modern
This is the question I get every time I describe this stack. The short version: it deploys everywhere, debugs easily, and the AI works in it without friction.
React, Next.js, or whatever the current recommendation is: they all come with build pipelines, package managers, component libraries, and deployment workflows. I don't need any of that, and I'd wager that many people putting content online don't either.
More importantly: the AI's ability to work in a codebase is directly proportional to how understandable that codebase is. A flat PHP file is completely self-contained. You can read it top to bottom and understand exactly what it does. A component tree in a modern JavaScript framework requires understanding the build system, the state management approach, the component hierarchy, and the framework conventions before you can make a change to a paragraph. Again, I don't want to be that immersed.
The simpler the codebase, the more leverage the AI gets per instruction. That's not a coincidence.
When something breaks on a flat PHP site, the debugging surface is the file. When something breaks in a React app, the debugging surface is the build configuration, the bundler version, the component, the state, the props chain, and the framework version. I've spent entire afternoons debugging issues that had nothing to do with my code.
ESBGuide serves static-style content to soldiers preparing for a physically and technically demanding evaluation. It needs to be fast, readable, and reliable on a mobile connection in the field. Flat PHP with Bootstrap CDN checks all three boxes. A JavaScript framework with a hydration step and a 200KB bundle does not.
The Checklist That Keeps It Clean
One thing the AI is particularly good at: enforcing consistency I'd otherwise be inconsistent about. Every time a new page is added to ESBGuide, a new article is published on PerezBox, or a new section is built on CleanBrowsing, the same checklist runs:
- Unique $page_title, $page_description, $canonical_url
- sitemap.xml updated with correct URL and lastmod date
- ai-index.json entry added
- Search index updated with title, description, keywords, path, category, date
- Schema markup verified: correct type, correct URL, correct content
- .htaccess redirect added if a URL changed
I have this checklist in a CLAUDE.md file in the project root. Every session, the AI reads it and applies it. I don't remind it. It doesn't forget. The goal is simple: never succumb to human error and forgetfulness. The rules are written once. They run every time.
That sounds like a small thing. It isn't. Anyone who has run a content site knows how quickly these files drift out of sync when you're moving fast. URLs appear in Google Search Console that redirect to 404s. Pages that should be indexed aren't. Schema markup that was accurate on launch gets stale as content changes. Maintaining that hygiene manually, at speed, is a real operational cost. The AI handles it automatically.
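That hygiene is also easy to verify mechanically. Here's a sketch of the kind of drift check the AI effectively performs each session, written as a plain JavaScript function; the function and its inputs are my own illustration, not part of the template:

```javascript
// Cross-check two URL lists (e.g. sitemap.xml vs ai-index.json) and
// report pages that appear in one but not the other.
function findDrift(sitemapUrls, aiIndexUrls) {
  const sitemap = new Set(sitemapUrls);
  const aiIndex = new Set(aiIndexUrls);
  return {
    missingFromAiIndex: [...sitemap].filter(u => !aiIndex.has(u)),
    missingFromSitemap: [...aiIndex].filter(u => !sitemap.has(u))
  };
}
```

When both lists agree, both result arrays are empty; anything else is exactly the kind of silent inconsistency that used to surface months later in Search Console.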
What This Unlocks
The template plus the AI workflow changes the economics of ideas. Not every idea becomes viable. Execution still matters, content still matters, the audience still has to exist. But the implementation cost is no longer the gating factor.
ESBGuide would not have been built under the old model. The audience is real. Soldiers preparing for a competitive evaluation. But the niche was too narrow to justify months of development investment. With the new model, I went from idea to live in 5 hours and could find out whether the content was useful before making any meaningful time investment. That's a fundamentally different risk profile.
The barrier used to be build time. Now the barrier is the idea itself. That's a better problem to have.