From create-react-app to create-ai-app: The New Default for AI Applications
It’s 2016. You want to build a React app.
You open a blank directory and start configuring Webpack. Then Babel. Then ESLint. Then a folder structure. Then build scripts. Then hot module replacement. Then environment variables. Then production builds.
Three days later, you have a working dev environment. Zero components written.
Then Dan Abramov releases create-react-app. One command: npx create-react-app my-app. Sensible defaults. Proven structure. Zero config to start. Eject when you outgrow it.
React development was never the same. Not because CRA did anything magical — but because it established a default. A starting point everyone agreed on. A floor, not a ceiling.
Fast forward to 2026. We’re in the exact same moment with AI applications. And we’re making the exact same mistakes.
The Problem CRA Solved (And Why It Matters Now)
Before create-react-app, every React project was a snowflake.
Team A used Webpack 1 with a custom config. Team B used Webpack 2 with a different config. Team C hand-rolled their own build pipeline. Everyone’s package.json looked different. Onboarding a new developer meant learning that team’s specific incantation of build tools.
CRA solved three problems simultaneously:
- Boilerplate elimination. Nobody should spend days configuring a build system to render a component.
- Standards creation. When everyone starts from the same template, folder structures become predictable. Tutorials transfer across projects. Stack Overflow answers actually apply to your setup.
- Ecosystem acceleration. Libraries could assume a standard environment. Testing tools could assume a standard build process. The entire React ecosystem sped up because there was a shared baseline.
These aren’t frontend problems. They’re complexity management problems. And AI applications have all three, amplified.
The AI Boilerplate Problem in 2026
Here’s what starting a new AI application looks like today:
Week 1 — Backend infrastructure:
- FastAPI project structure with async support
- Database setup (PostgreSQL, migrations with Alembic)
- JWT authentication (login, register, refresh tokens, session management)
- Redis for caching and rate limiting
- Environment configuration and secrets management
Week 2 — AI integration:
- AI framework setup (Pydantic AI, LangChain, LangGraph, or something else)
- LLM provider integration (OpenAI, Anthropic, OpenRouter)
- WebSocket streaming for real-time token delivery
- Conversation persistence (chat history, session management)
- Tool registration and dependency injection
Week 3 — Frontend and deployment:
- Next.js frontend with streaming chat UI
- WebSocket client with reconnection logic
- Docker multi-stage builds
- Docker Compose for local development
- CI/CD pipeline (GitHub Actions or GitLab CI)
- Observability (Logfire, Sentry, Prometheus)
Three weeks. Zero business logic. Zero differentiation. Just plumbing.
And here’s the part that should bother you: every team doing this is making different choices about the same solved problems. Different auth implementations. Different WebSocket patterns. Different Docker configurations. Different folder structures.
Sound familiar?
The Parallel Is Exact
Let me map it explicitly:
| Dimension | CRA (2016) | AI Template (2026) |
|---|---|---|
| The pain | Webpack + Babel + ESLint config hell | FastAPI + Auth + WebSocket + Docker + AI framework config hell |
| Time wasted | Days before first component | Weeks before first agent |
| The fragmentation | Every team’s build setup was different | Every team’s AI stack is different |
| The command | npx create-react-app my-app | fastapi-fullstack create my_app --preset ai-agent |
| Sensible defaults | Webpack config, ESLint rules, folder structure | PostgreSQL, JWT, Redis, Pydantic AI, Docker, CI/CD |
| Escape hatch | eject when you outgrow defaults | 75+ options when you need customization |
| Result | Working React app in seconds | Working AI application in minutes |
The structural similarity isn’t a coincidence. It’s the same pattern: a technology category matures past the “everyone hand-rolls everything” phase and enters the “we need a standard starting point” phase.
React was there in 2016. AI applications are there now.
What “Sensible Defaults” Means for AI Apps
CRA’s genius was picking the right defaults. Not the most powerful options — the most sensible ones. You didn’t get every possible Webpack plugin. You got the configuration that worked for 90% of projects.
For AI applications, sensible defaults look like this:
3 presets that cover 90% of use cases:
| Preset | What you get | Who it’s for |
|---|---|---|
| Minimal | FastAPI, no database, no auth, no Docker | Quick prototypes, learning, experiments |
| AI Agent | PostgreSQL, JWT, Redis, Pydantic AI, WebSocket streaming, conversation persistence, Docker, GitHub Actions | AI chatbot and agent applications |
| Production | Everything in AI Agent + caching, rate limiting, Sentry, Prometheus, Kubernetes, admin panel | Enterprise-grade deployments |
Pick a preset. Get a running application. Customize later.
The ai-agent preset is the equivalent of CRA’s default config. It gives you:
- A FastAPI backend with async PostgreSQL and Alembic migrations
- JWT auth with refresh tokens and session tracking
- A Pydantic AI agent with tool support and dependency injection
- WebSocket streaming with token-by-token delivery
- Conversation persistence across sessions
- A Next.js 15 + React 19 frontend with a streaming chat UI
- Docker Compose with PostgreSQL, Redis, and multi-stage builds
- GitHub Actions CI/CD
One command. All of it. Tested, configured, and working together.
Beyond Defaults: The 75+ Options
CRA had eject. Our template has 75+ configuration options.
But here’s the key design decision: you don’t see those options unless you want them. The presets abstract the complexity away. When you need to customize — switch from PostgreSQL to MongoDB, add Celery for background tasks, enable Kubernetes manifests, swap Pydantic AI for LangChain — the options are there.
This is the progressive disclosure pattern that CRA popularized. Simple by default. Powerful when needed. Never overwhelming at first contact.
The web configurator takes this further. A 9-step visual wizard walks you through every option with real-time validation and dependency auto-fixing. Select PostgreSQL and the ORM options appear. Enable caching and Redis silently activates. You never see an invalid configuration.
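Dependency auto-fixing of this kind can be modeled as computing a closure over "option X requires option Y" rules. The sketch below is a hypothetical illustration of the idea, not the configurator's actual implementation; the rule set is invented.

```python
# Hypothetical sketch of dependency auto-fixing: each rule says
# "if this option is enabled, these options must be enabled too".
RULES = {
    "caching": {"redis"},        # enabling caching silently activates Redis
    "rate_limiting": {"redis"},
    "postgresql": {"alembic"},   # choosing a database implies migrations
}

def autofix(selected: set[str]) -> set[str]:
    """Return the closure of `selected` under the dependency rules."""
    fixed = set(selected)
    changed = True
    while changed:
        changed = False
        for option, deps in RULES.items():
            if option in fixed and not deps <= fixed:
                fixed |= deps  # pull in missing dependencies
                changed = True
    return fixed

print(autofix({"caching"}))  # Redis is added automatically
```

Because the closure is computed before anything is generated, an invalid combination of options simply cannot reach the template engine.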
It’s what CRA would have looked like if it launched with a GUI.
Standards Beat Snowflakes
The most underrated thing CRA did wasn’t eliminating boilerplate. It was creating a standard.
When every React project starts from CRA, something shifts:
- Tutorials transfer. A tutorial written for CRA works for your project.
- Libraries assume a baseline. Testing tools, CSS frameworks, and state management libraries all target the CRA environment.
- Onboarding accelerates. A developer who’s seen one CRA project can navigate any CRA project.
- Best practices emerge. When the community shares a starting point, patterns converge instead of diverge.
AI applications desperately need this convergence.
Right now, if I join your team and look at your AI project, I have no idea where to find:
- The agent definition
- The tool registrations
- The conversation persistence logic
- The WebSocket streaming handler
- The auth middleware
With a standard template, I know. Everyone knows. Because everyone started from the same place.
The template generates a consistent project structure:
The template generates a consistent project structure:

    my_app/
        backend/
            app/
                api/            # FastAPI routes
                ai/             # Agent definition, tools, prompts
                auth/           # JWT, sessions, middleware
                models/         # SQLAlchemy models
                services/       # Business logic
                websockets/     # Streaming handlers
        frontend/
            src/
                components/     # React components
                hooks/          # WebSocket, auth hooks
                stores/         # Zustand state
        docker/                 # Compose, Dockerfiles
        .github/                # CI/CD workflows

This isn’t just a folder structure. It’s a shared vocabulary for AI application architecture.
The Ecosystem Effect
Here’s what happens after a standard template exists.
In React-land, after CRA:
- Testing libraries (Jest, React Testing Library) targeted CRA’s configuration
- CSS solutions (styled-components, CSS modules) assumed CRA’s build process
- State management (Redux) provided CRA-specific setup guides
- Deployment platforms (Vercel, Netlify) offered one-click CRA deployment
The same flywheel is starting for AI applications:
- pydantic-deepagents — our modular agent framework — provides a template integration that drops in advanced patterns (sub-agents, parallel execution, sandboxed code execution) as a configuration option
- Logfire observability integrates out of the box because the template provides a standard instrumentation surface
- Testing patterns are consistent because the project structure is consistent
When AI frameworks, observability tools, and deployment platforms can assume a standard project structure, they can provide deeper, more useful integrations. The ecosystem accelerates. Everyone benefits.
The Bold Claim
Every new AI application should start from a template. Not from scratch.
Not because templates are perfect. Not because one size fits all. But because the alternative — every team independently solving the same infrastructure problems with different implementations — is a waste of collective engineering time.
CRA didn’t limit React development. It unleashed it. By eliminating the boilerplate barrier, it let more developers build more ambitious applications faster. The framework ecosystem exploded after CRA made it easy to start.
The same will happen with AI applications. When starting a new AI project takes minutes instead of weeks, more teams will experiment. More products will ship. The bar for “minimum viable AI application” will rise from “I got the WebSocket streaming working” to “I built something users actually want.”
That’s the shift. From infrastructure-first to product-first. From weeks of plumbing to minutes of configuration.
The “create-react-app moment” for AI apps isn’t coming. It’s already here.
Key Takeaways
- CRA solved three problems for React: boilerplate, standards, ecosystem. AI applications face the same three problems at larger scale — more moving parts, more integration points, more infrastructure to configure.
- Sensible defaults > maximum flexibility. Three presets cover 90% of use cases. 75+ options handle the other 10%. Progressive disclosure keeps the experience simple for beginners and powerful for experts.
- Standards create ecosystem effects. When everyone starts from the same template, tutorials transfer, libraries integrate deeper, and onboarding accelerates.
- The shift is infrastructure-first to product-first. When starting a new AI app takes minutes, teams focus on business logic and user experience instead of plumbing.
- This isn’t limiting — it’s liberating. CRA didn’t constrain React. A standard AI template won’t constrain AI development. It will accelerate it.
Try it: Web configurator — or install via CLI: pip install fastapi-fullstack