How to Build a Full-Stack AI App with FastAPI and Next.js
The Problem
Building a production AI app from scratch means solving the same problems every time: API structure, authentication, database setup, WebSocket streaming, frontend scaffolding, Docker configuration, CI/CD pipelines. We’ve done this 30+ times for clients and ourselves. It takes 2-4 weeks each time.
The Full-Stack AI Agent Template reduces that to 30 minutes.
TL;DR
- One CLI command generates a full-stack AI app with FastAPI backend, Next.js frontend, auth, database, WebSocket streaming, Docker, and CI/CD - all configured and tested.
- 75+ configuration options across 9 steps: database, auth, AI framework, infrastructure, integrations, dev tools, and frontend.
- 5 AI frameworks supported out of the box: Pydantic AI, LangChain, LangGraph, CrewAI, and DeepAgents.
- 3 presets for common setups - minimal (prototyping), production (enterprise), and ai-agent (chat apps with streaming).
- Web configurator available at oss.vstorm.co for visual project configuration with ZIP download.
What You Get
A single pip install gives you a complete project with:
- FastAPI backend with async API, structured logging, and error handling
- Next.js 15 frontend with App Router, TypeScript, and Tailwind CSS
- AI agent integration — choose from Pydantic AI, LangChain, LangGraph, CrewAI, or DeepAgents
- WebSocket streaming for real-time AI responses
- Authentication — JWT, API keys, or Google OAuth
- Database — PostgreSQL, MongoDB, or SQLite
- Docker Compose for development and production
- CI/CD — GitHub Actions or GitLab CI
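To make the JWT option concrete: a token is just a signed, base64url-encoded pair of JSON documents. The stdlib-only sketch below shows the HS256 mechanics; it is an illustration of the scheme, not the template's actual auth code, which would typically use a library such as PyJWT.

```python
# Illustrative HS256 JWT signing/verification using only the stdlib.
# In a real project, prefer a maintained library (e.g. PyJWT).
import base64
import hashlib
import hmac
import json


def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_jwt(claims: dict, secret: str) -> str:
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"


def verify_jwt(token: str, secret: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    # re-pad before decoding the payload
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

A valid token round-trips its claims; tampering with the secret or payload fails verification.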
Quick Start
1. Install the Generator
```bash
pip install fastapi-fullstack
```

2. Generate Your Project

```bash
fastapi-fullstack create my-ai-app
```

The interactive CLI walks you through 75+ configuration options. Or use a preset:
```bash
# Minimal — no database, no auth, just FastAPI
fastapi-fullstack create my-app --preset minimal

# Production — PostgreSQL, JWT, Docker, Kubernetes, CI/CD
fastapi-fullstack create my-app --preset production

# AI Agent — Pydantic AI, WebSocket streaming, conversation history
fastapi-fullstack create my-app --preset ai-agent
```

3. Start Development

```bash
cd my-ai-app
docker compose up -d
```

Your app is running at http://localhost:3000 with hot reload on both backend and frontend.
Project Structure
The generated project follows a clean, scalable architecture:
```
my-ai-app/
├── backend/
│   ├── app/
│   │   ├── api/          # API routes
│   │   ├── agents/       # AI agent definitions
│   │   ├── core/         # Config, security, dependencies
│   │   ├── models/       # Database models
│   │   ├── schemas/      # Pydantic schemas
│   │   └── services/     # Business logic
│   ├── tests/
│   └── pyproject.toml
├── frontend/
│   ├── src/
│   │   ├── app/          # Next.js App Router pages
│   │   ├── components/   # React components
│   │   ├── hooks/        # Custom hooks (WebSocket, auth)
│   │   └── lib/          # API client, utilities
│   └── package.json
├── docker-compose.yml
├── docker-compose.prod.yml
└── .env.example
```

AI Agent Setup
Here’s what the AI agent integration looks like with Pydantic AI:
```python
from pydantic_ai import Agent, RunContext

from app.core.deps import AgentDeps

agent = Agent(
    "openai:gpt-4o",
    deps_type=AgentDeps,
    system_prompt="You are a helpful AI assistant.",
)


@agent.tool
async def search_knowledge_base(
    ctx: RunContext[AgentDeps], query: str
) -> str:
    """Search the knowledge base for relevant information."""
    results = await ctx.deps.search_service.search(query)
    return "\n".join(r.content for r in results)
```

The WebSocket endpoint handles streaming automatically:
```python
@router.websocket("/ws/chat")
async def chat_websocket(
    websocket: WebSocket,
    current_user: User = Depends(get_ws_user),
    db: AsyncSession = Depends(get_db),  # session dependency from app.core.deps
):
    await websocket.accept()
    message = await websocket.receive_text()
    async with agent.run_stream(
        message, deps=AgentDeps(user=current_user, db=db)
    ) as response:
        async for chunk in response.stream_text():
            await websocket.send_json({"type": "chunk", "content": chunk})
```

Web Configurator
Don’t like CLIs? The web configurator at oss.vstorm.co provides a 9-step visual wizard:
- Project basics — name, description
- AI framework — Pydantic AI, LangChain, LangGraph, CrewAI, DeepAgents
- LLM provider — OpenAI, Anthropic, OpenRouter
- Database — PostgreSQL, MongoDB, SQLite, or none
- Authentication — JWT, API keys, Google OAuth, or none
- Infrastructure — Docker, Kubernetes, reverse proxy
- Observability — Logfire, Sentry, Prometheus
- CI/CD — GitHub Actions, GitLab CI
- Review & download — get your configured project as a ZIP
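Whichever path you take, the choices ultimately land in the generated backend as environment-driven settings. As a rough sketch (the field and variable names here are illustrative, not the template's actual config schema):

```python
# Hypothetical sketch of configurator choices surfacing as env-driven
# settings. Field names and defaults are assumptions for illustration.
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    database_url: str
    auth_mode: str     # e.g. "jwt", "api_key", "oauth_google", "none"
    llm_provider: str  # e.g. "openai", "anthropic", "openrouter"

    @classmethod
    def from_env(cls) -> "Settings":
        return cls(
            database_url=os.environ.get("DATABASE_URL", "sqlite:///./app.db"),
            auth_mode=os.environ.get("AUTH_MODE", "jwt"),
            llm_provider=os.environ.get("LLM_PROVIDER", "openai"),
        )
```

This is why the generated project ships a .env.example: populate it, and the backend picks the values up at startup.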
Production Deployment
The generated project includes everything for production:
```yaml
services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile.prod
    environment:
      - DATABASE_URL=postgresql://...
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    restart: always

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile.prod
    restart: always

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    restart: always
```

Kubernetes manifests are also generated if you selected that option.
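The nginx service needs a proxy config to route traffic. A minimal sketch (upstream ports and paths are assumptions, not the template's shipped config) that sends API traffic to the backend, upgrades WebSocket connections for the chat endpoint, and serves everything else from the Next.js frontend:

```nginx
server {
    listen 80;

    # REST API traffic to the FastAPI backend
    location /api/ {
        proxy_pass http://backend:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # WebSocket streaming needs explicit upgrade headers
    location /ws/ {
        proxy_pass http://backend:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # Everything else to the Next.js frontend
    location / {
        proxy_pass http://frontend:3000;
        proxy_set_header Host $host;
    }
}
```

For production TLS on port 443, you would add certificates (e.g. via certbot) on top of this.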
What’s Next
Once your project is running, you can:
- Add custom tools to your AI agent
- Implement RAG with your own data
- Add more API endpoints
- Customize the frontend UI
- Set up monitoring dashboards
The template gives you the foundation. You focus on what makes your app unique.
Get started: template.vstorm.co/configurator/