Frequently Asked Questions
Everything you need to know about our tools and projects.
Full-Stack AI Agent Template
What is the Full-Stack AI Agent Template?
An open-source project generator that creates production-ready AI/LLM applications with a FastAPI backend and Next.js frontend. A single CLI command (or the web configurator) generates a complete project with your choice of AI framework, database, authentication, and 75+ configuration options.
Which AI framework should I choose?
Choose Pydantic AI for type-safe, production-grade agents with Logfire observability. Pick LangChain for the largest ecosystem of integrations. Use LangGraph for complex multi-step workflows with state management. Try CrewAI for multi-agent collaboration. Select DeepAgents for autonomous agents with planning and human-in-the-loop approval.
Can I switch AI frameworks after generating a project?
Yes. Regenerate the project with a different --ai-framework flag. Your custom code outside the generated agent module is preserved if you use version control. The web configurator also lets you export your configuration as JSON and re-import it later.
Is the template free to use?
Yes, completely free. The template is MIT licensed — use it for personal and commercial projects without restrictions. No premium tiers, no usage limits, no sign-up required.
Which database should I use?
PostgreSQL is recommended for production — it supports the admin panel, conversation persistence, and full SQLAlchemy/SQLModel ORM features. Use MongoDB for document-oriented workloads. SQLite is great for development and small deployments with zero configuration. Choose 'None' for stateless API-only services.
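For a sense of how the options differ in practice, here are illustrative connection URLs for each backend (the drivers and exact .env keys are assumptions, not necessarily the template's):

```python
# Illustrative connection URLs for each database option; the async
# drivers (asyncpg, aiosqlite) are typical SQLAlchemy choices, but the
# template's actual .env keys and drivers may differ.
DATABASE_URLS = {
    "postgresql": "postgresql+asyncpg://app:secret@localhost:5432/app",
    "mongodb": "mongodb://localhost:27017/app",
    "sqlite": "sqlite+aiosqlite:///./app.db",  # zero-config, file-based
}
```

SQLite's URL points at a local file, which is why it needs no setup; PostgreSQL and MongoDB need a running server.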
How does WebSocket streaming work?
The template includes a pre-built WebSocket endpoint that streams AI agent responses token-by-token to the frontend. It supports authenticated connections, tool call visualization, and automatic conversation persistence. The Next.js frontend includes a chat interface that renders the streamed responses in real-time.
How does the web configurator work?
The configurator is a 9-step wizard that runs entirely in your browser — no server required. It uses Nunjucks (a Jinja2-compatible JavaScript engine) to render 246 project templates client-side, then packages them into a ZIP with JSZip. The entire process takes 1-2 seconds. You can also export your configuration as a CLI command or JSON file.
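The browser does this with Nunjucks and JSZip; since Nunjucks is Jinja2-compatible, the same render-then-zip idea can be sketched in Python (file paths and template content here are illustrative):

```python
import io
import zipfile
from jinja2 import Template

# Render one template with the chosen configuration options...
readme = Template("# {{ project_name }}\nFramework: {{ framework }}\n").render(
    project_name="my-agent", framework="pydantic-ai"
)

# ...then package the rendered files into an in-memory ZIP,
# just as JSZip does client-side.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("my-agent/README.md", readme)
```

The configurator repeats this render step for each of the 246 templates before producing the download.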
How do I deploy to production?
The template includes production Docker Compose files with health checks and restart policies. Copy .env.example to .env.prod, configure your credentials and database URL, then run: docker compose -f docker-compose.prod.yml up -d --build. An optional Traefik or Nginx reverse proxy handles TLS certificates automatically.
What Python versions are supported?
Python 3.11, 3.12, and 3.13. You select the version during project generation. All AI frameworks and dependencies are tested against each supported version.
Can I modify the generated project?
Absolutely. The generated project is regular Python and TypeScript code — no lock-in, no proprietary runtime. It includes CLAUDE.md and AGENTS.md files so AI coding assistants like Claude Code, Cursor, or Copilot understand the project structure from day one.
What observability tools are included?
Three options: Logfire (by Pydantic) auto-instruments FastAPI, database queries, Redis, Celery, and HTTPX calls — ideal for Pydantic AI agents. Sentry provides error tracking and performance monitoring. Prometheus collects metrics for Grafana dashboards. Enable any combination during generation.
Can I use multiple LLM providers?
The template configures one primary provider (OpenAI, Anthropic, or OpenRouter). With OpenRouter you get access to 200+ models from multiple providers through a single API key. You can also add additional providers manually after generation — the generated code is standard Python with no vendor lock-in.
Pydantic DeepAgents
What is pydantic-deep?
A Python framework for building autonomous AI agents inspired by Claude Code's architecture. It implements the deep agent pattern — agents that can plan, read/write files, delegate to subagents, and maintain persistent memory across sessions.
How does it differ from LangChain or CrewAI?
DeepAgents is type-safe by default (Pydantic models, not dicts), modular (compose tools, not chains), and observable (Logfire integration). It focuses on the deep agent pattern — long-running agents that plan and execute autonomously — rather than simple chain-of-thought or multi-agent chat.
Which LLM providers are supported?
Any provider supported by Pydantic AI — OpenAI, Anthropic, Google Gemini, Groq, Mistral, and any OpenAI-compatible API (like Ollama for local models). Swap providers with one line of configuration.
Can I use my own tools?
Yes. Define tools using the @tool decorator from Pydantic AI. Tools are type-checked, support async execution, and integrate with the permission system. You can also use pre-built toolsets (filesystem, database, console) from companion libraries.
Is it production-ready?
Yes. DeepAgents powers 30+ production deployments at Vstorm. It includes structured logging via Logfire, error recovery, token usage tracking, and has been battle-tested in real-world AI agent applications.
Logfire Assistant
What is Logfire Assistant?
A Chrome extension with a FastAPI backend that adds an AI-powered sidebar to the Pydantic Logfire dashboard. Ask questions about your traces in natural language, get SQL queries generated automatically, and see results as tables or charts — all without leaving your browser.
Do I need a Logfire account?
Yes. Logfire Assistant queries your Logfire data via the Logfire API. You need a Logfire account with at least one project that has trace data. The assistant uses your Logfire read token to access your data securely.
Which LLM providers work?
OpenAI, Anthropic, Google Gemini, and any OpenAI-compatible API. Configure your preferred provider in the backend settings. The AI generates Logfire-specific SQL queries optimized for the traces schema.
Can I create custom prompts?
Yes. Create reusable prompt templates with slash commands like /errors, /costs, or /slow. Templates can include variables and are saved per-project. Share them across your team for consistent debugging workflows.
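A hypothetical sketch of how a slash command with a variable could expand into a full prompt; the assistant's real template syntax and storage may differ:

```python
# Hypothetical slash-command templates; the {limit} placeholder is a
# variable filled in when the command is invoked.
TEMPLATES = {
    "/errors": "List the most recent {limit} exceptions with their traces.",
    "/slow": "Show the {limit} slowest requests in the last hour.",
}

def expand(command: str, **variables: object) -> str:
    """Look up a slash command and substitute its variables."""
    return TEMPLATES[command].format(**variables)
```

So typing /slow with limit=5 would send the full natural-language question to the assistant on your behalf.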
Is my data stored?
Conversations are stored in your own PostgreSQL database that you host. No data is sent to third parties beyond the LLM API calls. The Chrome extension communicates only with your self-hosted FastAPI backend.
Ready to build your first production AI agent?
Open-source tools, battle-tested patterns, zero boilerplate. Configure your stack and ship in minutes — not months.