This document is the comprehensive single source of truth for the Prompt Forge Studio project. It covers the core mission, the V1 consumer application, the gamification engine, the UI/UX structure, the V2 Prompt-as-a-Service (PaaS) backend engine, the database schemas, and the technology stack.
Prompt Forge Studio is an advanced development environment (ADE) for prompt engineering. It acts as middleware between human intent and large language model (LLM) execution.
- The Problem: Raw prompts are often ambiguous, missing constraints, or unstructured, leading to suboptimal LLM performance.
- The Solution: The platform uses heuristics and semantic analysis to detect weaknesses in user inputs, automatically restructuring them into high-fidelity, production-grade instructions that improve AI consistency and lower latency.
- Target Audience: Prompt Engineers, AI Developers, and Product Managers.
- Frontend Ecosystem: Next.js 15 (App Router), React 19, TypeScript
- Styling & UI: Tailwind CSS v4, Framer Motion (Animations), Class Variance Authority (CVA), Lucide Icons
- Authentication: Clerk (V1 App) + Supabase Auth
- Database: Supabase (PostgreSQL)
- Caching Layer: Upstash Redis (Serverless exact-match caching)
- AI Engine Models: Gemini (Google), Llama/Nemotron (NVIDIA), Llama/Mixtral (Groq)
- Validation: Zod
- Colors/Theme: Deep dark mode (`#050508`) with vibrant purple accents (`#8b5cf6`).
- Typography: Inter (for UI clarity) and Poppins (for impactful headings).
- Visual Elements: Glassmorphism (backdrop-blur panels), subtle gradients, floating elements, shadow-glow.
- Standard Layout (Global Wrapper): Used for marketing, legal, settings, profile. Features a fixed, frosted navbar, centered content containers, and standard footer.
- Studio/App Layout (`/studio`, `/dashboard`): Specialized for heavy web-app workflows. Features a persistent left sidebar (navigation/history), a full-viewport-height workspace, and hides the footer to maximize real estate.
- Admin Layout (`/admin`): Enforces role-based access control (RBAC) and adds system-monitoring visualizations.
- Public/Marketing:
  - `/` (Home): Hero section, interactive demo components, features grid, pricing tease, FAQ.
  - `/about`: Mission statement and team (Anil Suthar).
  - `/features`: Mental models detailing the Understand, Build, and Compete phases.
  - `/pricing`: Subscription plans (Hobbyist vs. Pro).
- User Hub:
  - `/dashboard`: Post-login home. Shows KPI stats (tokens used, cost), subscription tier, and a recent-project grid.
  - `/profile`: User profile management and badge showcase.
- The Core Studio Engine (`/studio`): The primary IDE workspace with a split-panel design. Left: raw input and granular controls. Right: live output, diffusion view, and audit critique.
- Gamification Playground (`/playground`): Interactive educational arena. Includes "Fixer Mode" (debug bad prompts), "Builder Mode" (construct via templates), and "Battle Mode" (predict AI outputs).
- Admin Control Panel (`/admin`): System health metrics, user management, contact form inbox (with Resend email integration), and global broadcasting.
- Cognitive Depth Control: Users adjust the LLM's query expansion level (Short, Medium, Detailed, Granular).
- Real-time Output Status: Animated loaders that show what stage of injection the prompt is undergoing (Goal -> Context -> Constraints).
- Version History Diffing: Users can view side-by-side diffs (A/B testing) of how a prompt was changed over time.
- AI Prompt Auditor: A pre-execution critique engine that reads the user's prompt and suggests security/clarity improvements before any tokens are spent.
- Subscription Gating: Pro users (authenticated via Clerk/Supabase profiles) get access to deeper analysis and unlimited generations.
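As a sketch of how this gating might look in code: `subscription_tier` comes from the `profiles` schema described below, while the tier values, feature names, and policy function are illustrative assumptions.

```typescript
// A minimal sketch of tier gating, assuming a `profiles` row shape with
// a `subscription_tier` column as described in the database schema section.
type Profile = { subscription_tier: "hobbyist" | "pro" };

// Hypothetical feature identifiers for illustration only.
type Feature = "deep_analysis" | "unlimited_generations" | "basic_refine";

// Assumed policy: Pro unlocks everything; Hobbyist gets basic refinement only.
function canAccess(profile: Profile, feature: Feature): boolean {
  if (profile.subscription_tier === "pro") return true;
  return feature === "basic_refine";
}

console.log(canAccess({ subscription_tier: "pro" }, "deep_analysis"));      // true
console.log(canAccess({ subscription_tier: "hobbyist" }, "deep_analysis")); // false
```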
- XP & Leveling: Users earn points for using the app and completing challenges.
- Badges System (15+ Badges):
- Common: Prompt Rookie, Curious Mind.
- Skilled: Constraint Master, Builder Apprentice.
- Advanced: Prompt Surgeon, Battle Commander.
- Expert: Master Fixer, Oracle.
- Legendary: Legend of PromptForge.
(Added via the 1-Day Minimal V2 Implementation, replacing more complex Git-style architectures.)
- Programmatic API Execution: A robust `/api/v1/execute` REST endpoint designed for machine-to-machine calls.
- Linear Versioning: A straightforward, sequential prompt versioning system (no complex branching).
- Cascading Model Router (`lib/router.ts`):
  - Dynamically selects the AI model via the modular `lib/aiprovider` system. Supports Google Gemini, NVIDIA, and Groq.
  - Heuristics: if the prompt length exceeds 4,000 characters OR the prompt contains reasoning keywords (e.g., "step-by-step"), it routes to higher-tier models (e.g., Gemini Pro).
  - Otherwise, it routes to Gemini (`gemini-2.5-flash` by default; `gemini-1.5-pro`, etc.) for high-throughput, low-latency execution.
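The heuristic above can be sketched as a small selector function. The 4,000-character threshold, the "step-by-step" keyword, and the model names come from this document; the function shape and the exact keyword list are assumptions.

```typescript
// Sketch of the cascading heuristic: long or reasoning-heavy prompts
// go to a higher-tier model, everything else to the fast default.
const REASONING_KEYWORDS = ["step-by-step"]; // illustrative; extend as needed

function selectModel(prompt: string): string {
  const needsReasoning =
    prompt.length > 4000 ||
    REASONING_KEYWORDS.some((kw) => prompt.toLowerCase().includes(kw));
  return needsReasoning ? "gemini-1.5-pro" : "gemini-2.5-flash";
}

console.log(selectModel("Summarize this tweet."));               // gemini-2.5-flash
console.log(selectModel("Please reason step-by-step about it.")); // gemini-1.5-pro
```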
- Exact-Match Caching (`lib/cache.ts`):
  - Powered by Upstash Redis.
  - Generates an MD5 hash of `version_id` + sorted variables.
  - Bypasses the LLM entirely on a hash hit, dropping latency to <50 ms.
- Asynchronous Telemetry: Non-blocking writes to `v2_execution_logs` recording the version ID, latency, model used, and cache status.
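The cache-key recipe can be sketched with Node's built-in `crypto` module. The recipe itself (version ID plus alphabetically sorted variables, hashed with MD5) follows the description above; the function name and exact serialization are assumptions.

```typescript
import { createHash } from "node:crypto";

// Sketch of the exact-match cache key: version_id plus alphabetically
// sorted variables, hashed with Node's standard crypto MD5.
function cacheKey(versionId: string, variables: Record<string, string>): string {
  const sorted = Object.keys(variables)
    .sort()
    .map((k) => [k, variables[k]]);
  return createHash("md5")
    .update(versionId + JSON.stringify(sorted))
    .digest("hex");
}

// Identical inputs in any key order produce the same key (a cache hit).
const a = cacheKey("v1", { name: "Alice", tone: "formal" });
const b = cacheKey("v1", { tone: "formal", name: "Alice" });
console.log(a === b); // true
```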
Prompt Forge uses two distinct schema approaches harmonized in the same database: the `public` schema contains the tables for the V1 web IDE, while the `v2_`-prefixed tables power the headless PaaS API engine.
- `profiles`: Maps to the auth provider. Stores `subscription_tier`, `role`, `avatar_url`.
- `prompts`: Main container for web IDE prompts. (Columns: `original_prompt`, `refined_prompt`, `intent`, `detail_level`.)
- `prompt_versions`: The version history for web IDE edits.
- `prompt_analytics` & `prompt_executions`: Telemetry and latency metrics for web queries.
- `badges`: Definitions of all gamification badges (rarity, unlock conditions).
- `user_badges`: Join table tracking which users have earned which badges.
- `experiments` & `experiment_variants`: Used for the A/B prompt testing arena.
- `notifications`: In-app user notifications.
- `admin_audit_logs`: Tracks admin panel actions.
- `contact_messages`: Inbox for the `/contact` route.
These are intentionally isolated from the Web Application tables for clean separation of concerns:
- `v2_prompts`: Lightweight container for API prompt definitions (`name`, `description`).
- `v2_prompt_versions`: The linear execution templates used by the API (`version_tag`, `system_prompt`, `template`). Contains a unique index on `(prompt_id, version_tag)`.
- `v2_execution_logs`: The programmatic audit trail written asynchronously on every API hit (`latency_ms`, `model_used`, `cached_hit`).
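For illustration, the three tables might map to TypeScript row types like these. The column names follow the schema above; the `id` fields and all concrete types are assumptions.

```typescript
// Hypothetical row types mirroring the v2_ tables described above.
interface V2Prompt {
  id: string; // assumed UUID primary key
  name: string;
  description: string;
}

interface V2PromptVersion {
  id: string;
  prompt_id: string; // FK to v2_prompts; unique together with version_tag
  version_tag: string;
  system_prompt: string;
  template: string;
}

interface V2ExecutionLog {
  version_id: string;
  latency_ms: number;
  model_used: string;
  cached_hit: boolean;
}

// Example telemetry row as it might be written after an execution.
const log: V2ExecutionLog = {
  version_id: "8f14e45f-ceea-4f9a-9c61-7a2b3c4d5e6f",
  latency_ms: 42,
  model_used: "gemini-2.5-flash",
  cached_hit: true,
};
console.log(log.model_used);
```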
When a client application queries the V2 backend to execute a prompt dynamically:
1. Request (`[POST] /api/v1/execute`): The client sends a JSON payload: `{ "version_id": "UUID", "variables": { "name": "Alice" } }`.
2. Zod Parsing: The payload is validated.
3. DB Fetch (Admin Role): `v2_prompt_versions` is queried via `lib/supabase.ts` (using the Service Role to bypass row-level security for programmatic server-side access).
4. Cache Hash Generation: Variables are alphabetically sorted, stringified with the version ID, and hashed via standard Node.js `crypto` MD5.
5. Redis Interception (`lib/cache.ts`): Checks Upstash Redis for the hash. On a cache hit, the cached result is returned immediately.
6. Router Evaluation (`lib/router.ts`): On a cache miss, the system instructions and variables are examined, the variables are injected into the prompt, and the heuristic model selector chooses between the Pro and Flash tiers.
7. LLM Execution: The Google Generative AI SDK calls the selected endpoint.
8. Telemetry Dispatch: Asynchronously (without `await`), a log is written to `v2_execution_logs` and Redis is updated.
9. Response Delivery: JSON is returned to the client with the raw output and performance metadata.
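The flow above can be condensed into an in-memory sketch, with a `Map` standing in for Upstash Redis and a stub function standing in for the Gemini SDK. All names and the `{{variable}}` template syntax are illustrative assumptions, and telemetry is written synchronously here for simplicity.

```typescript
type ExecResult = { output: string; cached: boolean; model: string };

const cache = new Map<string, string>();                 // stand-in for Redis
const logs: Array<{ model: string; cached: boolean }> = []; // stand-in for v2_execution_logs

// Stub for the LLM call; a real implementation would invoke the SDK.
function fakeLLM(model: string, prompt: string): string {
  return `[${model}] ${prompt}`;
}

function execute(versionId: string, template: string, variables: Record<string, string>): ExecResult {
  // 1. Inject variables into the version template (assumed {{name}} syntax).
  const prompt = template.replace(/\{\{(\w+)\}\}/g, (_, k) => variables[k] ?? "");
  // 2. Exact-match cache key: version id + sorted variables.
  const key = versionId + JSON.stringify(Object.entries(variables).sort());
  const hit = cache.get(key);
  if (hit !== undefined) {
    logs.push({ model: "cache", cached: true });
    return { output: hit, cached: true, model: "cache" };
  }
  // 3. Heuristic routing, then "LLM" execution and telemetry.
  const model = prompt.length > 4000 ? "gemini-1.5-pro" : "gemini-2.5-flash";
  const output = fakeLLM(model, prompt);
  cache.set(key, output);
  logs.push({ model, cached: false });
  return { output, cached: false, model };
}

const first = execute("v1", "Hello {{name}}", { name: "Alice" });
const second = execute("v1", "Hello {{name}}", { name: "Alice" });
console.log(first.cached, second.cached); // false true
```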
- Building Visual Studio Support for V2: Integrating `v2_prompts` UI management inside the `/studio` dashboard (currently managed manually in the DB).
- SDK Generation: Creating `@promptforge/sdk` (installable via `npm install @promptforge/sdk`) for easy Node/Python integration for enterprise clients.
- Semantic Caching: Upgrading the exact-match Redis cache to a vector-database-backed semantic cache (understanding intent similarity rather than string equality).
- Billing Infrastructure: Hooking Stripe meters into the `v2_execution_logs` table for usage-based PaaS billing.
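The semantic-caching idea can be sketched as a similarity check over embedding vectors: two prompts whose embeddings are close enough are treated as the same cache entry. Everything here, including the 0.95 threshold and the function names, is an illustrative assumption.

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A query counts as a cache hit if its embedding is close enough to a
// cached one, rather than requiring exact string equality.
function semanticHit(query: number[], cached: number[], threshold = 0.95): boolean {
  return cosineSimilarity(query, cached) >= threshold;
}

console.log(semanticHit([1, 0, 0], [1, 0, 0])); // true  (same intent)
console.log(semanticHit([1, 0, 0], [0, 1, 0])); // false (unrelated intent)
```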