Deep Dive · March 5, 2026 · 12 min read

Building ViralOps in Public: Lessons from an AI-First Startup

Behind the scenes of building ViralOps. Technical decisions, architecture choices, lessons learned, and the journey of an AI-first product.

Why Build in Public?

Building ViralOps has been a journey of constant learning. From choosing the tech stack to navigating AI provider limitations, from pricing mistakes to unexpected feature requests, every decision taught us something. This post shares the key lessons from building an AI-first marketing platform.

The Tech Stack Decision

We chose Next.js (App Router) with Supabase and Stripe — the modern indie hacker stack. In hindsight, these were the right choices:

  • Next.js App Router: Server components for data fetching, API routes for backend logic, and React for interactive UI. The all-in-one framework reduced decision fatigue.
  • Supabase: Postgres + auth + storage + real-time in one platform. Row Level Security keeps data secure without a separate auth middleware. The service_role key handles admin operations.
  • Stripe: Billing just works. Checkout sessions, webhooks, and the billing portal handle 90% of payment needs. The remaining 10% is custom credit logic.

The Multi-Provider Bet

Early on, we committed to supporting multiple AI video providers rather than building around a single model. This was our best architectural decision. When Google Veo had capacity issues in February, Kling handled the overflow seamlessly. When Kling's API changed, Seedance covered the gap while we updated our integration. Single-provider platforms suffered visible outages; ViralOps users barely noticed.

The downside: maintaining five provider integrations is significantly more engineering work than one. Each provider has different APIs, different output formats, different error handling, and different pricing models. The abstraction layer that normalizes all of this was complex to build but invaluable in production.
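The failover idea can be sketched as a common interface plus an ordered fallback loop. A minimal illustration with a simplified interface and invented method names; the real abstraction layer also normalizes output formats, polling, and pricing, as noted above:

```typescript
// Simplified provider interface. Real integrations also normalize
// output formats, async polling, error shapes, and pricing metadata.
interface VideoProvider {
  name: string;
  generate(prompt: string): Promise<string>; // resolves to a video URL
}

// Try providers in priority order and fall through on failure, so a
// single provider outage stays invisible to the user.
async function generateWithFailover(
  providers: VideoProvider[],
  prompt: string,
): Promise<{ provider: string; url: string }> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      const url = await p.generate(prompt);
      return { provider: p.name, url };
    } catch (err) {
      errors.push(`${p.name}: ${(err as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}
```

The priority order itself can be per-request (cost, quality, current provider health), which is where most of the real complexity lives.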

The Agent Architecture Evolution

Our AI agent system did not start as 13 agents. It started as one chatbot. The evolution:

  • V1: Single chatbot using Claude. Could answer questions and generate basic scripts.
  • V2: Specialized agents (Creative Director, Campaign Builder). Each with focused prompts and tools. Dramatically better output quality.
  • V3: System agents added (Supervisor, Pipeline Guardian, Quality Inspector). These background agents improved reliability and quality without user interaction.
  • V4 (current): 6 user-facing + 7 system agents with shared brand memory via pgvector embeddings. The agents learn from interactions and improve over time.

The lesson: specialization beats generalization for AI agents, just as it does for human teams.
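Specialization implies a routing layer that decides which agent handles a given task. The sketch below is hypothetical: the agent names come from this post, but the keyword matching is invented for illustration, and production routing would itself likely be model-driven rather than rule-based:

```typescript
// Hypothetical routing sketch: dispatch a task to a specialized agent,
// falling back to a system agent. Matching logic is illustrative only.
type Agent = { name: string; handles: (task: string) => boolean };

const agents: Agent[] = [
  {
    name: "Creative Director",
    handles: (t) => t.includes("script") || t.includes("concept"),
  },
  {
    name: "Campaign Builder",
    handles: (t) => t.includes("campaign"),
  },
];

function route(task: string): string {
  const agent = agents.find((a) => a.handles(task.toLowerCase()));
  return agent ? agent.name : "Supervisor"; // system agent as fallback
}
```

The design point is the fallback: a specialized agent only gets work it is good at, and everything else lands with a generalist supervisor.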

Pricing Lessons

Our first pricing model was wrong. We started with per-video pricing, which made users anxious about every generation. They would agonize over prompts to avoid "wasting" a generation. This killed experimentation — the exact behavior we wanted to encourage.

We switched to credit-based plans with generous allocations. Users experiment freely, generate multiple variations, and find better creative because they are not penalized for iteration. Usage went up, satisfaction went up, and retention went up. The credit system with atomic deduction (Postgres RPC) ensures accuracy at scale.
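The point of doing deduction in a single Postgres RPC is that the balance check and the subtraction happen atomically, so concurrent generations can never over-spend. The in-memory stand-in below models those semantics only; it is not the actual RPC:

```typescript
// In-memory stand-in for the credit-deduction RPC. In production this
// would be a single SQL statement so check and deduct are atomic,
// along the lines of:
//   UPDATE credits SET balance = balance - cost
//   WHERE user_id = $1 AND balance >= cost
//   RETURNING balance;
const balances = new Map<string, number>();

function deductCredits(userId: string, cost: number): number | null {
  const balance = balances.get(userId) ?? 0;
  if (balance < cost) return null; // insufficient credits: no partial spend
  const remaining = balance - cost;
  balances.set(userId, remaining);
  return remaining;
}
```

A `null` result maps cleanly to an "out of credits" response; anything else is the new balance to show the user.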

The Feature Prioritization Challenge

When you are building an all-in-one platform, everything is a priority. Video generation, voiceover, music, branding, scheduling, publishing, agents, trends, training, boards — the feature list is enormous. Our approach:

  • Ship the core loop first: Generate video → add voice → publish. Everything else is an enhancement.
  • Listen to paying users: Free users request everything. Paying users tell you what they actually need.
  • Build for the workflow, not the feature: Individual features matter less than how they connect into a workflow.

Inngest for Durable Workflows

One of our best infrastructure decisions was using Inngest for background processing. AI video generation is inherently async — it takes 30 seconds to several minutes. Inngest's durable functions handle retries, timeouts, and failover without us building custom queue infrastructure. The step-function model makes complex workflows (generate → voice → music → assemble → publish) clean and maintainable.
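The step-function model can be illustrated with a toy step runner: each named step retries on failure, and a completed step's result is memoized so a workflow replay skips it. This only models the idea; Inngest's real SDK persists step results server-side rather than in process memory:

```typescript
// Toy model of a durable step: retry on failure, memoize on success so
// replaying the workflow never re-executes a completed step.
const completed = new Map<string, unknown>();

async function step<T>(
  name: string,
  fn: () => Promise<T>,
  retries = 3,
): Promise<T> {
  if (completed.has(name)) return completed.get(name) as T; // durable replay
  let lastErr: unknown;
  for (let attempt = 0; attempt < retries; attempt++) {
    try {
      const result = await fn();
      completed.set(name, result);
      return result;
    } catch (err) {
      lastErr = err; // transient failure: try again
    }
  }
  throw lastErr;
}
```

With this shape, the publish pipeline reads as plain sequential code (`await step("generate", …)`, then `"voice"`, `"music"`, `"assemble"`, `"publish"`), which is what makes complex workflows maintainable.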

What We Would Do Differently

  • Start with multi-provider architecture from day one. We retrofitted it, which was painful. Design for multiple providers upfront.
  • Build the agent system earlier. Agents dramatically improved user experience. We should have invested in them sooner.
  • Credit pricing from the start. The per-video model wasted months of user psychology optimization.
  • More aggressive testing. We shipped features without enough A/B testing in the early months; better instrumentation would have surfaced problems like the per-video pricing model much sooner.

What's Next

ViralOps is growing, but we are far from done. The roadmap includes deeper agent capabilities, more provider integrations, improved generation quality, expanded platform publishing, and enterprise features. We continue building in public because transparency builds trust, and the community's feedback makes the product better.

Try ViralOps and join us on this journey. Every feature request, bug report, and piece of feedback shapes what we build next.
