Pricing

Start free, upgrade when you're ready. No surprises.

Free

$0/month

Perfect for trying out Coplay and small projects.

Get Started for Free

What's included

  • Access to all Coplay features
  • Access to all models
  • Use Coplay MCP free with your AI subscription. Learn more →

Professional

Popular
$20/month

For serious game developers who need more power.

Start Pro Trial

What's included

  • $40 worth of credits/month
  • Access to the latest AI models
  • Unlimited Top-Up Credits
  • Priority Customer Support
  • $4/day credit when you reach $0
  • Credits charged at LLM provider cost
  • We don't charge for failed tool calls
  • (See the section below for more details on how we charge for credits)

Enterprise

Custom

Perfect for studios ready to grow with AI.

Contact Us

What's included

  • Custom VPC Installation
  • Usage and Access Dashboard
  • Enterprise-Level Support
  • No Training on Your Data
  • Airgapped Local Deployments

How Our Pricing Works

At Coplay, we believe in transparent pricing. Unlike services that hide costs behind opaque "token bundles" or proprietary pricing, we pass through AI costs exactly as they come from the LLM providers—no markup, no hidden fees.

See Every Cost

Inside Coplay, you can see the cost of each message, every tool call, and the total for your entire conversation thread. No guesswork—just real numbers. You can also check your credit balance anytime in the profile section.

How Credits Work

Credits are deducted at cost—exactly what the LLM providers charge us. This means you're getting AI capabilities at the same price as going directly to the providers, or even cheaper (see below).

Need More? Top Up Anytime

If you're building something ambitious and burn through your monthly credits, you can top up directly inside the Coplay app. Top-up credits are charged at provider cost, and there's no limit—scale as high as you need.

Better Than Bring-Your-Own-Key

You might be thinking: "Why not just use my own API key?" Good question. Here's the catch:

  • We don't charge you for failed tool calls. AI models sometimes make mistakes—malformed outputs, retry loops, or abandoned attempts. With your own key, you pay for all of it. With Coplay credits, you only pay for calls that actually work.

The $4/day Safety Net

Running out of credits mid-project is frustrating. That's why Professional subscribers get a $4/day credit allowance when their balance hits zero. It's enough to keep working while you decide whether to top up or wait for your next monthly refresh.

A note on the future: We're still in the early days of figuring out what pricing model works best for game developers and AI. This pricing structure may evolve as we learn more about how you use Coplay. We'll always give you advance notice of any changes, and we're committed to keeping things fair and transparent. If you subscribe before we change pricing, we'll make sure to give you a good deal as a thank you to early adopters.

Tips for Keeping Costs Low

A few simple habits can dramatically reduce your AI costs—sometimes by 10x or more.

1

Respond Within 5 Minutes

This is the single most impactful tip. LLM providers use something called KV caching (key-value caching), which stores the computed context from your conversation. When you send a follow-up message within the cache window (typically 5 minutes), the provider doesn't need to reprocess your entire conversation from scratch.

The result? Cached requests can be up to 10x cheaper than fresh ones.

If you step away for coffee, no worries—just know that your next message will cost more as the model rebuilds context. Plan your longer breaks between tasks, not in the middle of them.
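
To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The prices and token counts below are illustrative assumptions (not Coplay's or any provider's actual rates); what matters is the ratio between a cache hit and a cache miss.

    # Rough illustration only: prices and token counts are assumed, not actual rates.
    INPUT_PRICE_PER_M = 3.00    # assumed price for fresh input tokens, $ per million
    CACHED_PRICE_PER_M = 0.30   # assumed cache-read price, roughly 10x cheaper

    THREAD_TOKENS = 60_000      # conversation history resent with every request
    NEW_MESSAGE_TOKENS = 500    # your actual follow-up message

    def cost(tokens, price_per_million):
        return tokens / 1_000_000 * price_per_million

    # Reply within the cache window: history is read from cache, only new tokens are fresh.
    cache_hit = cost(THREAD_TOKENS, CACHED_PRICE_PER_M) + cost(NEW_MESSAGE_TOKENS, INPUT_PRICE_PER_M)

    # Step away too long: the cache expires and the whole thread is reprocessed at the fresh rate.
    cache_miss = cost(THREAD_TOKENS + NEW_MESSAGE_TOKENS, INPUT_PRICE_PER_M)

    print(f"cache hit:  ${cache_hit:.4f}")   # ~$0.0195
    print(f"cache miss: ${cache_miss:.4f}")  # ~$0.1815, roughly 9x more for the same message

Same follow-up, same thread; the only difference is whether you replied inside the cache window.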

2

Keep Your Context Window Light

Every token in your conversation history gets processed with each request. The more tokens, the higher the cost. Try to keep your thread below 30% of the context window for optimal pricing.

Watch the 200k threshold: Most providers charge significantly more (often 2x the price) once you exceed 200,000 tokens. Combine that with an expired cache, and a bloated thread can cost up to 20x what it otherwise would.
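
If you're wondering where that 20x comes from, it's just the two penalties stacking. Here's a tiny sketch using the rough multipliers quoted above; treat them as assumptions, since the exact factors vary by provider and model.

    # Rough multipliers from the tips above; exact factors vary by provider and model.
    cache_miss_penalty = 10     # an uncached request can cost ~10x a cached one (tip 1)
    long_context_penalty = 2    # crossing the ~200k-token threshold often doubles the price

    worst_case = cache_miss_penalty * long_context_penalty
    print(f"~{worst_case}x the per-token cost of a lean, cached thread")  # prints ~20x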

Pro tip: Use the /compact command in Coplay to summarize your thread and reduce token count without losing important context. It's like hitting "refresh" on your conversation while keeping the key information.

3

New Task? New Thread.

When you finish a task and move on to something unrelated, start a fresh thread. There's no benefit to carrying forward context about your player movement system when you're now working on UI animations.

Think of threads like workbenches—clear them off when you start a new project. Your wallet (and the model) will thank you.

4

Only Attach What You Need

It's tempting to attach your entire script folder "just in case," but more context means more tokens, which means higher costs. The model is surprisingly good at finding what it needs with minimal guidance.

Start with the specific file or function you're working on. If the model needs more context, it'll ask—or you can add it then. This incremental approach keeps costs predictable and conversations focused.

Quick Reference

  • Respond within 5 minutes for cached pricing
  • Stay under 30% context usage
  • Use /compact to trim threads
  • Start fresh threads for new tasks
  • Attach only necessary context
  • Avoid exceeding 200k tokens

Want more details? Check out our full cost guide with model-specific pricing.

View Full Prompt Cost Guide →