AI Pricing: Introducing an Output-Based, Per-Task Pricing Approach
How to price AI products and applications has been an open question since Generative AI crossed the threshold of technological viability. Companies not only want a pricing strategy that is sustainable and reflective of the value of their products; they also want to make a profit despite Generative AI’s high compute costs. This is a common struggle for all AI-related companies, even for the hyperscaler backing OpenAI: according to the Wall Street Journal, Microsoft is reportedly losing $20 per user per month on average on GitHub Copilot. So how should companies leveraging AI think about pricing? And how should they get it right the first time, to avoid painful pricing changes in the long term?
I’d like to propose a new way of thinking about pricing AI products.
First, why is it hard to price AI products?
1. Variable costs are high.
Unlike SaaS products, where the cost of serving one more customer is close to zero, the variable cost for AI products can be quite high. For every customer interaction, an AI company incurs an inference cost that scales almost linearly with the size of the model employed. And the bill can get even higher given current GPU supply constraints. Moreover, this is not a volume game for application players: more users does not necessarily mean a flatter cost curve. Counterintuitively, more active users are often less profitable. All this makes it challenging for AI startups to achieve predictable gross margins. They’d love to charge enough to recoup infrastructure costs, but don’t want to limit their early traction with steep prices.
I believe this will remain an issue until models or infrastructure are optimized enough to bring down inference costs for application players. While the underlying tech is still early, it’s challenging for startups building on top to reliably forecast their margins, which creates headaches for builders and investors alike.
2. Adoption is still very early.
Although AI is the absolute buzzword of 2023, consumers and enterprises are still early in their adoption journey. McKinsey indicates that only one-third of enterprises have adopted some form of AI in their orgs, and anecdotal evidence suggests only a fraction of those have deployed AI into production. I believe there are several reasons for this. For one, the availability of many dev tools and foundation models makes it very easy to start an AI company these days, so lots of products target the same use case and are not that different from one another. Enterprises are simply trying them out and figuring out their own AI strategies, so consumption patterns are constantly changing. Second, the actual value AI brings to the table is not yet clear, so users themselves don’t know and probably can’t justify the ROI of adopting AI tools for now. How can they put a willingness-to-pay on AI tools?
3. AI solutions are “work” based.
Call me optimistic, but I’ve always seen AI products as “replacements” for human effort, whose true competitors are not other AI products but us humans. AI should bring not just another increment of productivity improvement, as SaaS did, but a replacement of human work to some degree. Think about the early players, for example. Harvey does legal search, analyzes legal contracts, and generates insights from legal documents, replacing a lot of the work that paralegals would do. Tome generates story-telling presentations from a few prompts, replacing the work of junior sales & marketing people, creators, freelancers, and even startup founders.
But because these AI startups sell creative work, there are no standard or unified pricing terms. It’s almost like a consulting service model: each customer values the “work” differently, and customers from different verticals have different use cases even for the same AI product. So the usual seat-based subscription plans probably do not work well here.
Just to summarize, startups leveraging AI usually have the following pricing considerations:
- Be able to recoup or at least break even from the high compute and inference costs;
- Want to capture all the value their products deliver without leaving money on the table;
- Want flexible pricing terms for different customer profiles and leverage pricing to improve customer relationships.
How to price given the challenges?
One of the first things I learnt at Wharton was that to capture value, one has to create value. And value is derived by growing the pie — increasing willingness-to-pay, and decreasing costs to suppliers. With this framework in mind, the first thing companies should think about is to select metrics that best represent the value delivered to customers, and tie those to pricing.
Introducing the Output-Based, Per-Task Pricing Method
Tying Pricing to ROI
As argued above, I believe AI products deliver “actual work,” so pricing should tie to how much customers value the “work” they receive. And the true competitors of AI products are not other AI tools, but us humans.
Let’s do an example. I recently called LG customer support because my fridge was not cooling. Say that instead of hiring reps to take calls, LG now deploys AI chatbots. If the hourly rate for reps is $20 and each rep can take 5 calls per hour, the cost to LG is $4/call. With the AI chatbots, LG does not need to hire as many reps as before but can handle a similar number of calls, saving a large amount in salaries it used to pay out. Assuming the quality of the calls the AI chatbots conduct matches that of the reps, LG’s willingness-to-pay is capped at $4/call. Of course, LG’s goal in using AI chatbots here is cost-saving, and it will probably still employ some reps, so in reality LG would pay only a fraction of the $4/call. And the startup building these AI chatbots would also want to price the product above the cost it incurs for every interaction.
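The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The $20/hour rate and 5 calls/hour come from the example; the vendor’s inference cost per call and the even surplus split are my own hypothetical assumptions:

```python
# Back-of-the-envelope sketch of the LG chatbot pricing example.

rep_hourly_rate = 20.0   # $/hour paid to a human rep (from the example)
calls_per_hour = 5       # calls a rep handles per hour (from the example)
human_cost_per_call = rep_hourly_rate / calls_per_hour  # $4.00/call

# Assuming equal call quality, LG's willingness-to-pay is capped
# at the human cost per call.
wtp_ceiling = human_cost_per_call

# The vendor must stay above its own inference cost per call
# (hypothetical figure for illustration only).
inference_cost_per_call = 0.50

# Any price between the two leaves both sides better off;
# here we split the surplus evenly as one possible choice.
price_per_call = inference_cost_per_call + 0.5 * (wtp_ceiling - inference_cost_per_call)

print(f"Human cost per call: ${human_cost_per_call:.2f}")  # $4.00
print(f"Price per call:      ${price_per_call:.2f}")       # $2.25
```

The point is not the specific split but the bounds: the human cost per task sets the ceiling, and the vendor’s per-interaction cost sets the floor.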
In the case of using AI products to boost revenue, the same pricing method applies. Harvey AI, ideally, helps law firms generate more revenue by taking on more cases with the same number of employees or even fewer. So pricing such a product should use as a benchmark the headcount cost of employing the extra paralegals needed to produce what Harvey delivers, factoring in the quality and complexity of the work and the willingness-to-pay of the law firms. Ideally, the price would be some fraction of (the extra revenue created plus the employee salaries saved).
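That benchmark formula, a fraction of (extra revenue plus salary saved), can be written as a tiny function. Every number below is a hypothetical placeholder for illustration, not Harvey’s actual pricing:

```python
# Minimal sketch of value-based per-task pricing:
# the vendor captures some fraction of the total value created.

def value_based_fee(extra_revenue: float,
                    salary_saved: float,
                    capture_rate: float,
                    tasks: int) -> float:
    """Per-task price that captures `capture_rate` of total value created."""
    total_value = extra_revenue + salary_saved
    return capture_rate * total_value / tasks

# Hypothetical example: a firm gains $200k in new case revenue and saves
# $100k in paralegal salary across 1,000 research tasks; the vendor
# captures 20% of that value.
fee = value_based_fee(extra_revenue=200_000,
                      salary_saved=100_000,
                      capture_rate=0.20,
                      tasks=1_000)
print(f"${fee:.2f} per task")  # $60.00 per task
```

The capture rate is a negotiation outcome, bounded below by the vendor’s cost per task and above by the customer’s willingness-to-pay.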
This output-based pricing approach — using the value captured by end users (whether cost-saving, revenue gains, or a combination of both) as a benchmark to price AI products — is both reflective of the values provided to users, and easier for startups to gain visibility on profit margin.
I’ve seen startups leveraging AI capabilities begin to charge on a per-“completion” basis. For example, Fin, the AI chatbot by Intercom, is measured in resolutions: it charges $0.99 for every customer service request it resolves, and only after the customer has “exited” the conversation with the chatbot. Cresta, an AI-based contact center solution, has likewise shifted from per-seat pricing plans to “work”-based pricing, charging for the conversations it helps contact center agents complete.
Pricing Structure
In reality, there is much more to consider: heavy users might want a volume discount, customers from different verticals might value the same AI product differently, customers of various sizes might have different consumption behaviors, and so on. I would advise startups leveraging AI not to fall into the trap of assuming there is one single method for pricing. I’d actually encourage them to explore the different willingness-to-pay of customers of different sizes, in different sectors, with different use cases. It might even make sense to charge them differently. Moreover, it’s okay to use customized pricing: just like in the world of consulting, AI companies are also selling work, as I argued above.
For existing players that are integrating AI solutions…
Unlike AI-native companies, who can design a brand-new pricing strategy for their products, existing players offering AI capabilities as add-on features might not have that pricing flexibility. They need to work within their existing pricing strategy, making the upsell easy for existing customers without disrupting the current model.
Take Notion as an example. Notion AI is offered as an optional add-on, priced at $8/member/month if billed annually or $10/member/month if billed monthly. Based on what I’ve discussed above, you’d probably wonder whether Notion can recoup its inference costs with this seat-based pricing. Here is what I believe Notion has been doing right: Notion AI rolled out a “fair use limit” such that if a user generates 30+ AI requests within 24 hours, the user gets slower responses for that period. To me, it seems Notion is switching that user to a lower-grade LLM (GPT-3.5 vs. GPT-4), which has a lower per-token inference cost. This effectively protects Notion from heavy users taking advantage of its seat-based pricing model. That being said, I’m not 100% sure the $8-$10/member/month plan is a great way to reflect the value Notion delivers to end users. Would some kind of tiered pricing help, to separate out those who use up lots of tokens? There are also many competitors in the writing-copilot space; what if they charge much less? Is the current pricing strategy sustainable for the long term?
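To see why the fair-use downgrade matters for margins, here is a rough check. Only the $8/member/month price and the 30-requests/day threshold come from the text; the per-request model costs are my own assumed figures for illustration:

```python
# Rough margin check for a flat-rate seat with a heavy user.
# Cost figures below are assumptions, not Notion's actual costs.

monthly_price = 8.00            # $/member/month on annual billing
requests_per_day = 30           # the fair-use threshold
days_per_month = 30

# Hypothetical blended inference cost per AI request on each model tier.
cost_per_request_large = 0.02   # e.g. a GPT-4-class model (assumed)
cost_per_request_small = 0.002  # e.g. a GPT-3.5-class model (assumed)

monthly_requests = requests_per_day * days_per_month  # 900 requests

cost_all_large = monthly_requests * cost_per_request_large  # $18.00
cost_all_small = monthly_requests * cost_per_request_small  # $1.80

print(f"Heavy user on the large model: ${cost_all_large:.2f} vs ${monthly_price:.2f} revenue")
print(f"Heavy user on the small model: ${cost_all_small:.2f} vs ${monthly_price:.2f} revenue")
```

Under these assumed costs, a heavy user served entirely by the large model would cost more than the seat brings in, while the downgraded tier stays comfortably profitable, which is exactly the protection the fair-use limit provides.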
Parting Words
I personally think pricing is a very complicated subject when it comes to AI solutions, and it will remain challenging for some time, until compute infrastructure costs come down or more efficient inference methods emerge. Analyzing AI pricing also makes it very clear that it is still very, very early for meaningful AI adoption. High compute costs restrain development and app integration, so until AI use cases and their associated economic value become clearer to customers, pricing strategy will remain clouded. Under current conditions, it is likely that only companies with superior model-optimization capabilities can have some control over pricing and profit margin.
In conclusion: (1) constantly review your pricing strategy and talk to your customers to understand their willingness-to-pay; (2) don’t wait too long to monetize if you are offering a freemium model now; and (3) tie pricing to the metrics that matter to maximize the value you create and capture.