Two Requirements for Educational Institutions to Navigate the Coming AI Price Wars
Prices are rising. That's the law. Here's how educational institutions protect themselves and extract the most value.
As I write this, we are in the midst of OpenAI’s “12 Days of OpenAI.” Somehow, some way, someone at OpenAI decided it was a good idea to reclaim the sacred language of Christmas to promote AI products. It started with the announcement that you can now pay $200/month instead of $20/month for a model that is a few percentage points better (trust us 😉) than the model you already don’t use. Someone tell me how to read this and justify paying 10x more than I do for all the times I need “PhD-level science questions” or “competition-level math” in my daily life:

But I digress. Now that I’ve got the blood flowing, let’s get to the point of this article.
I’m seeing a lot of negativity toward OpenAI this week (which I’ve just piled onto above). Many of the opinions see OpenAI’s focus on making money as some kind of bait-and-switch. And often that conversation leads to questions about where to go from here. Here’s my answer: stay flexible, and measure ROI.
Let’s break this down…
AI is Expensive, and Educational Institutions Will Be Asked to Foot the Bill
As educational institutions, we need to accept that we have very little control over the battle for AI supremacy happening around us. And yet, the big players are looking at education as one of their highest-potential markets. The playbook for a new technology is simple:
Step 1: Spend investor money as needed to scale, acquire customers, and force out competition.
Step 2: Once you’ve established your foothold, raise prices as needed to recoup your investment.
OpenAI has set their sights on being the first to AGI. To do that, they’ve raised the most money and now face the most pressure from investors to recoup it. But make no mistake: AI is expensive for all of these players, and we will pay for it someway, somehow. We are still in the early stages of frontier models, and the scaling laws for AI say that training each successive frontier generation costs roughly 10x the previous one. Ethan Mollick has said that “today’s AI is the worst AI you will use.” In a macro sense, today’s AI is also the cheapest you will use. In a micro sense, I think there are savings to be had. More on that later.
Simultaneously, genAI is commoditizing and fiercely competitive
In the same week that OpenAI announced their upgraded models and higher pricing, Amazon announced their Nova models, with benchmarks that match or exceed the competition at 75% cheaper pricing. To be clear, this is not an indictment of OpenAI or praise of Amazon. Amazon is late to the game, so this is likely a way to complete step 1 of the playbook above: lose money to acquire customers and scale, so they can eventually move on to step 2. But the fact is that they’ve released what appear to be some good models (I have not used them personally yet) at a low cost.
Additionally, the world of open-source models, from organizations like Meta or curated by Hugging Face, offers models that are thousands of times cheaper than frontier alternatives.
Each model has strengths and weaknesses
Here is a slide from a talk I gave recently. In it, I asked AI to categorize text blurbs about squirrels and turtles that never used the words “squirrel” or “turtle.” The cheapest model (in this case, 4o-mini) completed the categorization correctly 100% of the time, with 100% confidence:

Recognizing whether a paragraph is about squirrels or turtles is obviously a trivial task. But nearly any higher-order work rests on a base of trivial tasks, and the fact is that very low-cost AI models can complete these kinds of tasks with high accuracy and precision. In a real-world version of the same thing, a school I’m working with is experimenting with using AI to tag student course evaluation feedback in order to build a longitudinal analysis of program strengths and weaknesses over time. It’s a lower-order skill that feeds higher-order strategy.
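If you’re curious what that kind of task looks like in practice, here’s a minimal sketch using the OpenAI Python SDK. The blurbs and prompt are stand-ins I made up, not the exact setup from the talk:

```python
# Minimal sketch: categorizing text with a low-cost model.
# Assumes the `openai` Python SDK is installed and OPENAI_API_KEY is set.
# The blurbs below are illustrative stand-ins, not the ones from the talk.
from openai import OpenAI

client = OpenAI()

blurbs = [
    "It buried acorns all autumn and leapt between the oak branches.",
    "It tucked its head into its shell and sunned itself on a log.",
]

for blurb in blurbs:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # the cheap model; swap freely to compare cost vs. quality
        temperature=0,        # keep output stable for a categorization task
        messages=[
            {"role": "system",
             "content": "Categorize the text as SQUIRREL or TURTLE. Reply with one word."},
            {"role": "user", "content": blurb},
        ],
    )
    print(response.choices[0].message.content)
```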
Half-time recap
Let’s recap so far.
Frontier models are expensive, and we will be asked to pay for them.
Alternatives are abundant and commoditizing.
All models are tools that do some things well and do some things poorly.
Therefore, to take advantage of the benefits of a competitive and commoditized AI landscape we need to:
A.) Stay flexible, so that we can take advantage of models and tools as they become available to us.
B.) Measure ROI so that we can prove that we are using the right tool for the job.
Recommendation 1: Stay Flexible
Let’s compare two hypothetical stories:
On one hand, Blue University establishes a partnership with MechaAI to provide powerful chatbots to their students. MechaAI helps Blue University quickly establish chatbots that support students with career guidance, mental health services, and course tutoring. Blue University is lauded for their quick adoption of AI.
On the other hand, Green University works with a variety of tools and vendors. They establish an LLM Gateway to give their various departments access to different models. Different schools and departments within the university experiment with different tools, and it’s honestly a headache to deal with the security and procurement issues that come with that. They get no admiration for AI leadership at a university scale, but their individual teams and projects do get some grassroots recognition.
Now fast-forward a few years down the road.
MechaAI is under pressure from investors, and they’ve raised prices on Blue University with each successive contract. At this point, Blue University is attracted to alternatives but gets pushback from students and staff who have become accustomed to the best models from MechaAI. It would be impossible at this point to roll back the AI services they offer, and unpopular to change them much.
Green University is still struggling with managing the variety of AI services. They’ve had vendors go out of business unexpectedly and a few instances where internal audits revealed that AI initiatives were not properly secured. That being said, they’ve solidified processes for piloting, measuring, securing, and integrating AI where it is effective, and they’ve come up with some really solid use cases and research. And they are able to quickly experiment and determine when a new model or tool exceeds the ROI of their existing ones.
Flexibility comes with headaches. But AI will be part of the educational landscape for the foreseeable future, and it is prudent for institutions to avoid the temptation of a quick integration of AI that sacrifices flexibility in the future.
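To make “stay flexible” concrete: an LLM gateway is essentially a thin routing layer that sits between your applications and any single vendor. Here’s a minimal sketch of the idea; the route names, costs, and stubbed model calls are invented for illustration:

```python
# Minimal sketch of an LLM gateway: applications call complete() with a
# stable route name, and which vendor/model answers is a configuration
# detail. A real gateway adds auth, logging, rate limits, and per-department
# cost tracking; the routes below are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelRoute:
    name: str
    cost_per_1k_tokens: float   # illustrative number only
    call: Callable[[str], str]  # the vendor-specific client lives here

ROUTES: dict[str, ModelRoute] = {}

def register(route: ModelRoute) -> None:
    ROUTES[route.name] = route

def complete(route_name: str, prompt: str) -> str:
    """Route a prompt to whichever model is currently configured under this name."""
    return ROUTES[route_name].call(prompt)

# Departments code against the route name ("tutoring", "tagging"); swapping
# the underlying vendor later is one register() call, not a rewrite.
register(ModelRoute("tutoring", 0.15, lambda p: f"[frontier model answer to: {p}]"))
register(ModelRoute("tagging", 0.01, lambda p: f"[cheap model answer to: {p}]"))

print(complete("tagging", "Is this blurb about squirrels or turtles?"))
```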
Recommendation 2: Measure ROI
Flexibility is a well-intentioned waste of time if we can’t measure ROI. Measuring ROI of AI is exceptionally challenging, and I think that even the AI-forward educational institutions are only just starting to think about it. Many of the early AI adopters (ASU, Yale) appear to be taking leaps of faith to establish their own foothold, and they’ll figure out ROI later.
Let’s start with just the basics. AI outcomes are hard to measure, but in the majority of cases we already have, or can easily get:
The cost of the AI investment.
Usage data.
Anecdotal feedback.
That’s certainly a place to start. Often, it isn’t a huge lift to add:
Systematic satisfaction feedback.
And in certain cases with some effort we can get some more useful information by measuring:
AI accuracy
AI precision
Here is an example of a way to measure and validate LLM outputs that I’ve been using in another experiment with a university, inspired by this work on exploratory and confirmatory prompt engineering:
It essentially measures AI accuracy against a source of truth (in this case, a multi-rater score) and AI precision by measuring the variance of AI scores across multiple runs. Your experiments will vary, but the point is that we now have a valid way to measure ROI. When one model raises its prices or another model is released, it’s an easy test to swap out a model and compare the results. AI is fairly unique in that, if you are truly diligent about this, you can find 100x (and now approaching 1000x) ROI improvements just by swapping models.
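If you want a feel for the mechanics, here’s a rough sketch of that kind of evaluation harness. All of the scores below are invented for illustration; accuracy is how close the AI lands to a human source of truth, and precision is how little its scores vary run to run:

```python
# Rough sketch of the evaluation described above. All numbers are invented.
# accuracy:  how close AI scores land to a human "source of truth"
#            (here, the average of multiple human raters on the same item)
# precision: how much the AI's scores vary across repeated runs
from statistics import mean, stdev

human_scores = [4, 5, 4]              # multiple human raters, same item
source_of_truth = mean(human_scores)

ai_runs = [4.0, 4.5, 4.0, 4.5, 4.0]   # same prompt, same item, five runs

accuracy_error = abs(mean(ai_runs) - source_of_truth)  # lower is better
precision = stdev(ai_runs)                             # lower is better

print(f"source of truth:  {source_of_truth:.2f}")
print(f"mean AI score:    {mean(ai_runs):.2f} (error {accuracy_error:.2f})")
print(f"run-to-run stdev: {precision:.2f}")
# When a vendor raises prices or a new model ships, rerun the same harness
# with the new model and compare error and stdev per dollar spent.
```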
Conclusion
Complaining about AI isn’t helpful, and trying to find the AI service that isn’t bound by the laws of training and compute scaling isn’t realistic. It will be up to the individual educational institutions to find ways to leverage AI’s enormous potential while protecting themselves from the coming price wars.
AI Disclaimer:
During writing, I used ChatGPT to suggest synonyms for words.
After the article was drafted, I asked ChatGPT to identify areas where the clarity of my writing could be improved. It provided several good suggestions and I rejected all of them in an act of human hubris and decision inertia.