3 Comments
Rob Nelson:

Nice piece. I completely agree with recommendation #1, and I adore the scenarios of blue and green universities as a way of interrupting the idea that we have no choice but to gamble on a single vendor. ROI is pretty tricky because educational value is difficult to turn into a measurable return. Sure, you can always find a number in the form of dollars or variability or accuracy or whatever. But we can't really apply metrics built for previous forms of software to a technology that is built on incorporating improbable or weird results and does not have clear use cases.

The reason for putting these tools in the hands of faculty, students, and administrators is to explore and experiment. The return on discovering novel uses, or on hitting a dead end, is pretty hard to measure. It might be done, but not easily.

John Swope:

It is very, very difficult to measure the ROI of AI in Education initiatives. And we are indeed in an experimentation stage, where most of us are OK with trying out new tools without being able to measure ROI precisely.

So I'm with you that it is very hard, unknowable in some cases, and that the value of experimentation on its own is a net positive. 100%.

But that doesn't mean we should throw out the areas where we can measure value (I'm not saying you're saying that; I'm just on a writing kick this morning for some reason. Good coffee, maybe). Let's take the very real case of tagging course evaluation comments so we can do some pattern analysis. I think there are folks out there who are happy taking the default model and doing that analysis, not realizing that the task is simple and a cheaper model would suffice. That's a 50x mistake. Doing this kind of cost-aware analysis will become part of the AI culture as we get more familiar with using AI.
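
To put a number on what I mean, here's a rough sketch of that tagging step, assuming the OpenAI Python client. The model name is illustrative and the exact price gap is approximate, not a recommendation:

```python
# Rough sketch of the tagging task, assuming the OpenAI Python client.
# The model name is illustrative; any small, cheap model would do.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TAGS = ["workload", "instructor clarity", "grading", "course materials", "other"]

def tag_comment(comment: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for exactly one tag for a single evaluation comment."""
    response = client.chat.completions.create(
        model=model,  # a small model is plenty for single-label tagging
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Tag the student comment with exactly one of: "
                        + ", ".join(TAGS) + ". Reply with the tag only."},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip()

# Pointing this at a flagship model changes nothing about output quality
# on a task this simple, but multiplies the per-token cost many times over.
print(tag_comment("The problem sets took far longer than the syllabus suggested."))
```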

Rob Nelson:

You picked an example I happen to know a lot about. I'm extremely skeptical that the task of analyzing course comments is simple. Sure, an LLM can do sentiment analysis, but that data has to get turned into information that's useful to an instructor or dean. The complex part is structuring the output so it is usable and dealing effectively with the bad or weird shit students write in an open text box, like how hot their teacher is, or how racist they are, or how the course made them want to kill themselves, ha ha, lol.
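
To make that concrete, the shape of a fix looks something like this hypothetical sketch, again assuming the OpenAI client. The JSON fields and the review rule are my inventions, and they are exactly where the real engineering work lives:

```python
# Hypothetical sketch of the hard part: structured output plus a
# human-review gate for comments that should never land on a dashboard.
# The field names and the review rule are assumptions, not a real product.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = (
    "Analyze the student comment. Reply with JSON only: "
    '{"sentiment": "positive|negative|mixed", '
    '"about_course": true|false, '
    '"needs_human_review": true|false}. '
    "Set needs_human_review to true for harassment, threats, self-harm, "
    "or anything targeting a person rather than the course."
)

def analyze(comment: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; the model is not the hard part
        temperature=0,
        response_format={"type": "json_object"},
        messages=[{"role": "system", "content": PROMPT},
                  {"role": "user", "content": comment}],
    )
    raw = response.choices[0].message.content
    try:
        result = json.loads(raw)
    except json.JSONDecodeError:
        # Malformed output is itself a corner case: fail safe, not silent.
        result = {"needs_human_review": True, "parse_error": raw}
    # Route anything flagged to a person, not the instructor's report.
    route = "human reviewer" if result.get("needs_human_review") else "report pipeline"
    return {"route": route, **result}
```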

Building a product that manages this well can absolutely use an LLM for the simple parts. But in edtech, as with so many things in life, solving 95% of the problem barely matters. It is the corner cases and the "Oh! Right! Students are not exactly like customers" realizations that destroy well-funded start-ups.
