on being here for the Giant Plagiarism Machine

on the need to remain clear-eyed in the face of AI and edtech promises for society

Duolingo's CEO, Luis von Ahn, on AI being better than human teachers.

TL;DR: Computers and AI (and I'm assuming specifically the proprietary Duolingo models) can teach a student everything.

This post I wrote in 2016 on the false promise of edtech is still valid. It includes a link to an article with this quote from Daniel Willingham.

"Above all, let’s remember: Technology may change quickly. Our brains don’t."

Really useful to hold to that quote, because Mr von Ahn closes with this oddly non-committal commitment to the change AI will enable in education.

“I don’t think you’ll see a change where next year everybody’s learning is completely different,” he said. When it comes to most education, “it’s like government—it’s just slow.”

Almost as if, despite the clickbait conviction and the shiny podcast context, he's hedging his bets in the event the societal change isn't as fundamental as he proclaims.

As if 20 years of edtech prophets haven't, by the time we got there, failed to change the world as they defined it, and instead mostly just generated profits for their companies.

AI as currently imagined by Silicon Valley, and as evidenced by pretty much every demonstration and profit model, including Duolingo's, is about benefits to the individual.

A private good.

Public education broadly speaking, as a concept designed and built over the last 150 years around the world, is about building and enabling equitable opportunities and outcomes.

A societal good.

Nation states have over the last 150 years invested in education because of the potential it offers: to lift poverty rates, to improve health outcomes, to drive economic growth, to enable arts, culture, sport and further learning.

That it does not achieve this equitably or in a linear fashion is a measure of the reality that any system built on human aspirations, capacity, and fallibility will be impacted by those same humans, and by the deliberate choices of those in power.

AI isn't equitable, in design or intent. It consumes public data, funded by the public, to feed algorithms and models that benefit individuals and corporations. The great delusion of tech prophets is that AI will remove all mistakes and/or accountability and be better than human.

If I were to say: "LLMs & AI could be a tremendous boon to students lacking the individual instruction they need."

I would suggest that most would nod and agree.
I mean, I agree with it.
But that's not the actual point.

If I were to say: "Teachers & teacher aides could be a tremendous boon to students lacking the individual instruction they need."

And your automatic response is "That wouldn't work because we'd need performance pay, or unions, or [insert talking point]",
then you're proving the point.

The point is: these systems, and the default thinking enabled by those systems, are being built by a cohort making very clear choices about what shapes us, our children, our ability to think, evaluate, discern and make sense of ourselves, as individuals, as a species, and our place in the planet's ecosystems.

As Wilson Miner said way back in 2012, "We shape our tools and our tools shape us."

Many in ed-tech and the AI industry have chosen the shape of society they wish to build.

For now, we as a public still get to choose.

We need to be clear-eyed about what this brave new world means for the shape of our society, and the society our children will inherit.

We need to be heads-up about who wins and who loses because of the choices being made.

We need to hold that cohort and those individuals accountable.


Hat-tip and shoutout to McSweeney's for the title to this post.

Creative Commons Licence
Continue by Tim Kong is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.