Discussion about this post

Gunnar Miller:

This is extremely interesting. Even though you apologize for the small sample size involved, that complete lack of recall amongst the LLM group is indeed startling. I think it might be akin to using pen and paper, or even typing into a laptop, to take notes instead of just letting voice recognition transcribe and summarize a management meeting; a bit of work is key to retention and learning.

You've touched upon something that I believe will continue to trip up complete adoption of AI: the Uncanny Valley. Originally applied to the "ick" factor one feels toward humanoid robots ( https://en.wikipedia.org/wiki/Uncanny_valley ), that "distinctive writing style" the teachers could start to recognize is already manifest in AI-created images, videos, and music. The conventional wisdom at present is "well, just wait until the technology improves," but I think society is going to reward and advantage those who have a good nose for these things ... sort of like being able to tell whether a luxury watch or a handbag is fake from several paces away.

That said, I've heard that students are already a step ahead and are deliberately inserting misspellings and grammatical errors into their LLM-generated schoolwork. Ironically, that extra effort might lead some of them to decide to just write their own prose in the first place.

Anyway, with any luck, this'll get society away from overdependence on written tests and resurrect oral exams, which are the ultimate "I know it when I see it" litmus tests for teachers ( https://en.wikipedia.org/wiki/I_know_it_when_I_see_it ).

Johann:

Reminds me of a post by Scott Cunningham [1] which contained a few tell-tale signs of AI-written papers. I found two quite notable:

- AI-written text generally lacks a thesis and a point of view, and

- Humans write papers whose paragraph lengths vary greatly; AIs usually don't, unless you actively prompt them to.

This also means that if you engage more with the AI when prompting it to write the text, you should end up with better, albeit different, forms of retention.

A while ago, Venkatesh Rao [2] made another interesting observation in relation to some of these studies: he compared people "writing papers with AI" to managers who, instead of writing themselves, "delegate and supervise the writing work of a competent intern" (and noted that "prompting well" in such a context means knowing when to chase alpha and "when to let the index run").

Juniors who jump directly into that "managerial" role often lack the expertise and experience in managing (and delegating to) other humans, which is one element that leads to bad outcomes. (As a side note, [3] should be of particular interest to you. Maybe you've come across it already. ☺️)

[1] Cunningham, Scott. “How to Tell If ChatGPT Wrote the Students Essays.” Substack Newsletter. Scott’s Mixtape Substack (blog), December 18, 2024. https://causalinf.substack.com/p/how-to-tell-if-chatgpt-wrote-the.

[2] Rao, Venkatesh. “Prompting Is Managing.” Substack Newsletter. Contraptions (blog), June 19, 2025. https://contraptions.venkateshrao.com/p/prompting-is-managing.

[3] Rao, Venkatesh. “LLMs as Index Funds.” Substack Newsletter. Contraptions (blog), April 1, 2025. https://contraptions.venkateshrao.com/p/llms-as-index-funds.
