“No matter how good the ghost, I am convinced that a book loses realism when an interpreter stands between the storyteller and his audience.” - Michael Collins, 1973
This stuck with me today. Upon reflection, we have many more interpreters of all forms than they did back in 1973. Capturing an author's intent may not matter for a refrigerator manual, but it does when a human is actually expected to read and be persuaded to act or learn. These forms of human-centric writing, from strategic reports to works of fiction, contain an essential human element. An element that feels lost when given up to someone else who doesn't share the same experience.
Novice authors in these spaces don't notice that their voice was lost in interpretation. Professional authors can tell; it's one of the most important aspects of their work. But those less accustomed to calling themselves an author often don't look to see themselves in their words.
At the other end, even a poor reader can tell that a voice is absent. They discover that the aim of the writing is not to connect with them but simply to present to a generic “them”. This is similar to what you might read in the refrigerator manual, and it takes a good deal of the fun out of reading.
Professional ghostwriters attempt to disappear behind their subject and present from the subject's perspective. Professional biographers bring themselves into their stories to help highlight what they've seen and learned about their subject in a more relatable way. AI brings itself into its storytelling as well, but unfortunately has no connection to offer. It's a bad biographer without much to add besides trying to act helpful.
I find even the most clever AI-assisted writing ends up being more noise than signal. I slowly realize I'm looking for communication cues that don't exist and get frustrated trying to interpret the ideas presented to me. Written words are communication, and it's annoying to find the person I'm conversing with has outsourced their part to cold silicon.
But AI is just assisting. Shouldn't it be able to surface this inner human voice that is prompting it? Can't it find a way to express the human element that conjured it? It seems not.
I wonder if this lack of AI capability exists because engineers see words as they are and assume the meanings are direct. Perhaps those involved in this research don't have a sense for the words behind other words. Or maybe the concept is so difficult to find, capture, replicate, and present that the entire pursuit is abandoned before it ever really begins… it's not commercial enough.
While they may miss essential concepts, AI summaries have a reasonable length, and I don't feel the missing sense of authorship in them. This writing can be skimmed and doesn't ask for much time. By being short, it doesn't leave much room to assume someone is trying to say more than what has been said.
NotebookLM's AI-created podcasts, however, have this authorship problem. While listening, I feel an energy from the narrators that sounds good, but eventually the ideas just don't quite connect. When I look at the source material they're referencing, I find that sometimes they're just getting excited about random specs or features unrelated to the general purpose of the episode. An overall deflating experience.
This podcast tool is great for getting through something that's awful to read, like terms and conditions documents, but it's poor at covering actually interesting material. It just misses the point of interest.
But that's because there is no point of view. There is no incentive or personal stake in the creation. It's simply a mass of generated content hastily constructed from available inputs and aligned to a shallow sentiment. Words spoken from a bare-bones command without thought for the inner workings of what we're really trying to say. A meager interpretation.
I wonder if this is why AI feels more aligned when we voice our prompts… We say all of what we were trying to say, not just what we can easily regurgitate and have the energy and determination to type. I've been surprised by how much more I say when conversing with voice mode on the latest ChatGPT.
Perhaps we still haven't captured our lightning in a bottle. We may need more tools to understand and interpret our expression more fully so that it can be replicated truthfully. Still, I wonder if the AI community at large will be able to tell that the message has been lost on its way out the door, that the interpreter has mistaken our underlying meaning and disconnected us from those we were most hoping to reach.