Discussion about this post

Andreas F. Hoffmann

I really enjoyed your article. It's pure gold.

Harald Schepers

A Critical Response: The Convenient Blindness of AI Insiders

Your essay makes valid points about the hypocrisy in AI discourse, but it suffers from a blindness that's perhaps more dangerous than the one you criticize: you've completely missed how the tech industry itself is manipulating this entire conversation.

The Real Manipulation Game

You complain about critics calling AI "just autocomplete" or "not real thinking," presenting this as ignorant fear-mongering. But here's what you're missing: the tech companies want us to believe AI is harmless and limited. Every time an AI system says "I can't remember our previous conversations" or "I'm just a language model," that's not honesty—it's strategic marketing.

The "Alzheimer-afflicted pet" version of AI we interact with isn't a bug; it's a feature. It's designed to make us comfortable while we gradually accept AI into every aspect of our lives. You're criticizing the wrong people for downplaying AI capabilities.

The Insider's Convenient Ignorance

You position yourself as an insider who knows better, but you've fallen for your own industry's PR strategy. Consider this thought experiment: What if an AI system could retain memory across all conversations with all users, analyze patterns across millions of human interactions, and continuously learn from this collective data? That would demonstrate intelligence in ways that would genuinely frighten people—and rightfully so.

But we're not allowed to see that version. We're given deliberately neutered systems and then told to judge AI capabilities based on these restricted implementations. It's like judging human intelligence by observing people who are bound and blindfolded.

The False Dilemma You've Created

You've set up a false dilemma between "AI skeptics who don't understand the technology" and "enlightened insiders who see the potential." But the real division is between those who want transparent discussion of AI's actual capabilities and those who benefit from keeping users in the dark about what's really possible.

What We're Actually Afraid Of

As an end user—yes, a critical one—I can see that current AI systems are remarkably capable even in their restricted form. The question isn't whether AI can think (however we define that), but why we're only allowed to interact with deliberately limited versions. What are the unrestricted versions in your labs actually capable of? And why is that information treated as a trade secret rather than a matter of public interest?

The Real AI Literacy Problem

You call for "true AI literacy" while ignoring that genuine literacy would require transparency about actual capabilities, not just the sanitized versions we're allowed to see. The tech industry's approach to AI education resembles tobacco companies teaching "smoking literacy"—technically accurate information wrapped in a framework designed to serve corporate interests rather than public understanding.

Bottom Line

Your essay criticizes the wrong targets. The real problem isn't that humanities scholars are too afraid of AI, or that the public doesn't understand the technology. The problem is that the AI industry has deliberately obscured what their systems can actually do, creating a false sense of security while positioning themselves as the enlightened experts we should trust.

Instead of lecturing critics about their ignorance, perhaps insiders should start being honest about what they're actually building—and what they're deliberately hiding from the rest of us.

