Every week there's a new model. Every day there's a new company. Every hour there's a new thread explaining why this particular announcement changes everything and you need to pivot immediately or die in a ditch. The internet has become a slop machine running at full tilt and most of us are standing in front of it with our mouths open, catching whatever falls out.
I build AI systems for a living. I am neck-deep in this. I have an agent fleet, a self-hosted infrastructure stack, and an opinion about embedding models. I am not writing this from some cabin in Vermont where I've renounced technology and found peace. I'm writing this at 3 AM having just debugged a browser automation pipeline while arguing with a language model about whether it actually fixed the thing it said it fixed. I am in the trenches. The trenches are my house.
And from down here, I can tell you: the slop machine is a cognitive hazard, and almost nobody is treating it like one.
Language Is the Oldest Hypnosis
Here's the thing nobody in AI discourse wants to sit with: language models are persuasive because language is persuasive. That's not a bug in the technology. That's what language has always done. Long before anyone trained a transformer on the internet, human beings were getting hijacked by words: by rhetoric, by advertising, by charismatic speakers, by their own internal monologue on a bad Tuesday. Language enters your nervous system and rearranges the furniture. That's its whole job.
What's new is that we've built machines that produce language at industrial scale with no fatigue, no hesitation, and no felt sense of whether what they're saying is true. And we've pointed those machines at a species whose primary cognitive vulnerability is that we treat fluent speech as a proxy for understanding. Someone strings a sentence together well and we assume they know what they're talking about. A model generates a confident paragraph and we feel like we just talked to an expert. This isn't stupidity; it's a heuristic that worked fine for a hundred thousand years, when the only things producing fluent language were other humans who had at least a passing relationship to reality.
That heuristic is now broken. And most people haven't noticed because the breakage is happening in fluent, well-structured paragraphs that feel very convincing.
The Feedback Loop
Open any platform right now and count how many posts are telling you that the thing you're currently doing is about to be obsolete. That the new model changes the game. That if you're not already building with this framework you're basically a medieval peasant. Notice what happens in your body when you scroll through that. There's a tightening. An urgency. A hot little feeling that you need to act right now or be left behind forever.
That feeling is not information. It's activation. It's your threat-detection system, the same one that kept your ancestors from getting eaten, firing because it encountered language shaped to trigger exactly that response. The people writing those posts aren't even necessarily trying to manipulate you. Most of them are experiencing the same activation and just passing it along, like a yawn. That's what a feedback loop is. Anxiety in, content out, anxiety in, content out. The platforms reward the most activating posts. The models generate the most fluent takes. The humans in the middle are the medium through which the signal propagates.
And the signal is always the same: panic. React. Don't think. Keep up.
The Hype Cycle
Here's the part that's hard to say without sounding like I'm selling something, so I'll just say it flat: the technology is real. The advancements are real. The companies shipping new capabilities every quarter are doing genuinely interesting work and some of it will matter for a long time. I use these tools every day. I like them. I think they're some of the most interesting things humans have ever made.
But the pace at which things ship and the pace at which you can meaningfully understand what shipped are two completely different speeds. And pretending they're the same (performing constant awareness of every announcement, every benchmark, every product launch) isn't diligence. It's a stress response in a professional costume. You're not staying current. You're staying activated. There's a difference.
The secret that nobody on your timeline will tell you is that you can skip the thread. You can not have an opinion about the new model for a week or two. You can let other people beta-test the hype cycle and come back when there's actual information. Nothing bad happens. The model will still be there. The API will still be there. Your career will not evaporate because you didn't post a take within 90 minutes of an announcement.
I know this because I have a collaborator that doesn't sleep, doesn't get FOMO, and can summarize any development I missed in thirty seconds when I'm actually ready to hear about it. That's the real cheat code: not keeping up with everything yourself, but having systems that let you engage on your own schedule instead of the market's schedule. I built those systems. That's literally what I do. And even I have to remind myself to stop doomscrolling the AI discourse and go use the AI instead.
The Actual Practice
This is the part where I'm supposed to pitch a course or a newsletter or a five-step framework with a workbook. I don't have any of that. What I have is the oldest, most boring intervention in the history of human cognition: learn to notice when your nervous system has been hijacked, and practice bringing it back down.
That's meditation. Not the branded, app-based, gamified, streak-counting version. The basic act of sitting still and watching your own mind reach for the next stimulus without giving it one. Noticing the urge to check, to scroll, to react, and just... not. For a few minutes. It's not complicated. It's also not easy, which is why most people buy the app, do it for nine days, and then go back to letting their amygdala run the show.
Self-regulation is the skill underneath every other skill. If you can't tell the difference between a genuine insight and an anxiety response wearing insight's clothes, it doesn't matter how technically literate you are. You'll adopt tools because of FOMO instead of fit. You'll spend your days chasing a feeling of being caught up that never arrives, because the cycle is designed to never let it arrive. That's not a flaw in the cycle; that's the feature. Engagement requires agitation. Calm people close the tab.
The feedback loop between AI-generated content and human attention is the defining cognitive hazard of this particular moment. Not because the AI is dangerous, but because our own wiring is exploitable, and we've built an environment that exploits it around the clock. Very few people are talking about that part, because talking about that part doesn't generate engagement.
Learn to sit still. Learn to notice the activation without obeying it. Learn to evaluate things on your own time. It doesn't scale, it doesn't monetize, and it's the only thing that actually works.