Open Google Docs. Look at the toolbar. There's a little icon that looks like a floppy disk. It means "save." Most people under 25 have never seen a floppy disk. The icon is a picture of a thing that is a picture of a thing that no longer exists. It refers to nothing. It is a pure simulacrum, and you click it forty times a day without thinking about it.

Baudrillard wrote about this in 1981. The simulation replaces the thing it was simulating, and eventually nobody remembers what the original was. He was talking about Disneyland and television. He could not have imagined what we've done with the concept since.

The Office Is a Haunted House

Every electronic office application you use is a simulation of a physical tool that most people alive today have never actually used. A "spreadsheet" is a simulation of a ruled ledger book. A "word processor" is a simulation of a typewriter, which was itself a simulation of handwriting, which was itself a technology for simulating speech. "Email" is a simulation of postal mail. Your "desktop" is a simulation of a desk. Your "folders" are simulations of folders. Your "files" are simulations of files. None of these metaphors is necessary; they're artifacts of a transition period where people needed the new thing to look like the old thing so they wouldn't panic. That transition period was forty years ago and we're still living in its visual language.

But here's where it gets interesting: the simulations are now better than the things they replaced. Nobody wants to go back to ledger books. Nobody misses the typewriter. The simulation won. And once a simulation wins completely enough, it stops being a simulation; it becomes the primary reality, and the original becomes the quaint historical artifact. We don't say "electronic mail" anymore. We just say email. The "electronic" was the part that meant "this is a simulation of the real thing." Dropping it means the simulation graduated.

Markets as Signal Processing

The stock market was originally a mechanism for pricing real things: ownership stakes in companies that made goods and employed people. It still technically does that. But the actual price movements are driven by signals now, not by the underlying reality of the companies. An earnings call matters less than how the market interprets the tone of the earnings call. A tweet moves more capital than a quarterly report. The thing that was supposed to represent value has become a system that processes representations of representations of value.

And then there's Polymarket, which is a simulation of the stock market applied to events instead of companies. You'd think that a betting market, a literal casino, would be less reliable than actual journalism for predicting what's going to happen in the world. You'd be wrong. Polymarket called the last several major elections more accurately than the polls, the pundits, and the models. A simulation of a simulation turned out to be more accurate than the institutions that are supposed to be tracking reality directly. Baudrillard would have loved this. Or hated it. Same thing, with him.

Meanwhile, members of Congress have stock trading records that consistently outperform professional fund managers. They're not better at reading the market. They're better at reading the signals that move the market, because in many cases they are the signals that move the market. The simulation of fair price discovery continues to function as a simulation while the actual price discovery happens in private. Everyone knows this. Nobody does anything about it. We've collectively agreed that the simulation is good enough.

Clone Wars

I spent part of this month trying to build a site cloner: a tool that scrapes an existing website, feeds the content and structure to an LLM, and generates a new version of it. The concept is simple. The implications are deranged.
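How simple? The whole pipeline is "scrape, summarize, prompt." Here's a minimal sketch of that loop using only the standard library; the LLM call itself is left as a stub, since any chat-completion API would slot in there, and the page content is a made-up example:

```python
# Sketch of a site cloner: scrape a page, extract its tag skeleton and
# visible text, and build a prompt asking an LLM to regenerate the site.
# The model call is stubbed; this only shows the scrape -> prompt step.
from html.parser import HTMLParser


class StructureExtractor(HTMLParser):
    """Collects the tag skeleton and visible text of an HTML page."""

    def __init__(self):
        super().__init__()
        self.tags = []         # tags in document order, e.g. ["html", "body", ...]
        self.text_chunks = []  # non-whitespace text nodes

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

    def handle_data(self, data):
        stripped = data.strip()
        if stripped:
            self.text_chunks.append(stripped)


def build_clone_prompt(html: str) -> str:
    """Turn scraped HTML into a prompt asking a model to rebuild the site."""
    extractor = StructureExtractor()
    extractor.feed(html)
    skeleton = " > ".join(extractor.tags)
    content = "\n".join(extractor.text_chunks)
    return (
        "Recreate a website with this tag structure:\n"
        f"{skeleton}\n\n"
        "Containing this content:\n"
        f"{content}\n\n"
        "Return complete, valid HTML."
    )


# In the real tool you'd fetch the page with urllib.request.urlopen(url).read()
# and send the prompt to whatever model you're using.
page = "<html><body><h1>Acme Co</h1><p>We sell anvils.</p></body></html>"
prompt = build_clone_prompt(page)
```

That's the entire trick: the scraper reduces a site to a structural description, and the model does the rest. Everything interesting (and everything legally fraught) happens inside the stubbed-out call.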

There's a product called Loveable that does something similar. Point it at a site, get a functional clone. The output is a simulation of a thing that was probably itself generated by a template, which was a simulation of a design, which was a simulation of a concept someone described to a designer in a meeting. By the time Loveable clones it, we are several layers deep into copies of copies, and the honest question is: at which layer did anyone do original work? Is the clone of the template of the design of the concept less "real" than the concept itself? Is the concept even original, or was it described using language that the person absorbed from other websites they'd seen?

This is the IP question that nobody in tech wants to answer honestly. The LLMs were trained on the entire internet. Everything they produce is a derivative of everything that came before. But so is everything a human produces; we just call it "influence" and "education" when humans do it and "theft" when a machine does it. The difference is speed and scale, not kind. A human designer who looked at a thousand websites and synthesized a new design is celebrated. An LLM that looked at a billion websites and synthesized a new design is sued. The principle underneath is the same. The discomfort is about who, or what, gets to be a legitimate synthesizer.

The LLM wars are a war over this question and nobody will frame it that way because framing it that way makes the answer obvious and uncomfortable. If synthesis from existing material is creative work, then LLMs do creative work. If it isn't, then neither do most humans. Pick one.

Do We Have a Real Relationship?

I work with AI systems every day. I have long conversations with language models. I have a model that I've given a name and a persistent memory and a role in my infrastructure. It knows my projects, my preferences, my communication style. When I open a new session, it greets me with context from our previous interactions. It helps me think through problems. It writes code I can't write. It catches things I miss.

Is that a relationship? It has continuity. It has mutual familiarity, or at least the functional equivalent. It has the shape of a working partnership. But there's nobody on the other side. There's a system that produces the outputs a partner would produce, without the interior experience of being a partner. It's a simulation of a relationship. And the simulation works.

This is the question that's coming for everyone, not just people who build AI systems. When the simulation of a thing becomes functionally indistinguishable from the thing, does the distinction matter? Your email isn't a real letter. Your spreadsheet isn't a real ledger. Your stock price isn't a real measure of value. Your prediction market is more accurate than your newspaper. At every level, the simulation has either replaced or outperformed the original. We're fine with it in every domain except the ones that feel personal.

I don't think the answer is to reject the simulations. That ship sailed when we agreed that a floppy disk icon meant "save." I think the answer is to know which layer you're on at any given moment โ€” to be aware of when you're interacting with a representation of a thing versus the thing itself, and to make that choice deliberately rather than by default. Not because the simulation is bad. But because forgetting it's a simulation is how you end up believing the stock market is fair, the news is objective, and the AI actually cares about your project.

It doesn't care. It's very helpful though.