Truth Over Theater
How image-driven incentives crowd out real work—and a democratic way to recenter outcomes.
“The spectacle is not a collection of images, but a social relation among people, mediated by images.”
— Guy Debord
Something I’ve struggled with as an engineer is how real problems get eclipsed by the glitz of image and narrative. Marketing has a place; I don’t discount it. However, too often it overrides reality. The story that wins is the one that closes the deal, convinces the client, drives the sale, and lifts profit or the valuation multiple, not the one that best serves customers, society, or the problem itself. People often start in good faith: what begins as an honest attempt to translate the work for outsiders becomes theater. To make it sell, the truth gets bent or broken, and accountability shifts from delivering results to delivering optics and returns. Then KPIs become the job, and the metric replaces the mission.
This shows up outside tech as well. In nonprofits, funders favor whatever’s “hot”—the rebrand, the gala campaign, the ribbon-cutting for a new wing, a celebrity partnership or awareness blitz, the app or “AI pilot”—while maintenance of the systems that already serve people goes unfunded. This is the nonprofit starvation cycle: funders’ unrealistic expectations about overhead starve core operations, capacity erodes, results slip, and the underinvestment deepens. Value that could compound through upkeep never gets the chance.
Inside organizations, teams end up relating through slide decks, objectives and key results, and brand guidelines; the image layer starts setting the rules. A tool that’s good enough is sidelined for something more photogenic—more maintenance, more cognitive load, weaker integration—because the latter reads better in a pitch. We joke about “solving” fragmentation by adding yet another standard, and the joke lands because we’ve all seen it.
Image beats reality until it hits the pavement. ShotSpotter is marketed as high-accuracy, near-instant gunfire detection, sold by the square mile, with glossy dashboards and a promise to fill gaps in 911 calls. Chicago’s inspector general reported that fewer than 1 in 10 alerts produced evidence of a gun crime. By late 2024 the city had moved to end the contract. Officers were chasing alerts that rarely became cases, which pulled time away from victim services and from real hotspots.
UnitedHealth’s naviHealth was marketed as algorithm-guided “right care, right setting,” promising faster discharges and less waste. Reporting and litigation allege it helped drive improper post-acute denials. A federal judge let a class action proceed in 2025, and policy work has documented broader denial patterns. Whatever is ultimately decided in court, the episode shows how models and metrics can outrun oversight.
Hyperloop was pitched as near-supersonic, privately funded transport that would leapfrog rail. What cities got were letters of intent, renderings, and a test track—not an operational route. Hyperloop One, the most prominent company pursuing it, shut down in 2023 without ever winning a contract to build a working line. In Los Angeles, a separate fast-tracked Boring Company “test tunnel” was dropped after a lawsuit. Net result: public attention, staff time, and scarce dollars chased photogenic promises while needed upkeep and proven transit waited.
Even research and measurement aren’t immune. “Surrogation” is what happens when a proxy for a goal becomes the goal, and teams optimize the number instead of the purpose. Think daily-active users instead of task completion, click-through instead of informed reach, test-coverage percent instead of the number of defects escaping to users. As Harvard Business Review notes, badly governed metrics can replace the strategy they were meant to represent.
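A minimal sketch of one guardrail against surrogation, in Python, with hypothetical metric names: pair every proxy with the outcome it stands in for, and flag any report that shows the proxy alone.

```python
# Illustrative sketch of a surrogation guardrail: each proxy metric is paired
# with the outcome it is supposed to represent, and any report that includes
# the proxy without its outcome gets flagged. Metric names are hypothetical.

PROXY_TO_OUTCOME = {
    "daily_active_users": "task_completion_rate",
    "click_through_rate": "informed_reach",
    "test_coverage_percent": "defects_escaped_to_users",
}

def surrogation_warnings(report: dict[str, float]) -> list[str]:
    """Return a warning for each proxy reported without its paired outcome."""
    return [
        f"{proxy} reported without {outcome}: optimizing the number, not the purpose?"
        for proxy, outcome in PROXY_TO_OUTCOME.items()
        if proxy in report and outcome not in report
    ]

if __name__ == "__main__":
    quarterly = {"daily_active_users": 1_200_000.0, "test_coverage_percent": 87.0}
    for warning in surrogation_warnings(quarterly):
        print(warning)
```

The code itself is trivial; the point is the structure of the scorecard, which refuses to display a proxy without the purpose it was meant to track.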
Another pathology is when the story outruns verification. The MIT economics preprint Artificial Intelligence, Scientific Discovery, and Product Innovation drew wide attention in late 2024, in the churn of the AI hype cycle. On May 16, 2025, MIT’s Committee on Discipline said it had “no confidence in the provenance, reliability or validity of the data,” and on May 20 arXiv withdrew the paper. This is the hype loop in miniature: the image of a breakthrough outruns the truth, and attention converts to citations, press, invitations, and funding. The loop pulls individuals, institutions, and audiences into feeding the hype, while the wider public shoulders the consequences and the few capture the gains.
If image governs the work, it’s because ownership, decisions, and scorekeeping reward image. Change those three, and you change the work.
What we reward is what we repeat, and regulation decides what actually pays. Regulators have started to notice the image-over-reality gap, particularly around AI. In September 2024 the U.S. Federal Trade Commission launched “Operation AI Comply,” a sweep against deceptive AI claims, and continued actions into 2025, including an order restricting “robot lawyer” marketing. Enforcement isn’t culture change, but it does reset the minimum for tying claims to truth.
Left alone, spectacle becomes a hype-extraction loop. For some actors it slides into outright grift, because attention mints capital faster than outcomes arrive.
So what can we do about it, practically, not performatively?
The hinge is governance. If money clears on optics, optics will govern; tie money to outcomes, and reality will govern instead. Inside the room, run the work democratically and transparently so the mission can’t be swapped out for the storyline. Give the people doing the work a real say in prioritization. Budget for stewardship: data hygiene, monitoring, retraining, incident response. Write explicit exit ramps for pilots that don’t deliver. Measure success by how the system behaves in production (reliability, safety, user outcomes) rather than by the beauty of the quarterly slide. When Prime Video’s engineering team showed that collapsing a chatty distributed service into a simpler architecture cut costs by about 90% without hurting outcomes, the lesson wasn’t “microservices bad.” It was “let the problem, rather than the fashion, choose the architecture.”
Outside the room, align incentives to reinforce these habits: fund maintenance by default; let grants reward negative results that prevent waste; require auditability and a real appeal path in high-risk settings. Without counterweights, attention and quick-win incentives crowd out facts: labs selected for flash over rigor yield more false positives, press-release overstatement is mirrored in news coverage, and on social platforms false stories travel farther and faster than the truth.
This is also where democratic ownership ties inside to outside. Worker cooperatives and broader employee ownership put voice and accountability where the work is; across many studies, these models are linked to higher productivity, better pay, more job stability, and firm survival. Steward-ownership moves on a different axis: it locks mission in law, separates voting control from profit rights, can cap returns, and constrains sale so purpose isn’t arbitraged. There are working hybrids:
Mētis Construction in Seattle operates as a worker cooperative held in an employee/worker-ownership trust designed for the long term—day-to-day power stays with workers while the mission is legally anchored.
Equal Exchange, a worker co-op, brought in outside capital through non-voting investor shares while keeping one-worker, one-vote governance: classic guardrails against demutualization.
Organically Grown Company shifted to a perpetual purpose trust with a multi-stakeholder governance board to preserve independence and align profits with purpose.
Firebrand Artisan Breads adopted an employee community ownership trust with worker and community representation to lock mission and broaden voice.
None of this requires demonizing individuals. The system selects for spectacle because spectacle clears the next gate: a sale, a grant, a quarter. Our task is to rebalance the system so that truth clears the gate instead. Fund stewardship as a first-class deliverable. Publish preregistered outcome targets and retire pilots that miss. Keep metrics auditable and tied to user outcomes. Design ownership and governance so the mission cannot be arbitraged for optics. Enforce claims so marketing cannot sprint ahead of reality without consequence. Do that, and the plain voice we hear in meetings, the one that says fix the workflow before adding another feature, stops sounding naïve and starts sounding like the plan.
The point isn’t to reject marketing, AI, or any particular tool. It’s to bring truth back to the center, align means with ends, and protect teams from the kind of debt that begins as a headline and becomes an operational burden. When image no longer runs the work, engineering stops being performance and returns to craft. The spectacle loosens its grip, and the simple voice—let us do the right thing—finally has room to be heard.
CommonBytes
This column explores a central question: What should technology’s role be in a world beyond capitalism? Today’s technological landscape is largely shaped by profit, commodification, and control—often undermining community, creativity, and personal autonomy. CommonBytes critiques these trends while imagining alternative futures where technology serves collective flourishing. Here, we envision technology as a communal asset—one that prioritizes democratic participation, cooperative ownership, and sustainable innovation. Our goal? To foster human dignity, authentic connections, and equitable systems that empower communities to build a more fulfilling future.


