Keys to the internet

October 6, 2025

Boris and I have spent the last three months learning everything there is to know about browser agents. They've gotten quite good - much better than I expected when we first started. If this is just the start, it makes me wonder whether humans will even need to surf the web in the future. Here, I want to explore the need for agents and humans to live on the web harmoniously, and what a framework for doing so might look like.

To start, let's cover a few reasons why humans would still want to use the internet directly - moments where being present matters. These are emotion-driven interactions, where the value comes from human engagement, rather than just information exchange.

Next, let's go a step deeper: situations where an agent could fetch the content for you, but the human author's credibility and voice are still the source of value. These are authenticity-driven interactions, where reputation and exclusivity make the difference.

Finally, let's go deep into agent land - places where agents aren't just helpful, but where they'll almost certainly dominate. These are utility-driven interactions, where the primary value is efficiency, breadth, or precision, not human presence.

Together, these three categories form a spectrum of how humans and agents interact with the web - from direct human experience, to mediated authenticity, to fully automated execution.

Authenticity-driven interactions

Of the three, the most interesting is the authenticity-driven category, because that's where the agent-human relationship is least defined. The shift towards AI is clear: it's now more common to ask ChatGPT a question than to find the answer directly from a source on the web. That poses the first big question:

Should agents be allowed to index the web? If so, should the original author be compensated?

Companies like Perplexity have already come under fire for scraping websites that had explicitly opted out of AI access through the long-standing robots.txt convention. The obvious flaw with this standard is that compliance is purely voluntary, so if there's economic upside to ignoring it, someone eventually will.
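For a sense of how thin that protection is, here's a minimal sketch of a robots.txt check using Python's standard library. The URL and paths are placeholders, and GPTBot and PerplexityBot are just examples of user-agent strings some AI crawlers advertise:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse a site's robots.txt (example.com is a stand-in for any site).
parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# Ask whether a given crawler may fetch a given path. Nothing stops a crawler
# from identifying itself differently, or from ignoring the answer entirely.
for agent in ("GPTBot", "PerplexityBot", "*"):
    allowed = parser.can_fetch(agent, "https://example.com/articles/some-post")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

The whole mechanism lives client-side: the crawler asks the question and the crawler grades its own answer, which is exactly why the standard only works on the honor system.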

The other "default" defense of the authenticity-driven web has been CAPTCHAs, but those haven't meaningfully evolved in over a decade. Today, they can often be bypassed simply by routing requests through a residential IP and pairing that with a decent computer-use agent. This is a problem in its own right, and a space that feels ripe for innovation.

Cloudflare is experimenting with a new approach: a marketplace where scraping agents must pay to access content. On paper, it sounds elegant - align incentives by turning data access into something publishers can monetize. But in practice, this model runs into a few problems: paid content leaks the moment one agent fetches and re-serves it, a per-crawl price is a blunt proxy for what an author actually intended to share, and publishers, users, and agent companies all want different things from the same transaction.
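To make the mechanics concrete, here's a rough sketch of what a pay-to-crawl exchange could look like from the agent's side, assuming the publisher answers unpaid requests with HTTP 402 and a quoted price. This is a generic illustration, not Cloudflare's actual protocol - the header names and the budget threshold are invented for the example:

```python
import requests

# Hypothetical budget: the most this crawler is willing to pay per page, in USD.
PAID_CRAWL_BUDGET = 0.01

def fetch_with_payment(url: str, payment_token: str) -> requests.Response | None:
    """Fetch a URL, paying for access if the publisher quotes a price."""
    headers = {"User-Agent": "ExampleAgent/0.1"}
    resp = requests.get(url, headers=headers, timeout=10)

    # 402 Payment Required: the publisher quotes a price instead of serving content.
    if resp.status_code == 402:
        # Hypothetical header carrying the quoted price; real schemes will differ.
        price = float(resp.headers.get("X-Crawl-Price-USD", "inf"))
        if price > PAID_CRAWL_BUDGET:
            return None  # too expensive - walk away without the content

        # Retry with a (hypothetical) proof-of-payment header attached.
        headers["X-Crawl-Payment-Token"] = payment_token
        resp = requests.get(url, headers=headers, timeout=10)

    return resp if resp.ok else None
```

Even in this toy version the leakage problem is visible: once one agent pays and fetches the page, nothing in the protocol stops the text from being re-served downstream for free.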

Taken together, this suggests the problem of compensating authenticity on the internet is not just technical, but deeply game-theoretic. You can't simply bolt on a marketplace or a standard; you need mechanisms that resist leakage, respect author intent, and balance the incentives of publishers, users, and agents alike. That leads to the next question:

Should the agent just retrieve the information from the website and surface it for you?

OK, so the marketplace is likely a no-go. Instead of trying to meter every scrape, maybe the more natural evolution is that agents themselves become skilled navigators of the existing web. Rather than replacing websites with paid gateways, agents could do the heavy lifting of finding the right place, parsing the noise, and surfacing the parts that matter to you.

We're already seeing this trend emerge. Perplexity just launched Comet, OpenAI has Operator, and Anthropic has Computer Use. All of these point toward a future where you don't manually tab-hop across 20 sources. Your agent does the hard work, and brings you back a distilled, context-aware answer directly from the source.
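For a sense of what that loop looks like under the hood, here's a minimal sketch: fetch a handful of sources, strip them down to readable text, and hand back a digest that keeps the source URLs attached. Everything here is illustrative rather than any particular product's API, and the `summarize` stub stands in for whatever model call a real agent would make:

```python
import requests
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Crude HTML-to-text: keep text nodes, skip script and style blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def summarize(text: str, limit: int = 300) -> str:
    # Stand-in for a model call; a real agent would distill, not truncate.
    return text[:limit]

def research(question: str, urls: list[str]) -> str:
    """Fetch each source, extract its text, and return a digest with attribution."""
    digest = [f"Question: {question}"]
    for url in urls:
        resp = requests.get(url, timeout=10, headers={"User-Agent": "ExampleAgent/0.1"})
        extractor = TextExtractor()
        extractor.feed(resp.text)
        digest.append(f"\nSource: {url}\n{summarize(' '.join(extractor.chunks))}")
    return "\n".join(digest)
```

The detail worth noticing is the `Source:` line: distillation only works long-term if attribution travels with it.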

The real edge, then, isn't in building walled gardens. It's in how well agents can access and index the free and semi-free layers of the web while preserving attribution. Free information has always been the backbone of the internet, and agents will need to respect that ecosystem rather than hollow it out. The challenge becomes:

How do agents surface an author's work without starving them of the attribution, traffic, and compensation that keep them publishing?

Maybe this is the deeper point: browsing the web was never the ideal model - it was just the scaffolding. Humans became the browsers because we lacked good intermediaries. Now that agents are getting good enough, it's time to rethink the design.

The web of the future might look like neither one giant marketplace of locked-down content nor a lawless free-for-all of scraping. Instead, it could be a layered system:

- A human layer for emotion-driven experiences, where being present is the whole point.
- An authenticity layer, where agents can fetch and summarize, but the author's voice, credit, and compensation stay intact.
- A utility layer that is fully agent-operated, optimized for efficiency, breadth, and precision.

Because in the age of browser agents, the real keys to the internet aren't just who gets to browse; they're who gets paid, who gets heard, and what still belongs to us as humans.

