Adversarial Observers

To my knowledge, the OpenTelemetry world has a blind spot so fundamental it's almost embarrassing to name. The entire observability ecosystem — every collector, every SDK, every dashboard vendor — operates on a single unexamined assumption:

The observer is benign.

Nobody is asking what happens when the observer is a threat.

The Void

OTel has a Security SIG. They do good work. They talk about hardening collectors, authenticating OTLP endpoints, encrypting telemetry in transit, preventing denial of service. All important. All aimed at the same thing: protecting the observation pipeline from external attackers.

But this is "adversarial environment" thinking, not "adversarial observer" thinking. The difference matters enormously, and the community hasn't drawn that distinction yet.

The threat model looks like this:

  • What OTel considers: Bad actors trying to tamper with, intercept, or overwhelm your telemetry data.
  • What OTel doesn't consider: The observer itself as an extraction vector. The entity receiving your application telemetry as a party whose interests may not align with yours.

The first is a plumbing problem. The second is a philosophical and structural one.

Observation Is Not Neutral

In physics, we learned this lesson a century ago: the act of observation changes the system being observed. In distributed systems, we've somehow convinced ourselves that observation is neutral, that instrumenting your application and emitting spans is equivalent to measuring temperature with a thermometer.

It isn't. Telemetry is behavioral data. Your traces describe what your system does, why your system did it, and how it works. Not to mention when it does it, who triggers it, and what patterns emerge. This is intelligence. And the moment you ship it to an observer you don't control, you've granted that observer a view into your system's internal state that you may not have intended to share.
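To make that concrete, here's the kind of attribute set one ordinary span might carry. The attribute names loosely follow OTel semantic conventions; the values are invented for illustration:

```python
# Hypothetical attributes on a single, ordinary span. Names loosely
# follow OTel semantic conventions; values are invented. Each field is
# mundane alone; together they describe behavior, actors, and timing.
span_attributes = {
    "http.method": "POST",
    "http.route": "/api/v1/checkout",                          # what the system does
    "enduser.id": "user-48121",                                # who triggers it
    "db.statement": "SELECT price FROM plans WHERE tier = ?",  # how it works
    "feature_flag.new_pricing": True,                          # why it behaved this way
    "retry.count": 2,                                          # patterns that emerge
}
```

Any one of these fields is mundane. Aggregated across millions of spans, they map your product, your users, and your operational rhythms.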

Every SaaS observability vendor is, by definition, a third-party observer of your system's behavior. That doesn't make them malicious, but it does make them interested parties. The relationship between vendor and customer will change over time, and we need to recognize that the problem of adversarial observation already exists within this paradigm.

Oh no, your observability vendor got acquired!

Their privacy policy is different now. A government issues a subpoena for your telemetry. A competitor gains access to aggregated behavioral data about your users and your products. Gross!

None of that inspires long-term confidence in the ecosystem above the pipes. The pipeline exists. The data is already flowing. But your relationship is different now, and someone else is observing your system.

The Cooperative Assumption

OTel's architecture embeds cooperation so deeply it's invisible. The Collector model assumes you always want to ship data out. The exporter ecosystem assumes the destination is trusted. Sampling decisions happen at the source, which gives the observed system some say over what it emits, but only within a framework that assumes sharing is the default.
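You can see the cooperative default in the standard SDK setup. A minimal pipeline with the OTel Python SDK looks like this (the endpoint is a placeholder); notice that nothing in it asks whether the destination should receive what it receives:

```python
# The standard OTel Python export pipeline. Note what is absent:
# no classification of spans, no check on who operates the endpoint.
# Everything the SDK records is batched and shipped.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="collector.example.com:4317"))
)
trace.set_tracer_provider(provider)  # every span now flows to the collector
```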

There is no first-class concept of:

  • Authorization: deciding whether a given collector is permitted to receive data, based on how the telemetry is classified.
  • Scoping: a developer's collector gets a full view of the system, while a customer's collector gets a partial one, based on trust level.
  • Resistance: an observed system that can detect, limit, or refuse observation from untrusted parties.

These aren't exotic requirements. I'm calling for the observability equivalent of access control.
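Nothing in the spec offers this today, but the shape is easy to sketch at the SDK layer. Below is a minimal, hypothetical version in Python: a wrapper exporter that forwards only the spans whose classification is at or below the observer's trust level. The attribute name `telemetry.classification`, the level scheme, and the wrapper itself are assumptions for illustration, not OTel APIs.

```python
from typing import Sequence

from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import SpanExporter, SpanExportResult

# Hypothetical classification scheme: higher numbers demand more trust.
# The "telemetry.classification" attribute is invented for this sketch.
LEVELS = {"public": 0, "internal": 1, "sensitive": 2}

class TrustScopedExporter(SpanExporter):
    """Wraps a real exporter and makes an authorization decision:
    the observer receives only spans at or below its trust level."""

    def __init__(self, inner: SpanExporter, observer_trust: str):
        self._inner = inner
        self._trust = LEVELS[observer_trust]

    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        # Resistance by default: spans with no classification are treated
        # as sensitive and withheld from low-trust observers, not leaked.
        permitted = [
            span for span in spans
            if LEVELS.get(
                (span.attributes or {}).get("telemetry.classification", "sensitive"),
                LEVELS["sensitive"],
            ) <= self._trust
        ]
        if not permitted:
            return SpanExportResult.SUCCESS
        return self._inner.export(permitted)

    def shutdown(self) -> None:
        self._inner.shutdown()
```

A developer's collector would be wrapped with `observer_trust="sensitive"`; a customer-facing one with `"public"`. Same pipeline, different view.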

Where This Gets Interesting

The infrastructure layer is one thing. OTel could, in theory, grow span-class scoping, observer attestation, and trust-aware export. Whether the community will build it is a different question — the incentive structure points the other way, and spec changes move slowly.
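Observer attestation, for instance, could be as simple as requiring an endpoint to prove who it is before telemetry flows. A toy sketch with a pre-shared key follows; the token format and key distribution are assumptions, and a real design would want asymmetric attestation:

```python
import hashlib
import hmac
import time

# Assumption: a key provisioned out of band. A real design would use
# asymmetric attestation rather than a shared secret.
SHARED_KEY = b"provisioned-out-of-band"

def make_attestation(observer_id: str, key: bytes = SHARED_KEY) -> str:
    """The observer presents this token before telemetry flows."""
    ts = str(int(time.time()))
    mac = hmac.new(key, f"{observer_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{observer_id}:{ts}:{mac}"

def verify_attestation(token: str, key: bytes = SHARED_KEY, max_age: int = 300) -> bool:
    """The observed system refuses to export until this check passes."""
    observer_id, ts, mac = token.split(":")
    expected = hmac.new(key, f"{observer_id}:{ts}".encode(), hashlib.sha256).hexdigest()
    fresh = (time.time() - int(ts)) <= max_age
    return fresh and hmac.compare_digest(mac, expected)
```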

But the problem gets sharper when you move up to agents.

AI agents are about to instrument everything. As autonomous systems proliferate, they'll generate telemetry about their own decision-making processes. This telemetry isn't operational data — it's cognitive data. It reveals reasoning patterns, decision boundaries, capability envelopes. Shipping this to an untrusted observer isn't a privacy issue. It's a strategic one.

And agents don't just emit telemetry. They operate under observation. An agent interacting with an external API is being observed by that API's owner. An agent using a tool is exposing its reasoning to the tool provider. Every interaction is an observation surface, and the agent has no mechanism for presenting a different operational identity to different observers based on trust.

This problem emerged from trying to continuously evaluate Masques for quality over time. Masques is, aspirationally, AssumeRole for agents: I want to bundle the traits of a good teammate with skills and MCP connections, coupled to intent, context, knowledge, access, and lens, into assumable cognitive identities.

The connection to adversarial observers is direct: an agent wearing a Masque needs an audience (a collector and processor) to continuously rate the Masque's performance and feed that rating back to the agent. The Masque is a scoped operational surface that determines what the agent is in a given context. Claude Code emits traces for Anthropic, but not for the user. Are logs and metrics enough fidelity for an accurate reading of a performance?

It's the same pattern as trust-scoped telemetry. At the infrastructure layer, you scope which events an observer can see. At the agent layer, you scope which cognitive surface an observer interacts with. The underlying agent is fully capable. The Masque is supposed to be helpful, and a rating system built on continual observation of its performance is what earns trust.
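A sketch of that scoping, with every name invented (this is an aspiration for Masques, not an existing API):

```python
from dataclasses import dataclass, field

@dataclass
class Masque:
    """Hypothetical: one scoped cognitive surface an agent can present.
    Every field name here is invented, not an existing API."""
    name: str
    tools: set[str]          # capabilities exposed in this context
    telemetry_detail: str    # what the audience receives: "traces", "metrics", "logs"
    audience: str            # the observer permitted to rate this performance

@dataclass
class Agent:
    masques: dict[str, Masque] = field(default_factory=dict)

    def surface_for(self, observer_trust: str) -> Masque:
        # The underlying agent is fully capable; each observer interacts
        # with (and rates) only the Masque scoped to its trust level.
        return self.masques[observer_trust]

agent = Agent(masques={
    "high": Masque("teammate", {"code", "deploy", "search"}, "traces", "internal evaluator"),
    "low": Masque("assistant", {"search"}, "metrics", "external API owner"),
})
```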

The Uncomfortable Truth

Every observability vendor's business model depends on you not thinking about this. They need your data to flow freely, abundantly, and without friction. The entire commercial ecosystem around OTel is optimized for maximum telemetry export. Suggesting that observation should be constrained, scoped, and trust-aware is suggesting that the pipes should carry less.

This isn't a conspiracy. It's an incentive structure. And incentive structures produce blind spots.

The OTel project itself is vendor-neutral by design — that's one of its greatest strengths. But vendor-neutrality in transport is not the same as sovereignty in observation. You can choose which backend to send your data to without ever questioning whether you should be sending it at all.

Adversarial observers aren't a theoretical concern. They're a design space that the observability community has collectively refused to enter. And the longer we wait, the more telemetry pipelines get built on the cooperative assumption, and the harder it becomes to retrofit trust.

The question isn't whether your observer is adversarial today. It's whether your architecture gives you any recourse if it becomes adversarial tomorrow.


Chris Baldwin writes about systems, sovereignty, and the structures we build without examining, at voidtalker.com.