Exploratory note

Backpropagation, Adjoint Fields, and Physical Transport Constraints

What physical systems really show, what they do not yet show, and why this still matters for a broader theory of intelligence.

Title correction. This page used to claim a direct derivation of backpropagation from wave equations. That was too strong. The better statement is narrower: some physical systems can compute with propagating fields, and some reciprocal systems can measure gradients in situ. The universal derivation remains open.

1. The durable insight

The durable insight is not that every neural network is literally a reflected-wave machine. It is that learning happens in real substrates, and those substrates must move information around under physical constraints. Once that is taken seriously, three questions become natural:

  1. How is ordinary forward state propagated?
  2. How is update-relevant or instructive information propagated?
  3. What transport costs and structural constraints shape the architecture that results?

Those questions connect this branch directly to the wider thesis about intelligence, control, multiscale organization, and physical budgets.

2. What physical systems actually buy us

Substrate lesson: propagation can be the computation. Wave and field-like media can themselves instantiate useful state evolution. That expands the implementation space beyond conventional digital circuits (a minimal numerical sketch follows below).
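To make "propagation can be the computation" concrete, here is a minimal sketch: a 1D scalar wave equation on a periodic grid, stepped with a standard leapfrog scheme. Every parameter (grid size, wave speed, pulse shape) is an illustrative assumption, not something from this note; the only point is that the medium's own update rule carries and transforms the state.

```python
# A 1D wave equation u_tt = c^2 u_xx on a periodic grid, leapfrog scheme.
# All parameters are illustrative; CFL number c*dt/dx = 0.5 keeps it stable.
import numpy as np

n, steps, c, dx, dt = 200, 400, 1.0, 1.0, 0.5
u_prev = np.exp(-0.05 * (np.arange(n) - 50.0) ** 2)  # Gaussian pulse at cell 50
u = u_prev.copy()                                     # zero initial velocity

for _ in range(steps):
    lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)    # periodic discrete Laplacian
    u_next = 2.0 * u - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u = u, u_next

# The field state *is* the intermediate result of the computation:
# the pulse has split and propagated around the ring under the medium's own rule.
print(float(u.max()), int(np.argmax(u)))
```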

Adjoint lesson: backward fields can be meaningful. In reciprocal media, a physically realized backward field can coincide with the adjoint required for gradient measurement. This is a strong existence result, but not a universal law (see the toy calculation below).
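Here is a toy version of that existence result, assuming a steady-state linear medium discretized as \(A(\theta)\,e = b\) with symmetric \(A\), the discrete stand-in for reciprocity. Because \(A^\top = A\), the adjoint field is produced by driving the same operator with the cost gradient, and the parameter gradient is read off from the overlap of forward and backward fields. The operator, the \(\theta\)-dependence, and the quadratic objective are all illustrative assumptions.

```python
# Reciprocity toy model: forward field and adjoint field pass through the
# SAME operator because A is symmetric. All quantities are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A0 = rng.normal(size=(n, n))
A0 = A0 + A0.T + 10.0 * np.eye(n)   # symmetric ("reciprocal") and well-conditioned
dA = np.eye(n)                       # hypothetical material parameter: theta shifts the diagonal
b = rng.normal(size=n)               # source
t = rng.normal(size=n)               # target field

def field(theta):
    return np.linalg.solve(A0 + theta * dA, b)   # forward field e(theta)

def loss(theta):
    return 0.5 * np.sum((field(theta) - t) ** 2)

theta = 0.3
e = field(theta)                                   # forward solve
lam = np.linalg.solve(A0 + theta * dA, e - t)      # adjoint solve: same operator, since A = A^T
grad_adjoint = -lam @ (dA @ e)                     # d loss / d theta = -lam^T (dA/dtheta) e

h = 1e-6
grad_fd = (loss(theta + h) - loss(theta - h)) / (2.0 * h)
print(grad_adjoint, grad_fd)  # these should agree to roughly 1e-6
```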

Transport lesson: moving information costs structure. Once credit is treated as a transported quantity, topology, latency, and synchronization become central. This is where the branch touches line loss and infrastructure-scale intelligence (the arithmetic sketch below gives the hard latency floor).
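The latency part of this lesson admits a one-line bound: no credit signal can close a loop of radius \(d\) faster than \(2d/c\). The distances below are arbitrary illustrations, not figures from this note.

```python
# Illustrative arithmetic only: speed-of-light floors on round-trip credit latency.
C = 299_792_458.0  # speed of light in vacuum, m/s

for label, d_m in [("on-chip (1 cm)", 1e-2),
                   ("datacenter (100 m)", 1e2),
                   ("continental (3,000 km)", 3e6),
                   ("Earth-Mars mean (~2.25e11 m)", 2.25e11)]:
    print(f"{label:>30}: round trip >= {2.0 * d_m / C:.3e} s")
```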

3. What a real derivation would require

A genuine derivation of backpropagation from physical field dynamics would need all of the following, stated explicitly: the forward dynamics, the adjoint dynamics with a terminal condition, and a readout identity that converts the adjoint field into a parameter gradient.

$$ \text{Forward: } \dot x = F(x, \theta, u) $$

$$ \text{Adjoint: } -\dot \lambda = \left(\partial_x F\right)^{\top}\lambda + \partial_x \ell, \qquad \lambda(T) = 0 $$

$$ \text{Readout: } \frac{dJ}{d\theta} = \int_0^T \lambda^{\top}\, \partial_\theta F \, dt, \quad \text{with } J = \int_0^T \ell(x, u)\, dt $$

The missing theorem is not “can a backward field exist?” The missing theorem is “under what physical conditions does the backward field equal or approximate \(\lambda\)?”
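As a concreteness check on the forward/adjoint/readout triple above, here is a minimal discretized sketch for a hypothetical scalar system. The dynamics \(\dot x = -\theta x + 1\), the quadratic running cost, and the step sizes are all illustrative assumptions; the continuous adjoint is stepped backward with explicit Euler, so the two gradients agree only up to discretization error.

```python
# Hypothetical scalar instance of the triple above:
#   Forward:  x' = F(x, theta) = -theta*x + 1,              x(0) = 0
#   Adjoint: -lam' = (dF/dx) lam + dl/dx = -theta*lam + x,  lam(T) = 0
#   Readout:  dJ/dtheta = integral of lam * dF/dtheta = integral of lam * (-x)
import numpy as np

theta, T, N = 0.7, 5.0, 5000
dt = T / N

def forward(th):
    xs = np.empty(N + 1)
    xs[0] = 0.0
    for k in range(N):
        xs[k + 1] = xs[k] + dt * (-th * xs[k] + 1.0)   # explicit Euler
    return xs

def cost(xs):
    return 0.5 * np.sum(xs[:-1] ** 2) * dt             # J = integral of 0.5*x^2

xs = forward(theta)

# Backward sweep: the adjoint runs in reversed time against the stored
# forward state, accumulating the readout integral as it goes.
lam, grad = 0.0, 0.0
for k in reversed(range(N)):
    grad += dt * lam * (-xs[k])                        # readout: lam * dF/dtheta
    lam += dt * (-theta * lam + xs[k])                 # Euler step of the adjoint, backward in time

h = 1e-5
grad_fd = (cost(forward(theta + h)) - cost(forward(theta - h))) / (2.0 * h)
print(grad, grad_fd)  # agreement up to O(dt) discretization error
```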

4. Why this still matters for the larger agenda

The broader intelligence project asks what sort of object intelligence is. One answer emerging across the essays is: a finite physical system that must build and maintain control-sufficient internal structure while paying for memory, communication, delay, and adaptation. The present note contributes one specific layer to that picture:

Connection to line loss. Once intelligence is treated as a distributed closed loop, the costs of moving state, control, and credit become part of the theory rather than an afterthought.

5. Where the older Fermi extension should sit

The Fermi discussion is better treated as a separate speculative essay about transport-limited closed-loop control over astronomical distances. It may still be interesting. But it should not be presented as a consequence that automatically falls out once “backprop equals wave reflection” has been proven, because that proof has not been established.

6. Useful next steps