Revised research branch

Wave / Credit Transport Research Notes

A narrower branch inside the broader intelligence agenda: how learning-relevant signals move through finite physical systems, and when wave / adjoint language is genuinely informative.

Revision note. Earlier versions of this section overstated the case. The project no longer claims that backpropagation has already been derived from wave reflection, or that cortex has already been shown to implement literal reflected-wave backprop. The more careful question is: what are the physically plausible ways to transport credit or instructive information?

Layered view of the evidence

More solid

Physical media can compute

Wave and field-like substrates can realize neural-network-like inference or temporal computation. This makes “dynamic propagation system” a serious implementation lens rather than a metaphor.
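
A minimal sketch of what "a field-like substrate realizes inference" can mean, under deliberately toy assumptions: the medium is reduced to a fixed complex transmission matrix T, inputs are encoded as field amplitudes, and the readout is detected intensity. The names here are illustrative stand-ins, not a specific device.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 16, 4

# Toy model of a fixed passive linear medium: a complex transmission matrix.
# Propagation through the medium performs the linear mixing "for free".
T = (rng.normal(size=(n_out, n_in))
     + 1j * rng.normal(size=(n_out, n_in))) / np.sqrt(n_in)

def infer(x):
    field_in = x.astype(complex)    # encode the input as field amplitudes
    field_out = T @ field_in        # physical propagation = matrix multiply
    return np.abs(field_out) ** 2   # intensity detection = built-in nonlinearity

x = rng.normal(size=n_in)
print(infer(x))  # linear mixing plus pointwise nonlinearity, no digital arithmetic
```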

More solid

Some substrates can carry gradients

In reciprocal or adjoint-friendly physical systems, backward fields can locally encode exact or approximate gradients. That is a hard engineering fact, but only under specific conditions.
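
A concrete sketch of that fact, under the simplest assumptions that make it exact: a steady-state reciprocal medium modeled as a symmetric system matrix A(p) = A0 + diag(p), where p stands for local tunable scatterers. Because A is symmetric, the backward (adjoint) solve reuses the forward propagator, and the gradient at each site is the purely local product of forward and backward fields. The matrices are random stand-ins, not a physical design.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Reciprocal medium: a symmetric, well-conditioned system matrix A(p).
A0 = rng.normal(size=(n, n))
A0 = 0.5 * (A0 + A0.T) + n * np.eye(n)
p = rng.normal(size=n)      # local scatterer strengths (the learnable knobs)
s = rng.normal(size=n)      # forward source
t = rng.normal(size=n)      # target readout

def A(p):
    return A0 + np.diag(p)

# Forward field and quadratic loss L = 0.5 * ||u - t||^2.
u = np.linalg.solve(A(p), s)

# Adjoint (backward) field. Reciprocity means A.T == A, so the backward
# solve uses the SAME propagator as the forward one.
lam = np.linalg.solve(A(p).T, -(u - t))

# Adjoint-state gradient: dL/dp_i = lam_i * u_i, a purely local product.
grad_adjoint = lam * u

# Finite-difference check that the backward field really carries the gradient.
eps = 1e-6
grad_fd = np.zeros(n)
for i in range(n):
    dp = np.zeros(n)
    dp[i] = eps
    L_plus = 0.5 * np.sum((np.linalg.solve(A(p + dp), s) - t) ** 2)
    L_minus = 0.5 * np.sum((np.linalg.solve(A(p - dp), s) - t) ** 2)
    grad_fd[i] = (L_plus - L_minus) / (2 * eps)

print(np.max(np.abs(grad_adjoint - grad_fd)))  # ~1e-8
```

The "specific conditions" are visible in the code: symmetry (reciprocity) of A, steady-state linearity, and a parameter that enters locally. Break any of them and the backward field no longer hands you the gradient for free.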

More solid

Cortex has segregated instructive signals

Recent dendritic work supports the idea that local plasticity can be modulated by task-conditioned signals that are spatially separated from ordinary forward drive.
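
One way to read that claim as an update rule, as a hedged sketch only: a three-factor rule in which an instructive signal arriving on a separate channel gates an otherwise Hebbian-looking local update. The names (gate, pre, post) are illustrative; real dendritic compartmentalization is far richer than this.

```python
import numpy as np

def three_factor_update(W, pre, post, gate, lr=1e-2):
    # Hebbian in appearance: an outer product of post- and presynaptic
    # activity. The instructive signal `gate` is routed separately (think
    # apical vs. basal compartments) and scales plasticity per neuron,
    # without ever mixing into the ordinary forward drive `pre`.
    return W + lr * np.outer(gate * post, pre)

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 3)) * 0.1
W = three_factor_update(W, pre=rng.normal(size=3),
                        post=rng.normal(size=5),
                        gate=rng.normal(size=5))
```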

Plausible hypothesis

Learning needs credit transport

Any nontrivial learner needs a way to move update-relevant information through the system. Digital backprop is one realization; dendritic mismatch dynamics or adjoint fields may be others.
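
To make "one realization among others" concrete, a sketch contrasting two transport channels for the same local update: exact transport through the transposed forward weights (digital backprop) versus a fixed random feedback pathway (feedback alignment, in the spirit of the Lillicrap et al. work the archived response below engages with). Only the transport differs; the local update rule is identical in form.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(size=(n_hid, n_in)) * 0.3
W2 = rng.normal(size=(n_out, n_hid)) * 0.3
B  = rng.normal(size=(n_hid, n_out)) * 0.3   # fixed random feedback pathway

x = rng.normal(size=n_in)
y = rng.normal(size=n_out)

h = np.tanh(W1 @ x)
e = W2 @ h - y                               # output error

# Channel 1: exact transport -- the error rides back through W2.T (backprop).
delta_bp = (W2.T @ e) * (1.0 - h**2)

# Channel 2: approximate transport -- the error rides back through a fixed
# random pathway B that never sees the forward weights (feedback alignment).
delta_fa = (B @ e) * (1.0 - h**2)

# Identical local update rule; only how the error was transported differs.
dW1_bp = np.outer(delta_bp, x)
dW1_fa = np.outer(delta_fa, x)
```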

Plausible hypothesis

Wave language can be useful

The language of waves, reciprocity, and impedance can sometimes reveal real design constraints. But the vocabulary becomes misleading when it is treated as a universal theorem without explicit conditions.
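
One example of the vocabulary earning its keep, stated with its conditions attached: for a linear medium (or transmission line) at a single interface between impedances Z1 and Z2, the standard result

```latex
r = \frac{Z_2 - Z_1}{Z_2 + Z_1}, \qquad
R = |r|^2 \;\;\text{(reflected power fraction)}, \qquad
1 - |r|^2 \;\;\text{(transmitted power fraction)}
```

is a hard bound on how much signal power, and hence how much credit-carrying information, crosses the boundary. Drop the stated conditions and the same words become decoration.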

Open question

Exact equivalence remains open

Reflection is not yet the same thing as a gradient. A true derivation would need explicit dynamics, explicit boundary conditions, and a proof that the physical backward mode matches the adjoint.
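
As a sketch of what that checklist means, assuming the simplest scalar wave model with a local speed parameter c(x) (an illustrative choice, not a claimed result):

```latex
\begin{align*}
  &\text{Forward:} && \partial_t^2 u - c(x)^2 \nabla^2 u = s(x,t),
     \qquad u(0) = \partial_t u(0) = 0,\\
  &\text{Loss:}    && L = \int_0^T\!\!\int_\Omega \ell(u, x, t)\, dx\, dt,\\
  &\text{Adjoint:} && \partial_t^2 \lambda - \nabla^2\!\big(c(x)^2 \lambda\big)
     = -\,\partial_u \ell,
     \qquad \lambda(T) = \partial_t \lambda(T) = 0,\\
  &\text{Gradient:} && \frac{\delta L}{\delta c(x)}
     = -\int_0^T 2\, c(x)\, \lambda\, \nabla^2 u \; dt.
\end{align*}
```

Only for spatially constant c does the adjoint reduce to the same wave equation run backward in time; once c(x) varies, the adjoint operator ∇²(c²λ) differs from c²∇²λ, which is exactly where a physically reflected mode and the true adjoint can come apart.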

Updated notes

Working note

Hebbian Appearance, Instructive Signals, and Physical Credit Transport

The most up-to-date note. Reframes the project around credit transport, dendritic instructive signals, and a self-consistent continuous-time Part VI.

Exploratory note

Backpropagation, Adjoint Fields, and Physical Transport Constraints

What physical systems really show, what a genuine derivation would require, and how this branch connects back to transport costs and the larger intelligence thesis.

Speculative extension

Sleep, Replay, and Offline Credit Reorganization

A restrained reinterpretation of the sleep note: replay, renormalization, and associative exploration are plausible; literal global impedance optimization remains unproven.

Archived / under revision

Archive note

Chinese short version: Deriving BP from the Wave Equation

Kept online as a historical path, but now marked as under revision. The stronger “direct derivation” language is being retired.

Archive note

Short English note

Now reframed as an archive pointer rather than a stand-alone derivation.

Archive note

Response to Lillicrap–Hinton

Older argument preserved only as background. The current position is more cautious and cortex-first.

What this branch is for

This branch is not a finished theory of intelligence. It is an implementation-level investigation nested inside a larger picture. The bigger theory still has to explain control-sufficient abstraction, endogenous viability, multiscale feedback, and why distributed systems organize the way they do. The credit-transport branch asks one narrower question: how does a learning system move and localize update information?

Working research direction. The most promising path right now is not “prove the brain literally does reflected-wave backprop.” It is “build a physics of credit transport that can connect digital backprop, reciprocal physical media, and compartmentalized instructive dynamics in cortex.”