Wave / Credit Transport Research Notes
A narrower branch inside the broader intelligence agenda: how learning-relevant signals move through finite physical systems, and when wave / adjoint language is genuinely informative.
Layered view of the evidence
Physical media can compute
Wave and field-like substrates can realize neural-network-like inference or temporal computation. This makes “dynamic propagation system” a serious implementation lens rather than a metaphor.
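To see why this is more than a metaphor, here is a minimal sketch, with purely illustrative names and constants, of a discretized 1-D scalar wave equation. Each time step is a fixed map from the current field state to the next one, the same computational shape as a recurrent network's state update, with the speed profile c(x) playing the role of trainable parameters.

```python
import numpy as np

# Leapfrog discretization of the 1-D scalar wave equation.
# Each step maps (u_prev, u_now) -> u_next: a linear recurrence that,
# in a physical medium, is carried out by the physics rather than a
# processor. np.roll gives periodic boundaries; the constants satisfy
# the CFL stability condition c * dt / dx < 1.

n, dt, dx = 128, 1e-3, 1e-2
c = np.full(n, 1.0)            # wave-speed profile: the "parameters"
u_prev = np.zeros(n)
u_now = np.zeros(n)
u_now[n // 2] = 1.0            # an input injected into the medium

for _ in range(500):
    lap = np.roll(u_now, 1) - 2 * u_now + np.roll(u_now, -1)
    u_next = 2 * u_now - u_prev + (c * dt / dx) ** 2 * lap
    u_prev, u_now = u_now, u_next

readout = u_now[::16]          # sampled field values as output features
```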
Some substrates can carry gradients
In reciprocal or adjoint-friendly physical systems, backward fields can locally encode exact or approximate gradients. That is a hard engineering fact, but only under specific conditions.
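For reference, the condition being pointed at is the standard adjoint-state identity; the notation below is generic, not taken from any particular note. For dynamics $\dot{u} = f(u, \theta)$ with loss $L = \int_0^T \ell(u)\,dt$, the adjoint field defined by

$$
\dot{\lambda} = -\Big(\frac{\partial f}{\partial u}\Big)^{\!\top}\lambda \;-\; \Big(\frac{\partial \ell}{\partial u}\Big)^{\!\top},
\qquad \lambda(T) = 0,
$$

carries exactly the gradient information, since

$$
\frac{dL}{d\theta} \;=\; \int_0^T \lambda^{\top}\,\frac{\partial f}{\partial \theta}\,dt .
$$

"Adjoint-friendly" then means the substrate can physically run this backward equation with the right source and terminal conditions; that is the specific requirement, not a generic property of waves.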
Cortex has segregated instructive signals
Recent dendritic work supports the idea that local plasticity can be modulated by task-conditioned signals that are spatially separated from ordinary forward drive.
Learning needs credit transport
Any nontrivial learner needs a way to move update-relevant information through the system. Digital backprop is one realization; dendritic mismatch dynamics or adjoint fields may be others.
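As a reminder of what transport means in the digital realization, here is a deliberately minimal NumPy sketch (sizes and values are illustrative). The backward pass carries an error signal upstream through the transposed forward weights, and each layer converts the passing signal into a local update.

```python
import numpy as np

# Minimal two-layer network, written out by hand so the "transport" is
# visible: credit enters at the output, travels back through the same
# weights that shaped the forward pass, and each layer turns the
# passing signal into a local weight update.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # input -> hidden
W2 = rng.normal(size=(1, 4))   # hidden -> output

x = rng.normal(size=3)
target = np.array([1.0])

# Forward pass: ordinary drive through the network.
h_pre = W1 @ x
h = np.tanh(h_pre)
y = W2 @ h

# Credit at the output: gradient of 0.5 * (y - target)^2 w.r.t. y.
err = y - target

# Credit transport: transposed weights carry the signal upstream.
dW2 = np.outer(err, h)                               # local update, layer 2
delta_h = (W2.T @ err) * (1.0 - np.tanh(h_pre)**2)   # transported credit
dW1 = np.outer(delta_h, x)                           # local update, layer 1
```

Any physical alternative, dendritic or adjoint-field, would have to supply the same two operations: carry the signal, and localize the update.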
Wave language can be useful
The language of waves, reciprocity, and impedance can sometimes reveal real design constraints. But the vocabulary becomes misleading when its theorems are invoked as if they held universally, without stating their conditions.
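A concrete instance of "explicit conditions": Lorentz/Rayleigh reciprocity, the theorem that licenses swapping source and receiver, reads in the frequency domain

$$
G(x_a, x_b; \omega) \;=\; G(x_b, x_a; \omega),
$$

and it holds only for linear, time-invariant media with symmetric material tensors. Magnetic biasing, time modulation, and nonlinearity all break it, so any credit-transport argument that leans on reciprocity inherits exactly those hypotheses.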
Exact equivalence remains open
Reflection is not yet the same thing as a gradient. A true derivation would need explicit dynamics, explicit boundary conditions, and a proof that the physical backward mode matches the adjoint.
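To fix ideas, here is the shape such a derivation would have to take, written in the adjoint-state form standard in full-waveform inversion (the notation is ours: medium parameter $m = 1/c^2$, source $s$, receivers at $x_r$, data $d_r$, and misfit $J = \tfrac{1}{2}\sum_r \int_0^T (u(x_r,t) - d_r(t))^2\,dt$):

$$
m\,\partial_t^2 u - \nabla^2 u = s,
\qquad
m\,\partial_t^2 v - \nabla^2 v = -\sum_r \big(u - d_r\big)\,\delta(x - x_r),
\qquad v(T) = \partial_t v(T) = 0,
$$

with gradient (up to sign convention)

$$
\frac{\partial J}{\partial m}(x) \;=\; \int_0^T \partial_t^2 u(x,t)\;v(x,t)\,dt .
$$

The missing step is the one named above: a proof that a physically generated backward field, a reflection, equals $v$ with its residual source and terminal conditions, rather than merely resembling it.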
Updated notes
Hebbian Appearance, Instructive Signals, and Physical Credit Transport
The most up-to-date note. Reframes the project around credit transport, dendritic instructive signals, and a self-consistent continuous-time Part VI.
Backpropagation, Adjoint Fields, and Physical Transport Constraints
What physical systems really show, what a genuine derivation would require, and how this branch connects back to transport costs and the larger intelligence thesis.
Sleep, Replay, and Offline Credit Reorganization
A restrained reinterpretation of the sleep note: replay, renormalization, and associative exploration are plausible; literal global impedance optimization remains unproven.
Archived / under revision
Chinese short version: deriving BP from the wave equation
Kept online as a historical path, but now marked as under revision. The stronger “direct derivation” language is being retired.
Short English note
Now reframed as an archive pointer rather than a stand-alone derivation.
Response to Lillicrap–Hinton
Older argument preserved only as background. The current position is more cautious and cortex-first.
What this branch is for
This branch is not a finished theory of intelligence. It is an implementation-level investigation nested inside a larger picture. The bigger theory still has to explain control-sufficient abstraction, endogenous viability, multiscale feedback, and why distributed systems organize the way they do. The credit-transport branch asks one narrower question: how does a learning system move and localize update information?