Working note

Hebbian Appearance, Instructive Signals, and Physical Credit Transport

A revised hypothesis about why local plasticity can look Hebbian while still carrying task-dependent update information — and where wave / adjoint language may genuinely help.

What changed. Earlier versions of this project claimed too much: that backpropagation had already been derived from wave reflection, and that the biological plausibility problem was basically solved. This note steps back. The stronger theorem is not established. The narrower and more promising problem is credit transport.

1. What already seems real

Engineering fact: wave-like substrates can compute. Physical wave systems can implement recurrent or field-based computation. That makes “neural network as propagation system” more than a metaphor at the implementation level.

Engineering fact: adjoint-friendly media can carry gradients. In some reciprocal photonic or mechanical systems, local measurements of backward fields can yield exact or near-exact gradients. This is a real result, but it depends on explicit physical conditions.

Neuroscience fact: forward and instructive signals can be separated. Recent dendritic work supports a picture in which perisomatic / basal pathways carry ordinary forward drive while distal apical pathways carry task-conditioned instructive information that is causally important for learning.

Those three facts matter. They show that update-relevant information can, in principle, be a physically transported quantity, and that cortex may use a specialized compartmentalized version of this idea. But they do not yet imply that “reflection equals gradient” in the general case.

2. The more careful core hypothesis

The strongest useful version of the project is:

  1. Any capable learner needs a way to transport credit-relevant information. The system must somehow mark which internal states or couplings should change, and in what direction.
  2. That transported quantity need not look like textbook digital backprop. It may be an exact gradient, an approximate local mismatch, a target-like activity signal, or a neuron-specific instructive field.
  3. Wave / field language is one implementation-level candidate. In the right substrate, backward modes, reciprocity, or interference may provide the physical carrier.

Implementation, not ontology. The useful claim is not “all intelligence is literally wave reflection.” It is “some learning systems are better understood as dynamic propagation media, and that perspective may reveal real constraints on how credit is moved.”

3. Why the stronger argument overreached

The earlier claim was that backpropagation had already been derived from wave reflection in general. The facts in Section 1 support something weaker: exact gradient transport has been demonstrated only in specific reciprocal media under explicit physical conditions, and the dendritic evidence shows that forward and instructive streams are separated, not that the instructive stream is an adjoint. Treating those ingredients as a finished theorem skipped the bridge made explicit in Section 5.5.

4. A cortex-first mental model

For an ML reader, a pyramidal neuron can be treated as a rough two-port unit:

$$ b_l \;\text{= basal forward drive}, \qquad a_l \;\text{= apical instructive drive}, \qquad s_l \;\text{= somatic output}. $$

That means local plasticity can look Hebbian on the surface while still being shaped by a hidden teacher-like component. In modern ML terms, the neuron may have something closer to a private gradient hook or control side-channel than a purely correlation-based rule.

If the apical component is informative, then a synapse can update using only local quantities and still implement something more than naïve Hebbian correlation.

5. Part VI — a self-consistent minimal formalization

The old Part VI had a real bug: the steady-state limit dropped the feedforward term. A safer formulation is a continuous-time, two-compartment network in which the forward and instructive streams remain present at equilibrium.

5.1 Minimal dynamics

$$ \tau_b \dot b_l = -b_l + W_l s_{l-1} $$

$$ \tau_a \dot a_l = -a_l + B_l e_{l+1} $$

$$ s_l = \phi\!\left(b_l + \beta a_l\right) $$

Interpretation:

  1. The basal compartment \(b_l\) relaxes toward the feedforward drive \(W_l s_{l-1}\), so the forward term remains present at equilibrium.
  2. The apical compartment \(a_l\) relaxes toward the feedback-carried instructive input \(B_l e_{l+1}\).
  3. The somatic output \(s_l\) combines the two streams, with \(\beta\) setting how strongly the instructive stream nudges the forward computation.
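These dynamics are easy to check numerically. The sketch below (my own toy example, not from the note; \(\phi = \tanh\), random weights, Euler integration) confirms that the fixed point keeps the feedforward term, i.e. \(b_l = W_l s_{l-1}\) and \(a_l = B_l e_{l+1}\) at equilibrium:

```python
# Toy check of the two-compartment dynamics (assumed sizes and phi = tanh):
# at the fixed point, tau_b * db/dt = 0 gives b_l = W_l s_{l-1}, so the
# feedforward term is not dropped at steady state.
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh                      # somatic nonlinearity (example choice)
beta = 0.1                         # apical coupling strength

W = rng.normal(size=(4, 3)) * 0.5  # forward weights W_l
B = rng.normal(size=(4, 2)) * 0.5  # feedback weights B_l
s_prev = rng.normal(size=3)        # fixed presynaptic activity s_{l-1}
e_next = rng.normal(size=2)        # fixed instructive input e_{l+1}

b = np.zeros(4)
a = np.zeros(4)
dt, tau_b, tau_a = 0.01, 1.0, 1.0
for _ in range(5000):              # integrate ~50 time constants
    b += dt / tau_b * (-b + W @ s_prev)
    a += dt / tau_a * (-a + B @ e_next)
s = phi(b + beta * a)              # somatic output at (near) equilibrium

# residual of the basal fixed-point condition b = W s_{l-1}
forward_error = float(np.max(np.abs(b - W @ s_prev)))
```
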

5.2 Output objective and top-level signal

$$ \mathcal{L} = \mathcal{L}(s_L, y), \qquad e_L := \nabla_{s_L}\mathcal{L}. $$

For hidden layers, we do not assume by fiat that cortex computes textbook backprop. Instead we leave the hidden instructive dynamics explicit:

$$ e_l = D_l\big(s_l, a_l, e_{l+1}; \theta_D\big). $$

Here \(D_l\) is the local rule or dendritic mechanism that generates a useful hidden-layer instructive signal. Exact backprop is recovered only in the special case where \(D_l\) happens to implement the adjoint recursion.
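To make the special case concrete, here is a minimal sketch (my own, with \(\phi = \tanh\) and assumed shapes) of a \(D_l\) that implements the adjoint recursion \(e_l = W_{l+1}^{\top}\big(\phi'(b_{l+1}) \odot e_{l+1}\big)\), checked against a finite-difference gradient of a toy squared loss:

```python
# Sketch: one candidate D_l, the exact adjoint recursion, verified by
# finite differences. Shapes, loss, and nonlinearity are illustrative
# assumptions, not claims about cortex.
import numpy as np

def D_adjoint(b_next, W_next, e_next):
    """Special case of D_l: e_l = W_{l+1}^T (phi'(b_{l+1}) * e_{l+1}),
    with e defined as the gradient w.r.t. somatic output s and phi = tanh."""
    phi_prime = 1.0 - np.tanh(b_next) ** 2
    return W_next.T @ (phi_prime * e_next)

rng = np.random.default_rng(1)
W_next = rng.normal(size=(2, 3))
s_l = rng.normal(size=3)
y = rng.normal(size=2)

def loss(s):
    # toy output objective L(s_L, y) = 0.5 ||tanh(W s) - y||^2
    return 0.5 * np.sum((np.tanh(W_next @ s) - y) ** 2)

b_next = W_next @ s_l
e_next = np.tanh(b_next) - y          # e_{l+1} = grad of loss w.r.t. s_{l+1}
e_l = D_adjoint(b_next, W_next, e_next)

# central finite differences of dL/ds_l for comparison
eps = 1e-6
num = np.array([
    (loss(s_l + eps * np.eye(3)[i]) - loss(s_l - eps * np.eye(3)[i])) / (2 * eps)
    for i in range(3)
])
err = float(np.max(np.abs(e_l - num)))
```

Any other choice of `D_adjoint` body gives a different member of the \(D_l\) family; only this one reproduces the exact gradient.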

5.3 Local mismatch and local update

Define the somatic departure from purely forward activity as

$$ m_l := s_l - \phi(b_l). $$

$$ \text{For small } \beta, \quad m_l \approx \beta\, \phi'(b_l) \odot a_l. $$

This quantity is useful because it isolates the task-conditioned component. A natural local forward-weight update is then

$$ \Delta W_l = -\eta\, m_l s_{l-1}^{\top}. $$

That rule is local, Hebbian-looking, and task-modulated. It does not automatically equal exact backprop. But it defines a coherent family of local learning rules whose quality depends on how informative \(a_l\) is.
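The rule and its small-\(\beta\) linearization can be written out directly. The sketch below (toy numbers and \(\phi = \tanh\), my own illustration) computes \(m_l\), forms \(\Delta W_l = -\eta\, m_l s_{l-1}^{\top}\), and checks that \(m_l \approx \beta\, \phi'(b_l) \odot a_l\) for small \(\beta\):

```python
# Sketch of the local update: every quantity used is available at the
# synapse (presynaptic activity, somatic output, basal drive). The apical
# signal a is left arbitrary here; its informativeness is what varies
# across the family of rules.
import numpy as np

rng = np.random.default_rng(2)
phi = np.tanh
beta, eta = 0.01, 0.1

s_prev = rng.normal(size=3)           # s_{l-1}
W = rng.normal(size=(4, 3)) * 0.5     # W_l
b = W @ s_prev                        # equilibrium basal drive
a = rng.normal(size=4)                # whatever the apical stream delivers
s = phi(b + beta * a)                 # somatic output

m = s - phi(b)                        # departure from forward-only activity
dW = -eta * np.outer(m, s_prev)       # local, Hebbian-looking, task-modulated

# small-beta approximation m ≈ beta * phi'(b) * a
m_lin = beta * (1.0 - np.tanh(b) ** 2) * a
approx_err = float(np.max(np.abs(m - m_lin)))
```

Note that if `a` is zero the update vanishes: with no instructive drive, the rule does nothing rather than reverting to raw Hebbian correlation.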

5.4 A cleaner mismatch language

Instead of using an unknown hidden target \(x_l^*\) and calling its distance an “impedance,” the safer move is to distinguish two notions:

$$ Z_l^{\text{local}} := \|m_l\|^2 \quad \text{or} \quad Z_l^{\text{free-nudged}} := \|s_l^{\text{free}} - s_l^{\text{nudged}}\|^2. $$

That keeps the optimization claim honest: the learner is reducing a local task-relevant mismatch, not yet a literal transmission-line impedance unless a physical derivation has been supplied.

5.5 Where exact backprop would live

The exact-backprop limit can be stated conditionally:

$$ \text{If } a_l \propto J_{l+1:L}^{\top} e_L, \quad \text{then } m_l \text{ becomes a local proxy for the adjoint signal.} $$

That is the real missing bridge. A true wave theorem would need to prove that some physical backward mode actually supplies that adjoint quantity under explicit symmetry, reciprocity, or linearization conditions.
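The conditional statement can be tested in miniature. The sketch below (a two-layer toy of my own, not a physical derivation) delivers the adjoint signal apically, \(a_1 = J^{\top} e_L\) with \(J\) the Jacobian of the layer above, and checks that the local update \(-m_1 s_0^{\top}\) then aligns with the exact gradient-descent direction as \(\beta \to 0\):

```python
# Toy check: when the apical stream carries the adjoint signal, the local
# rule's update direction matches exact backprop (cosine similarity ~ 1).
# Network sizes, tanh, and the squared loss are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
phi = np.tanh
beta = 1e-3

W1 = rng.normal(size=(4, 3)) * 0.5
W2 = rng.normal(size=(2, 4)) * 0.5
x = rng.normal(size=3)                         # s_0
y = rng.normal(size=2)

b1 = W1 @ x
s1_free = phi(b1)                              # forward-only hidden activity
s2 = phi(W2 @ s1_free)
e_L = s2 - y                                   # grad of 0.5||s2 - y||^2 w.r.t. s2

# adjoint signal delivered apically: a1 = J^T e_L with J = ds2/ds1
a1 = W2.T @ ((1.0 - s2 ** 2) * e_L)

s1 = phi(b1 + beta * a1)                       # nudged somatic output
m1 = s1 - s1_free
dW_local = -np.outer(m1, x)                    # local rule (eta = 1)

grad_exact = np.outer((1.0 - s1_free ** 2) * a1, x)  # dL/dW1 via backprop
desc_exact = -grad_exact                       # exact gradient-descent step
cos = float(np.sum(dW_local * desc_exact) /
            (np.linalg.norm(dW_local) * np.linalg.norm(desc_exact)))
```

The open question in the text is precisely whether any physical backward mode can supply `a1` this way; the code only shows that if one does, the local rule inherits the gradient direction.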

6. Where wave language can honestly re-enter

Once the local formalism is in place, wave / field language can be reintroduced more carefully. Suppose a substrate supports a physically realizable backward mode \(\lambda\) generated by a boundary perturbation at the output. Then the interesting question is whether

$$ a_l = \mathcal{A}_l(\lambda; \text{substrate parameters}) $$

approximates the adjoint signal needed for useful credit assignment. In reciprocal media this can happen. In arbitrary biological tissue it remains an open empirical and theoretical question. So wave language is best treated as a candidate implementation map, not yet the universal derivation.

7. Discriminative predictions

Good predictions should separate this picture from generic “there is feedback” stories.

8. Relation to the broader intelligence thesis

This note tackles only one subproblem inside a larger theory of intelligence. The broader agenda still has to explain why some internal structures are control-sufficient, how objectives become endogenous, how multiscale systems preserve viability, and why transport costs shape architecture. The present contribution is narrower: it asks what kinds of physically plausible mechanisms can move credit inside a learner.

Most promising reframing. The productive question is not “has wave reflection already proven what learning is?” It is “can we build a physics of credit transport that connects digital backprop, reciprocal physical media, and compartmentalized instructive dynamics in cortex?”
