Computational Reasoning: From Straight Lines to Synchronised Minds

Begin with a simple promise. If you can picture a straight line and a smooth curve, you already own the mental toolkit for half the world’s problems. Arithmetic progressions move by steady steps; geometric progressions grow by steady factors. Put them on Cartesian axes and the difference becomes viscerally clear: one stacks like bricks, the other blooms like compounding interest. That contrast is not classroom trivia. It explains why salaries feel predictable while markets do not, why wear and tear creeps but ecosystems tip, and why modest errors in multiplicative domains turn into headlines.
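
To make the contrast concrete, here is a minimal sketch in plain Python; the starting value, step size, and growth factor are illustrative assumptions, not figures from the text.

    # Additive versus multiplicative growth: the same number of steps, very different endpoints.
    start, step, factor, n = 100.0, 10.0, 1.10, 20

    arithmetic = [start + k * step for k in range(n + 1)]     # a_k = a_0 + k*d
    geometric  = [start * factor ** k for k in range(n + 1)]  # g_k = g_0 * r**k

    print(f"after {n} steps: arithmetic = {arithmetic[-1]:.0f}, geometric = {geometric[-1]:.0f}")
    # The arithmetic run gains 200 in total; the geometric run multiplies the start
    # by roughly 6.7, which is why small multiplicative errors compound into headlines.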

Shift from points to arrows and the view deepens. Vectors make change directional. Add a vector and you translate a scene; scale it and you dilate the scene; rotate and shear and you discover that “context” is not a metaphor but a transformation. Linear algebra then tidies these moves into operators, and eigenvectors quietly take centre stage as directions that keep their heading while the world reconfigures around them. In networks they expose influence; in data they expose structure; in behaviour they expose the habits that resist noise. Keep that picture in mind because it is the bridge from mathematics to mind.
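
A short NumPy sketch of those moves, with made-up numbers: translation, scaling, rotation, and an eigen-direction that keeps its heading while the map rescales it.

    import numpy as np

    v = np.array([2.0, 1.0])

    translated = v + np.array([3.0, -1.0])        # add a vector: the scene shifts
    scaled = np.diag([2.0, 0.5]) @ v              # scale: the scene dilates along each axis

    theta = np.pi / 6                             # rotate by 30 degrees
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    rotated = R @ v

    # A linear map and its invariant directions: eigenvectors keep their heading,
    # and the eigenvalues say how much each direction is stretched.
    A = np.array([[2.0, 1.0],
                  [0.0, 3.0]])
    eigenvalues, eigenvectors = np.linalg.eig(A)
    print(eigenvalues)         # [2. 3.]
    print(eigenvectors[:, 0])  # [1. 0.] -- A leaves this direction alone, scaling it by 2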

Now widen the canvas to tensors. Most modern data are not lists but lattices, width by height by colour, time by channel by subject. Tensors let us shift and stretch along many dimensions at once, so the old duo of “add” and “multiply” becomes a disciplined choreography across spaces, not a slog along a line. This is why contemporary modelling feels geometric: we are forever choosing coordinate systems where problems look simpler, then learning the few invariants worth protecting while everything else flexes.
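
A sketch of what that choreography looks like in NumPy, assuming a toy stack of frames; the shapes and values are invented for illustration.

    import numpy as np

    # Toy data lattice: frames x height x width x colour channel.
    frames = np.random.rand(8, 32, 32, 3)

    # "Add" becomes a per-channel shift, broadcast across every pixel of every frame.
    shifted = frames + np.array([0.1, 0.0, -0.1])          # shape stays (8, 32, 32, 3)

    # "Multiply" becomes a per-frame gain, broadcast across space and colour.
    gain = np.linspace(1.0, 2.0, 8).reshape(8, 1, 1, 1)
    rescaled = gain * frames

    # Choosing coordinates where the problem looks simpler: average out space,
    # keep time and colour as the axes that matter.
    summary = rescaled.mean(axis=(1, 2))
    print(summary.shape)                                   # (8, 3)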

Enter gradients as the navigation system. A gradient points where change pays off fastest; constraints tell you which moves are allowed. Lagrange multipliers fuse the two in static settings, balancing ambition against rules; Hamiltonians do the same in dynamical settings, keeping score of energy while systems evolve. The jargon sounds grand, yet the intuition is plain. Optimisation is handrail plus slope: know what you are allowed to touch, then move in the direction that improves the objective fastest without letting go of the rail. This is how machines learn weights; it is also how humans learn skill: iterate, respect limits, follow feedback.
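
Handrail plus slope, as a minimal sketch: projected gradient descent on an assumed toy objective, with the unit circle standing in for the constraint. The objective, step size, and iteration count are assumptions for illustration.

    import numpy as np

    # Minimise f(p) = ||p - (2, 1)||^2 while staying on the rail x^2 + y^2 = 1.
    target = np.array([2.0, 1.0])

    def grad_f(p):
        return 2.0 * (p - target)                 # slope: direction of fastest improvement

    def project(p):
        return p / np.linalg.norm(p)              # handrail: pull back onto the circle

    p = np.array([0.0, 1.0])                      # feasible starting point
    for _ in range(200):
        p = project(p - 0.1 * grad_f(p))          # step downhill, then respect the rail

    print(p)  # approaches (2, 1)/sqrt(5); there, grad f is parallel to the constraint
              # normal, which is exactly the Lagrange multiplier condition.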

With that scaffolding, neurocomputation stops looking mystical and starts looking measurable. Spike trains march like arithmetic sequences; synaptic potentiation can climb like a geometric series; population activity lives in a high-dimensional tensor that slices by time and task; learning descends loss landscapes via gradients while constraints (metabolic, anatomical, social) shape the feasible set. When oscillations appear, the story becomes orchestral. Phase, frequency, and coupling rewrite the same mathematics in time, and suddenly synchrony is not romantic prose but a property of graph structures and coupling strengths. That is why expander graphs matter: sparse yet well connected, they spread influence quickly without bottlenecks, a topology brains and power grids both seem to prefer when robustness beats elegance.
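
A sketch of that orchestration under explicit assumptions: Kuramoto-style phase oscillators coupled over a sparse random regular graph (random regular graphs are expanders with high probability), with synchrony read off the standard order parameter. The coupling strength, frequency spread, and graph size are illustrative.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)
    n, degree, K, dt = 100, 4, 2.0, 0.01

    # Sparse yet well connected: a random 4-regular graph on 100 nodes.
    A = nx.to_numpy_array(nx.random_regular_graph(degree, n, seed=0))

    omega = rng.normal(0.0, 0.5, n)                # heterogeneous natural frequencies
    theta = rng.uniform(0.0, 2.0 * np.pi, n)       # random initial phases

    def order_parameter(phases):
        return np.abs(np.exp(1j * phases).mean())  # ~0.1 when incoherent, 1.0 when locked

    for _ in range(5000):
        # Each oscillator is pulled toward the phases of its graph neighbours.
        pull = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + (K / degree) * pull)

    print(round(order_parameter(theta), 2))        # typically well above the incoherent baseline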

The commercial implications are unvarnished. Linear thinking works where change is additive; exponential thinking is mandatory where change compounds. Organisations that model costs additively and revenues multiplicatively go broke on variance; those that model both as multiplicative keep buffers and survive. In engineering teams, you hire for frame-shifting: people who know when to translate a problem, when to rescale it, and when to lock on to an eigen-direction and ride it. In product, you expose gradients to users (clear affordances, reversible actions) and limit their action space when the underlying dynamics are unstable. In research, you prefer architectures that co-locate memory and compute, because shipping tensors is dearer than updating weights.
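
A hedged Monte Carlo sketch of why variance bankrupts additive mental models in multiplicative worlds; the swing size and horizon are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    runs, periods, start = 10_000, 200, 100.0

    # Each period swings +25% or -25% with equal probability. The average step is
    # zero either way, but only one of the two processes keeps its footing.
    swings = rng.choice([0.25, -0.25], size=(runs, periods))

    additive = start + (start * swings).sum(axis=1)        # steps add
    compounding = start * (1.0 + swings).prod(axis=1)      # steps multiply

    print("additive median:   ", round(float(np.median(additive)), 1))
    print("compounding median:", round(float(np.median(compounding)), 2))
    # A +25%/-25% pair nets -6.25%, so the compounding median collapses toward zero:
    # the volatility drag that additive cost models fail to price.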

Ethics slots in naturally once you accept that most high-stakes systems are constrained optimisers, not free agents. Declare the objective. Declare the rails. Log the descent. If a policy moves downhill on one metric while climbing a cliff on another, demand the Lagrange bookkeeping in public. This is not theatre; it is the minimum for governable intelligence, human or artificial. The same clarity holds for neuroscience. “Explaining behaviour” means showing the geometry: the subspaces where signals live, the couplings that synchronise them, and the constraints that keep the whole system inside a safe energy envelope.
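
What "declare the objective, declare the rails, log the descent" might look like as a minimal sketch; the objective, the rail, and the penalty weight are all assumptions, with a quadratic penalty standing in for full Lagrange bookkeeping.

    import numpy as np

    def objective(x):                       # declared objective: minimise (x - 3)^2
        return (x - 3.0) ** 2

    def violation(x):                       # declared rail: x must stay at or below 1
        return max(0.0, x - 1.0)

    x, step, weight = 0.0, 0.05, 10.0       # the penalty weight plays the multiplier's role
    audit_log = []

    for it in range(60):
        grad = 2.0 * (x - 3.0) + 2.0 * weight * violation(x)
        x -= step * grad
        audit_log.append((it, round(x, 3), round(objective(x), 3), round(violation(x), 4)))

    for entry in audit_log[::15]:           # the public record: iterate, objective, violation
        print(entry)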

The charm of this chapter, if it has one, lies in how the parts rhyme. Arithmetic and geometric progressions introduce shift and scale; vectors formalise direction; tensors generalise space; gradients decide movement; multipliers encode limits; oscillations reveal coordination; expander connectivity makes cohesion cheap. String them together and you get a language that travels from lab bench to boardroom without changing its shirt. The scientist gets predictive models that do not crack when dimensionality rises. The engineer gets algorithms that converge with fewer gimmicks. The business lead gets levers that price risk correctly in multiplicative worlds. And the citizen gets systems that can be audited, not merely admired.

So the invitation is modest and practical. Learn to see whether a process adds or multiplies. Learn to feel translations versus dilations. Learn to ask which directions are invariant and worth defending. Learn to follow gradients only where constraints are explicit. Learn to value synchrony that emerges from structure over control that is shouted from the centre. Do this and “computational reasoning” stops being a slogan and becomes a craft. It turns uncertainty into a surface you can walk, complexity into a space you can reframe, and intelligence into a habit of small, reversible moves that compound. In noisy markets and messy brains alike, that is how progress actually happens.

The section frames computational reasoning as a geometry of change: arithmetic progressions model additive translation, geometric progressions multiplicative scaling; vectors add direction, tensors lift these operations into high-dimensional data; gradients supply the steepest admissible move, with Lagrange and Hamiltonian formalisms enforcing static and dynamic constraints. Neural systems are cast as constrained optimisers whose rhythms synchronise over sparse, well-connected graphs. The practical yield is governance: expose objectives, constraints, and descent paths to make complex systems auditable and robust.

Further reading
    • Boyd & Vandenberghe — Convex Optimization: Core gradients, constraints, duality.
    • Luenberger & Ye — Linear and Nonlinear Programming: Lagrange and KKT, with applications.
    • Sutton & Barto — Reinforcement Learning: Optimisation dynamics and value gradients.
    • Strang — Linear Algebra and Learning from Data: Vectors, tensors, eigenstructure.
    • Goodfellow, Bengio, Courville — Deep Learning: Tensors, backprop, regularisation.
    • Arnold — Mathematical Methods of Classical Mechanics: Hamiltonians for dynamical systems.
    • Strogatz — Nonlinear Dynamics and Chaos: Synchronisation, phase locking, stability.
    • Hoory, Linial, Wigderson — “Expander Graphs and their Applications”: Sparse connectivity and mixing.
    • Shapiro — Multivariable Calculus (MIT OCW): Gradients, Jacobians, Hessians.
    • Papoulis & Pillai — Probability, Random Variables and Stochastic Processes: Noise, signals, inference.
    • Pascanu, Mikolov, Bengio — “On the difficulty of training RNNs”: Gradient flow and constraints.
    • Software: JAX/Autograd, PyTorch, CVXPY, NetworkX — practical tooling for tensors, optimisation, graphs.