podcast

Tom Griffiths on The Laws of Thought

Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas · Apr 8, 2026

The Rational Use of Cognitive Resources - Tom Griffiths, Feb 2026

Karl Friston - The Free Energy Principle. Proposes that all living systems, from cells to brains, minimize a mathematical quantity called "free energy" to maintain their integrity and resist disorder. It acts as a unified theory of brain function, suggesting that action and perception work together to minimize sensory prediction errors, effectively making the brain a proactive prediction machine.

Algorithms to Live By - Tom Griffiths and Brian Christian. Argues that many human dilemmas—like deciding when to stop looking for an apartment or how to organise a closet—are essentially computational problems that computer scientists have already solved.

Computer science is a better guide to rationality than economics

The ability to break down problems and set goals is a product of a resource constraint. It’s a tool that emerges when you can’t reason endlessly.

Bayesian reasoning - you have your priors, you get new data, you compute a likelihood, and you update your priors to a posterior. Where do your priors come from? We have rough feelings. Intuitions. How do we learn language in the first place?
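The prior-likelihood-posterior loop can be sketched with the simplest possible case: estimating a coin's bias with a Beta prior (the choice of a coin and a conjugate Beta prior is just for illustration, not from the episode).

```python
# Minimal Bayesian updating: estimating a coin's bias.
# Prior beliefs (a Beta distribution) combine with observed data
# via the likelihood to give a posterior.

def beta_update(alpha, beta, heads, tails):
    """Conjugate update: Beta(alpha, beta) prior + binomial data."""
    return alpha + heads, beta + tails

# Start with a vague prior: Beta(1, 1) is uniform over [0, 1].
alpha, beta = 1, 1

# Observe 7 heads and 3 tails, then update.
alpha, beta = beta_update(alpha, beta, heads=7, tails=3)

# Posterior mean estimate of the coin's bias.
posterior_mean = alpha / (alpha + beta)
print(posterior_mean)  # 8/12 ≈ 0.667
```

The "where do priors come from" question is exactly the question of how `alpha` and `beta` get set before any data arrives.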

The big difference between human minds and brains and LLMs is inductive bias. Human children acquire language with about 5 years of data. LLM training requires the equivalent of 5000 years of data. What explains the 4995 years of priors found in a human child?

Kant's "synthetic a priori" judgments. Our minds are hard-wired with a framework to process reality.

The amount of data it takes to train an LLM to understand a language supports Chomsky's view of how complex an object language is.

Metalearning - training neural networks with less data by building in inductive biases, for example initial weights that make language learning possible with 5 years of data rather than 5000. Two loops: an inner loop learns each task, an outer loop optimises for initial weights that let the inner loop learn better from limited data. What biases are needed to improve language learning?
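The outer-loop/inner-loop idea can be sketched in a toy setting (my construction, not from the episode): a family of 1-D linear tasks, an inner loop that takes one gradient step per task, and an outer loop that tunes the shared initial weight so that one step of adaptation works well across the family. The outer gradient here uses finite differences for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([1.0, 2.0, 3.0])  # fixed inputs shared by all tasks

def task_loss(w, w_true):
    # Squared error for a 1-D linear task y = w_true * x.
    return float(np.mean((w * X - w_true * X) ** 2))

def inner_adapt(w0, w_true, lr=0.05):
    # Inner loop: one gradient step from the shared initialization w0.
    grad = float(np.mean(2 * (w0 * X - w_true * X) * X))
    return w0 - lr * grad

def outer_objective(w0, tasks):
    # Average post-adaptation loss across the task family.
    return np.mean([task_loss(inner_adapt(w0, t), t) for t in tasks])

# Tasks drawn around w_true = 2.0; a good initialization sits near 2.0.
tasks = rng.normal(2.0, 0.3, size=20)

# Outer loop: finite-difference gradient descent on the initialization.
w0, eps, outer_lr = 0.0, 1e-4, 0.05
for _ in range(200):
    g = (outer_objective(w0 + eps, tasks)
         - outer_objective(w0 - eps, tasks)) / (2 * eps)
    w0 -= outer_lr * g
```

After the outer loop, `w0` has drifted toward the centre of the task family: the "prior" has been learned from the distribution of tasks, which is the metalearning answer to where inductive biases come from.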

Inductive bias and generalisability are the two main differences between human and machine learning.

Marvin Minsky - built a learning neural network at Harvard, then gave up: it would have to be ridiculously large to learn anything interesting. Frank Rosenblatt, in his PhD, came up with a device for tabulating data and went on to create the perceptron, with a single layer of adjustable weights.

The maths for the backpropagation algorithm in neural nets uses the chain rule of calculus, attributed to Leibniz.
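A tiny worked instance of that chain rule in action (a two-weight toy network of my own choosing, checked against a numerical derivative):

```python
# Backpropagation as repeated chain rule:
# f(x) = sigmoid(w2 * tanh(w1 * x)); gradients flow backward layer by layer.
import math

def forward(x, w1, w2):
    h = math.tanh(w1 * x)             # hidden activation
    y = 1 / (1 + math.exp(-w2 * h))   # output (sigmoid)
    return h, y

def backward(x, w1, w2):
    # Chain rule (Leibniz): dy/dw1 = dy/dh * dh/dw1, and so on.
    h, y = forward(x, w1, w2)
    dy_dz = y * (1 - y)          # sigmoid'(z), where z = w2 * h
    dy_dw2 = dy_dz * h
    dy_dh = dy_dz * w2
    dh_dw1 = (1 - h * h) * x     # tanh'(w1 * x) * x
    dy_dw1 = dy_dh * dh_dw1
    return dy_dw1, dy_dw2

# Check the analytic gradient against a central finite difference.
x, w1, w2 = 0.5, 0.8, -1.2
g1, g2 = backward(x, w1, w2)
eps = 1e-6
num_g1 = (forward(x, w1 + eps, w2)[1] - forward(x, w1 - eps, w2)[1]) / (2 * eps)
print(abs(g1 - num_g1) < 1e-6)  # True: analytic and numeric gradients agree
```

Each `backward` line is one application of the chain rule; stacking more layers just stacks more such factors, which is all backpropagation is.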

← Back to sources