David Larpent — Notes on AI, Philosophy, and Product
-
Hardening AI Agents Against the 'Lethal Trifecta'
Mar 24, 2026
Personal AI assistants like Openclaw are fantastically powerful, and quite dangerous. Here's how to harden a personal assistant without making it useless.
-
Decision Systems
Feb 9, 2026
Decision Systems, part 3 of 3: a practical framework for becoming 'decision first' with the help of AI
-
When Decisions Stop Scaling
Feb 8, 2026
Decision Systems, part 2 of 3: The Synthesis Bottleneck
-
Ralph Loops Redux
Feb 8, 2026
Opus 4.6 sub-agents turn a solo bash loop into a squad.
-
Becoming data-ready for AI projects
Feb 7, 2026
Decision Systems, part 1 of 3: The data architecture that de-risks the ROI on expensive AI projects
-
The Unbearable Lightness of Prompting
Jan 30, 2026
On skill atrophy and doing things the hard way.
-
Ralph Loops
Jan 30, 2026
How to run Claude Code like you're Charles Montgomery Burns
Tom Griffiths on The Laws of Thought
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas · Apr 2026
The maths for the back propagation algorithm in neural nets uses the chain rule from calculus, attributed to Leibniz.
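As a concrete illustration of that chain rule at work (my own sketch, not from the episode), here is a two-layer scalar "network" y = w2 * sigmoid(w1 * x) with a squared-error loss, where the analytic backprop gradient is checked against a numerical derivative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss_and_grads(x, t, w1, w2):
    # Forward pass
    z = w1 * x
    h = sigmoid(z)
    y = w2 * h
    L = (y - t) ** 2
    # Backward pass: repeated application of the chain rule
    dL_dy = 2 * (y - t)
    dL_dw2 = dL_dy * h            # dL/dw2 = dL/dy * dy/dw2
    dL_dh = dL_dy * w2            # dL/dh  = dL/dy * dy/dh
    dL_dz = dL_dh * h * (1 - h)   # sigmoid'(z) = h * (1 - h)
    dL_dw1 = dL_dz * x            # dL/dw1 = dL/dz * dz/dw1
    return L, dL_dw1, dL_dw2

# Sanity check against a central-difference numerical derivative
x, t, w1, w2, eps = 0.5, 1.0, 0.3, -0.7, 1e-6
_, g1, _ = loss_and_grads(x, t, w1, w2)
num_g1 = (loss_and_grads(x, t, w1 + eps, w2)[0]
          - loss_and_grads(x, t, w1 - eps, w2)[0]) / (2 * eps)
```

The analytic and numerical gradients agree to within the finite-difference error.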
▶ 77:09 #ai#neural-networks
Marvin Minsky - built a learning neural network at Harvard. Gave up. Would have to be ridiculously large to learn anything interesting. Frank Rosenblatt in his PhD came up with a device for tabulating data and created the perceptron with a single layer of adjustable weights.
▶ 73:53 #ai#neural-networks
Inductive bias and generalisability are the two main differences between human and machine learning.
▶ 63:35 #cognitive-science#ai
Metalearning to train neural networks with less training data by including inductive biases. For example, initial weights to help make language learning possible with 5 years of data rather than 5000. Outer loop and inner loop: optimise for initial weights that allow the model to learn better from limited data. What biases are needed to improve language learning?
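The outer-loop/inner-loop idea can be sketched with a toy Reptile-style meta-learner (my illustration, not the method discussed on the podcast; the task family, model, and learning rates are all invented). Each "task" is estimating the mean of a distribution; the outer loop learns an initial parameter from which a few inner SGD steps on very little data already succeed:

```python
import random

def inner_loop(theta, samples, lr=0.1, steps=5):
    # A few SGD steps on squared error, starting from the shared init.
    for _ in range(steps):
        for x in samples:
            theta -= lr * 2 * (theta - x)   # d/dtheta (theta - x)^2
    return theta

def outer_loop(meta_iters=2000, meta_lr=0.05, seed=0):
    rng = random.Random(seed)
    theta0 = 0.0                            # meta-learned initialisation
    for _ in range(meta_iters):
        task_mean = 10 + rng.gauss(0, 1)    # tasks cluster around 10
        samples = [task_mean + rng.gauss(0, 1) for _ in range(3)]  # limited data
        adapted = inner_loop(theta0, samples)
        # Reptile-style update: move the init toward the adapted weights.
        theta0 += meta_lr * (adapted - theta0)
    return theta0

theta0 = outer_loop()
```

The learned initialisation ends up near 10: it encodes the prior that task means sit there, so adaptation from it needs far less data than starting from scratch, which is the role the episode assigns to inductive biases.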
▶ 61:33 #cognitive-science#learning#model-training
The amount of data it takes to train an LLM to understand a language supports Chomsky’s views of how complex an object language is.
▶ 57:16 #linguistics#ai#model-training
Kant's "synthetic a priori" judgments. Our minds are hard-wired with a framework to process reality.
▶ 52:21 #kant#philosophy#cognitive-science
The big difference between human minds and brains and LLMs is inductive bias. Human children acquire language with about 5 years of data. LLM training requires the equivalent of 5000 years of data. What explains the 4995 years of priors found in a human child?
▶ 51:33 #cognitive-science#neuroscience
Bayesian reasoning - you have your priors, you get new data, you calculate a likelihood function, and you update your priors. Where do your priors come from? We have rough feelings. Intuitions. How do we learn language in the first place?
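The prior → likelihood → posterior loop can be made concrete with a coin-bias toy (my example, not the episode's):

```python
# Inferring whether a coin is fair or heads-biased from new data.
def bayes_update(prior, likelihood, data):
    """prior: {hypothesis: P(h)}; likelihood(h, x) -> P(x | h)."""
    posterior = dict(prior)
    for x in data:
        # Multiply each hypothesis's weight by the likelihood of the datum...
        posterior = {h: p * likelihood(h, x) for h, p in posterior.items()}
        # ...then renormalise so the weights sum to 1 again.
        total = sum(posterior.values())
        posterior = {h: p / total for h, p in posterior.items()}
    return posterior

# Hypotheses: the coin's probability of heads is 0.5 (fair) or 0.8 (biased).
def likelihood(h, x):
    return h if x == "H" else 1 - h

prior = {0.5: 0.5, 0.8: 0.5}
posterior = bayes_update(prior, likelihood, ["H", "H", "H", "T", "H"])
# Four heads in five flips shifts belief toward the biased hypothesis.
```

The machinery is mechanical once the priors exist; the note's open question is exactly where the initial `prior` dictionary comes from.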
▶ 50:13 #bayes#philosophy#cognitive-science#induction
The ability to break down problems and set goals is a product of a resource constraint. It’s a tool that emerges when you can’t reason endlessly.
▶ 48:20 #cognitive-science
Algorithms to Live By - Tom Griffiths and Brian Christian. Argues that many human dilemmas, like deciding when to stop looking for an apartment or how to organise a closet, are essentially computational problems that computer scientists have already solved.
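The apartment example is the classic secretary problem, and one of the book's flagship results is the 37% rule for optimal stopping. A quick simulation (my sketch, not the authors' code) bears it out:

```python
import random

# Look at the first k of n options without committing, then take the first
# one better than everything seen so far. The optimal k is about n/e (~37%).
def success_rate(n, k, trials=20000, seed=0):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # 0 is the best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:k]) if k else n
        # First later option that beats everything in the look phase,
        # falling back to the last option if none does.
        chosen = next((r for r in ranks[k:] if r < best_seen), ranks[-1])
        wins += chosen == 0
    return wins / trials

# Stopping after ~37% of 100 options picks the single best one
# roughly 37% of the time, versus 1% for choosing at random.
rate = success_rate(n=100, k=37)
```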
▶ 44:24 #books
Computer science is a better guide to rationality than economics.
▶ 44:45 #cognitive-science#computer-science#economics
Karl Friston - The Free Energy Principle. Proposes that all living systems, from cells to brains, minimize a mathematical quantity called "free energy" to maintain their integrity and resist disorder. It acts as a unified theory of brain function, suggesting that action and perception work together to minimize sensory prediction errors, effectively making the brain a proactive prediction machine.
▶ 43:12 #cognitive-science#neuroscience
#491 – OpenClaw: The Viral AI Agent that Broke the Internet – Peter Steinberger
Lex Fridman Podcast · Mar 2026
Don’t use haiku or local models with personal agents if security is a concern. They are gullible and susceptible to prompt injection
▶ 67:10 #prompt-injection#security
The latest generation of models have a lot of post-training to avoid prompt injection attacks.
▶ 66:08 #prompt-injection#model-training
There’s a subculture of people who follow viral growth and try to quickly snaffle the name on socials etc.
▶ 42:20 #security
A lot has changed recently. Four months of frontier-model development is all it took to go from miserable attempts to convert a codebase from one language to another to a completely successful conversion (in the case in question: into the slightly obscure Zig).
▶ 22:03 #agentic-coding
Mitchell Hashimoto’s new way of writing code
The Pragmatic Engineer · Feb 2026
It doesn’t have to replace you as a person. Just find the corners of your work and let it replace those
▶ 109:36 #productivity#agentic-coding
First step for new adopters should be to reproduce your own work with an agent.
▶ 109:28 #agentic-coding
"Rejecting AI before spending serious time with it ... is like trying git for an hour and deciding you’re not more productive with it."
▶ 109:17 #agentic-coding#productivity
Always have an agent running in the background because there is always something to do. Disable its ability to notify you: remain in control of interruptions. You choose when to interrupt the agent.
▶ 106:17 #agentic-coding#productivity
Identify the tasks that require thinking and the tasks that don’t, and delegate only the tasks that don’t require thinking.
▶ 106:57 #productivity
On "harness engineering": when you see AI do a bad thing, try to build tooling the agent could have called to avoid, or course-correct away from, that bad thing.
▶ 97:54 #harness-engineering#agentic-coding#productivity
Some projects are now allowing PRs only from vouched-for contributors. Those who break trust are denounced and gated.
▶ 87:46 #open-source#agentic-coding
The flood of Claude-generated PRs is causing a lot of pain in the open source community and an inevitable backlash. Where does this lead? Blanket rejection of AI contributions to open source projects is also not the answer.
▶ 85:02 #agentic-coding#open-source
There is an unspoken compact in open source that effort in the submission is met with effort in the review. The surge in low-effort, and often low-value, AI-generated open source contributions breaks the compact.
▶ 82:59 #open-source#agentic-coding
Abundance
Ezra Klein & Derek Thompson · Feb 2026
Interesting parallel between housing permitting and AI regulation -- both suffer from a bias toward inaction dressed up as caution.
#governance#ai
Klein argues the problem isn't left vs right but vetocracy vs action. The permitting system has become a tool for blocking rather than building.
#housing#abundance
How Quickly Will A.I. Agents Rip Through the Economy?
The Ezra Klein Show · Feb 2026
"Half of all entry level white collar jobs could be replaced in the next few years. Can maybe see the hints of graduate job losses and hints of the productivity boom but it's very early."
▶ 49:17 #ai#agents#economics#social-policy
On government regulation: there has been close to no successful movement on AI regulation. Open question whether the testing can actually be effective.
▶ 46:06 #regulation#ai