Learning in the Age of AI
AI-Induced Educational Nihilism: The feeling that there is no point in learning something since AI systems can already do that thing better than you could ever conceivably learn to do it.
Is this a valid feeling? Is there a point in learning something that current AI systems can do much more quickly and adeptly than you ever could?
This question is most relevant to students currently in college, especially those in computer science courses. A naive utilitarian answer is in the negative: “AI is better at implementing binary search, writing code for a hidden Markov model, and building a frontend application in any particular framework, so there is no reason for me to learn how to do these things.” After all, if the purpose of learning is simply to be able to do something successfully and efficiently, then, yes, it seems there is no point in learning something that an AI system could do better.
However, if the purpose of learning is more personal, more existential, and more like the purpose of art, in which the journey and the artist’s transformation along that journey are as important as the work itself, then learning still has value. Value primarily in the way it changes a person.
Still, I think there is an argument along utilitarian lines for why it is functionally valuable to learn skills that largely overlap with AI’s skills.
This argument stems from some epistemic limitations on how AI systems can develop. It is likely true that when you are first learning something, the initial skills you build are a subset of the things that AI systems can already do much faster, much more cleanly, and more completely. What is less clear is whether going through these initial phases eventually leads you to develop skills that, when compounded over time, can grow beyond the capabilities of any current or future AI.
Overlaps of AI and Human Skills
Let’s represent the skill set (the combination of knowledge and action) of both humanity and AI systems as circles. In this post, I’ll define “AI systems” as all computational systems that transform information. This includes modern large language models and even simpler “non-intelligent” systems such as a computer implementing a search algorithm [1]. Fifty years ago, AI systems were just learning how to play chess and categorize fuzzy images of the digits 0 through 9. Back then, the AI skills were a subset of the human ones.

Of course, as time went on, humans improved AI systems to the point that they were capable of doing things that humans could not do. Things like beyond-grandmaster-level performance at chess or simulating millions of steps of a Metropolis-Hastings algorithm in one minute.
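To put that speed claim in perspective, here is a minimal Metropolis-Hastings sketch (targeting a standard normal with an arbitrary uniform proposal; the step size and step count are invented for illustration). Even plain, unoptimized Python should get through a million steps in a matter of seconds.

```python
import random, math, time

# Minimal Metropolis-Hastings sampler targeting a standard normal distribution.
# The proposal width and number of steps are arbitrary illustrative choices.
def metropolis_hastings(n_steps, step_size=1.0):
    x = 0.0
    samples = []
    for _ in range(n_steps):
        proposal = x + random.uniform(-step_size, step_size)
        # Symmetric proposal, so the acceptance ratio is just the ratio of
        # (unnormalized) target densities: exp(-proposal^2/2) / exp(-x^2/2)
        if random.random() < math.exp(0.5 * (x * x - proposal * proposal)):
            x = proposal
        samples.append(x)
    return samples

start = time.time()
samples = metropolis_hastings(1_000_000)
print(f"1,000,000 steps in {time.time() - start:.1f} s")
```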
However, there are still some things AI systems cannot do as well as humans. Things like discovering paradigm-shifting scientific frameworks or successfully running a business. Thus, the current relationship between the skill circles looks like

The question that follows from this is “How far can this skill expansion go?” or “How do the relationships between these circles change over time?” There are two possible scenarios:
- Educational Optimist: The abilities of AI systems will never encompass all human abilities
- Educational Pessimist: The abilities of AI systems will eventually encompass all human abilities
In the second scenario, the set relationship would look like

These are the educational nihilists’ fears made real. There is no functional value in humans building new skills because AI systems can already do everything that a human can [2].
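For concreteness, the circle pictures above can be mimicked with plain Python sets; the skill names below are invented placeholders, not measurements of anything.

```python
# Toy illustration of the three skill-circle relationships.
human = {"play chess", "classify digits", "prove new theorems", "run a business"}

ai_1975 = {"play chess", "classify digits"}                       # early AI: subset of human skills
ai_now = {"play chess", "classify digits", "fast MCMC sampling"}  # today: partial overlap
ai_pessimist = human | {"fast MCMC sampling"}                     # pessimist scenario: superset

print(ai_1975 <= human)       # True: AI skills were a subset of human skills
print(ai_now <= human)        # False: AI now does things humans cannot
print(human <= ai_pessimist)  # True: nothing is left that only humans can do
```

The subset checks are all the circles amount to; the substantive question is which of these pictures the future will resemble.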
Long-term Learning of the Educational Optimist
Assuming we live in the first scenario (the educational optimist’s), if we were to plot the way humans develop skills over time against the way AI systems accrue them, the plots might look like

A random human starts off doing things worse than existing AI systems can (e.g., solving textbook homework problems fairly quickly), but as the human’s skills develop, the human eventually becomes capable of doing things (e.g., developing a paradigm-changing mathematical formalism) that exceed the capabilities of AI systems, even given the latter’s own concurrent development.
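In case the figure does not come through, here is a purely illustrative sketch of that optimist-scenario plot; the curve shapes are invented, chosen only so that the human curve starts below the AI curve and eventually overtakes it.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative curves only: the functional forms are made up, picked so that
# the human learner starts lower but compounds past the flattening AI curve.
t = np.linspace(0, 40, 400)        # years of learning / development
ai_skill = 5 + 2 * np.log1p(t)     # AI: starts higher, keeps growing but flattens
human_skill = 0.5 * t              # human: starts lower, eventually crosses over

plt.plot(t, ai_skill, label="AI systems")
plt.plot(t, human_skill, label="a human learner")
plt.xlabel("time")
plt.ylabel("skill (arbitrary units)")
plt.legend()
plt.show()
```

Nothing about the specific functions matters; the claim under discussion is only that the crossover eventually happens.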
How true is this? Can we really claim that there is a ceiling to how much these AI systems can develop? Is there something fundamental to how these systems are constructed or trained that will make them incapable of replicating the best efforts of humanity?
Given current systems, I think we can. The reasons are partly reflected in what AI is currently quite good at. AI can write solutions to technically difficult differential equations, solve math olympiad problems, and recognize and extrapolate geometric patterns. But could AI have developed general relativity? Or could it have sought out and then proved Gödel’s incompleteness theorems [3]?
In other words, AI systems currently function best in “well-defined epistemic spaces” where they are told a specific question and can string together steps to obtain an answer in a way that seems like deduction. But can they adequately explore a body of knowledge and then inductively and iteratively build theoretical formalisms that extend beyond that initial corpus of knowledge?
I don’t know. I don’t think anyone knows. And as long as we do not know, it is still probably useful to learn how to do seemingly pedestrian things like calculate derivatives and implement binary search ourselves. Primarily because learning how to do these basic things is a conduit to learning how to do the epistemically more complex tasks that (at least currently) seem beyond the reach of AI.
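And to be clear about how pedestrian these basics are, here is the sort of exercise I have in mind, a generic textbook iterative binary search (not tied to any particular course or library):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1   # target must be in the upper half
        else:
            hi = mid - 1   # target must be in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```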
Footnotes
[1] You might reasonably take issue with my including algorithms and large language models in the same category, but I am specifically stating that the "implementation" of an algorithm by a computer is in the same category as a large language model. Large language models themselves are computational implementations of many different algorithms.
[2] There's a tacit question of access here. In a future where AI systems encompass all human skills, it is unlikely that every human will have access to such omnipotent systems. In Western nations like the U.S., such access, like most valuable assistive technology, would likely be distributed along socioeconomic lines.
[3] This is somewhat of a sloppy counterfactual in that I'm comparing the abilities of AI systems that solve current well-defined problems to a conjecture about their inability to solve past scientific problems. A better comparison would be an AI system attempting to do paradigm-changing work today and failing. It is already failing to do so, but only in the trivial sense that it hasn't done such work so far. It would be more rigorous to analyze the way these models are trained and deployed and then determine, from that foundation, their science-making capabilities. That analysis will be in another post.