From Richard Hind, Chapel Haddlesey, North Yorkshire, UK
Since 2002, I have been teaching in the further education sector and have seen the impact of new technologies. AI is the most disruptive so far (27 December, p 24).
I started to think about how these models learn, in the context of some well-established theories of learning. While large language models (LLMs) appear to be capable of higher-order cognitive skills, such as analysis, application of knowledge and evaluation, I question the depth of their “understanding”.
Much has been written about this problem, including in the book The Neural Mind, which you reviewed. LLM intelligence is based purely on absorbing vast amounts of information and extracting complex patterns from it. Unlike us, however, an LLM can't understand that information in the context of physical and emotional experience.
Until artificial neural networks can experience the world as we do, their intelligence will remain artificial, a mere simulation of human intelligence, albeit a very powerful and quite useful one.