In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas AI systems (at least current ones) cannot. The internal representations learned by (or programmed into) AI systems do not capture the rich “meanings” that humans bring to bear in perception, language, and reasoning.
In this talk I will assess the state of the art of artificial intelligence in several domains and describe some current limitations and vulnerabilities of these systems, which can be attributed to their lack of true understanding of the domains in which they operate. I will explore the following questions: (1) What do AI systems actually need to “understand” in order to be reliable in human domains? (2) Which domains require human-like understanding? (3) What does such understanding entail?