The question of sentience in AI continues to oscillate between confusion and clarity. Meta’s chief AI scientist Yann LeCun has argued that a machine trained solely on language could never possess human intelligence. The notion that human knowledge is purely linguistic is a traditional and narrow view that arose in the 19th and 20th centuries. Meanwhile, the AI community has come to recognise that the Turing test measures how well a machine can imitate human intelligence, not whether it is truly intelligent. The power of language is that a large amount of information can be conveyed briefly. However, language is neither cheap to decode nor easy to write down, nor does it encompass every piece of knowledge.