It is difficult to better the summary of the author that appears on his Wikipedia page:
Terrence Joseph Sejnowski (born 13 August 1947) is the Francis Crick Professor at the Salk Institute for Biological Studies where he directs the Computational Neurobiology Laboratory and is the director of the Crick-Jacobs center for theoretical and computational biology. He has performed pioneering research in neural networks and computational neuroscience.
Sejnowski is also Professor of Biological Sciences and adjunct professor in the departments of neurosciences, psychology, cognitive science, computer science and engineering at the University of California, San Diego, where he is co-director of the Institute for Neural Computation.
With Barbara Oakley, he co-created and taught Learning How To Learn: Powerful mental tools to help you master tough subjects, the world’s most popular online course, available on Coursera.
At Ideas for Leaders we are always seeking out polymaths and those with cross-disciplinary knowledge and understanding, as they bring a more interconnected and grounded set of insights to those of us living in the messy, real world – in Professor Sejnowski we may have found the perfect exemplar!
AI, Generative AI, large language models, autonomous agents, generative adversarial networks, natural language processing, hallucination, model selection – the AI jargon is all around us these days, and every week brings another breakthrough or milestone. AI follows a succession of buzz-ideas from recent years – we have had digital transformation, ecosystems and virtual reality/the metaverse, to name a few. All have made, or are making, an impact, but none has quite lived up to its initial hype.
Organizations have adapted to them and increasingly include them in their thinking and strategies, but no paradigms have shifted. AI is certainly on the same initial trajectory, but there is a sense that this one may really be different, and rather than explode and dissipate like a spent flare, it may continue to soar to the heights projected and beyond. The principal reason for this is perhaps ChatGPT and now its cohort of competitors – Microsoft’s Copilot, Google’s Gemini and others.
The earlier buzztech only produced slightly clunky gizmos for early adopters: your reviewer recalls feeling queasy using an Oculus Quest VR headset mid-pandemic, but never used it for anything more than games to while away the time, and has not seen one since. When ChatGPT arrived in publicly accessible form almost two years ago – in November 2022 – there was a flurry of excitement followed by not very much; but slowly, as it and its competitors have become more user-friendly and accessible, the world has started to integrate AI into its everyday routines, and we can see the power it brings.
An article in today’s paper argues that professional firms are going to have to stop charging by time taken and focus instead on the outcome achieved, as the curation and analysis of data is no longer a time-consuming task; what may have taken half a day or longer can now be done as well, if not better, in seconds by a well-directed AI.
Not since the iPhone fostered the world of apps in 2008 has a new ecosystem of applications been so available to us – and like the arrival of the smartphone it is going to radically change how we live our lives.
This book is split into three parts, each of which grapples with a different aspect of AI. The first is the standard overview of ‘where are we today?’; the second, unusually for a non-technical book, explores the core working element of AI, the ‘transformer’, and how it works; and the third, and perhaps most valuable, section is Sejnowski’s views on where we are going.
Sejnowski writes very engagingly and clearly. He recognizes that today’s AI is hugely sophisticated, although we still don’t really know how intelligent it is, in the sense of ‘does it understand what it is saying?’ LLMs, he acknowledges, are great for first drafts, whether that is a journalist’s article, a lawyer’s contract or a developer’s code, ‘often with new insights, which speeds up and improves the final product. There are concerns that AI will replace us, but so far, LLMs are making us smarter and more productive.’
By parroting back elegant and (apparently) fact-filled responses, we humans are easily persuaded of LLMs’ brilliance. But, Sejnowski reminds us, LLMs are akin only to the conscious bit of our brain, the neocortex; they lack the animal part that lies beneath, which directs our behaviour – our ethics, our values and our emotions.
AI is a moving target: no sooner is something written about it than the next breakthrough occurs and we are materially further ahead once again. Nonetheless, Sejnowski outlines current use-cases, highlighting medicine, education, law, architecture and language/translation examples amongst others – and ingeniously illustrates his summaries and brief overviews with ChatGPT output.
He also explores the roles of ‘priming’ and ‘prompts’ in using ChatGPT. Priming is when you provide context or setting for the LLM, such as ‘you are a neuroscientist’ or ‘you are a friendly and helpful tutor. Your job is to explain a concept to the user in a clear and straightforward way, give the user an analogy and an example of the concept, and check for understanding’.
Prompts are the way you enquire of the LLM for answers. When it was asked nonsense questions, it returned nonsense answers; however, when prompted to flag nonsense questions, it did so. Similarly, the responses it gave varied according to how it was prompted. Sejnowski points out that LLMs vary in their ability to perform, but asks ‘can any human pass all the tests for all professions? LLMs have been around for only a few years. Where will they be in ten years or a hundred years?’
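To make the distinction concrete, here is a minimal sketch of how priming and prompting are usually kept separate when calling an LLM programmatically, in this case via the OpenAI Python client; the model name and the wording of the messages are illustrative assumptions rather than examples from the book.

    # A minimal sketch (not from the book): priming vs. prompting with the OpenAI Python client.
    # Assumes the openai package (v1+) is installed and OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            # Priming: the context or persona the model is given before any question is asked.
            {"role": "system",
             "content": ("You are a friendly and helpful tutor. Explain concepts clearly, "
                         "give an analogy and an example, and check for understanding.")},
            # Prompt: the actual enquiry put to the model.
            {"role": "user",
             "content": "Explain what 'priming' means when working with a large language model."},
        ],
    )

    print(response.choices[0].message.content)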
The ability to prompt well is going to become a key skill. He quotes one such expert: “Good prompt engineering mainly requires an obsessive relationship to language” – and understanding how we humans use language is integral to understanding how AI, or ‘large language models’, behaves.
Sejnowski is broadly upbeat about the benefits AI will bring. He doesn’t deny that jobs will be lost with the advent of more AI, but he also asks ChatGPT ‘What new jobs were created by the introduction of the Internet?’ and a string of roles is presented – give it a go yourself!
The second section focuses on the central element of AI, the ‘transformer’ (who knew that GPT stood for ‘Generative Pre-trained Transformer’?). In terms of content to do with leadership this section is relatively unimportant – but in terms of context, and understanding how AI works, it is a goldmine. Everyone should read these 60-odd pages if they truly want to understand how AI functions.
LLMs are fundamentally different to computers (CPUs) in how they work. Our own brains work a million times slower than silicon-based networks, but we have many more synapses/connections than any LLM (currently) has. But for how long? Sejnowski later states that today’s LLMs are at a stage of development equivalent to the Wright brothers’ early airplanes.
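As a rough back-of-envelope illustration of that speed-versus-connectivity trade-off, the sketch below plugs in commonly cited order-of-magnitude figures; the synapse and parameter counts are assumptions on the reviewer’s part, not numbers taken from the book.

    # Back-of-envelope comparison using rough, commonly cited order-of-magnitude figures.
    # These values are illustrative assumptions, not figures quoted from Sejnowski's book.
    neuron_rate_hz = 1e3      # neurons signal on roughly millisecond timescales
    silicon_clock_hz = 1e9    # silicon runs at gigahertz clock rates
    brain_synapses = 1e14     # on the order of 100 trillion synapses in a human brain
    llm_parameters = 1e11     # on the order of hundreds of billions of parameters in a large LLM

    print(f"Silicon is roughly {silicon_clock_hz / neuron_rate_hz:,.0f}x faster per operation")
    print(f"The brain has roughly {brain_synapses / llm_parameters:,.0f}x more connections")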
The third section looks at how AI will develop and how we can benefit from that. The goal is Artificial General Autonomy (AGA), which has been described as the prodigy that could rival human thinking, its special power being that it actually gets the meaning behind its answers. But we are not there yet. Sejnowski believes that to achieve this we need to treat LLMs more like human brains: giving them time to learn from their parents (older LLMs), ensuring ‘reinforcement learning’ occurs, and letting LLMs evolve bodies so they can not only do, but also control and sense what they are thinking – pure cybernetics.

They also need to extend their interaction memory: currently LLMs start afresh after a finite number of data tokens has been processed (which is why consumer LLMs like ChatGPT can only summarize a relatively small amount of text). This token memory needs to become much larger so the model can continue to build its knowledge from prior interactions. Human memory is substantially processed during sleep in the hippocampus – maybe AGA needs sleep time too. The author notes that ‘The new conceptual frameworks in AI and neuroscience are converging, accelerating their progress. The dialog between AI and neuroscience is a virtuous circle that is enriching both fields.’
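To illustrate that ‘finite token memory’ point, here is a small, self-contained sketch of the kind of sliding-window truncation consumer chatbots rely on; the word-count stand-in for tokens and the budget figure are simplifications of the reviewer’s own, not details from the book.

    # A simplified sketch of a sliding context window (not code from the book).
    # Real systems count tokens with a tokenizer; splitting on whitespace is a crude stand-in.
    def fit_to_context(messages, max_tokens=4000):
        """Keep only the most recent messages that fit within the token budget."""
        kept, used = [], 0
        for message in reversed(messages):   # walk backwards from the newest message
            cost = len(message.split())      # rough estimate: one word is about one token
            if used + cost > max_tokens:
                break                        # everything older than this point is forgotten
            kept.append(message)
            used += cost
        return list(reversed(kept))          # restore chronological order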
This final part also delves deeply into neuroscience and biology and what we can draw from them to show us a way forward with AI. It can get quite technical and loses some of the lightness of touch Sejnowski displays earlier in the book.
Everyone is talking about Artificial Intelligence at the moment – what it can do for us, and what it will do to us. And yet how many of us, outside of the AI-tech world, actually understand the basic elements of it? This book is an excellent and multi-faceted introduction to the context and foundations of AI. If you want to get a grasp on how AI has evolved, and where it might take us, there can be few better guides than this book and Terrence Sejnowski.
Title: ChatGPT and the Future of AI: The Deep Language Revolution
Author/s Name/s: Terrence J. Sejnowski
Publisher: MIT Press
ISBN: 978-0-262-04925-2
Publishing Date: October, 2024
Number of Pages: 272