Neuroscientific approaches have played an important role in AI research from the very beginning. Setting aside the detours of logic-based AI, ideas about the reliability of thinking machines and their implementation by means of neurons were put forward by Alan Turing and Frank Rosenblatt as early as the 1940s and 1950s.
We now have AI systems, such as Bard or ChatGPT, that have mastered the Turing test almost in passing. They show human traits precisely in their errors: for example, when an AI "hallucinates" knowledge, as computer scientists currently call it when ChatGPT freely invents plausible-sounding answers to questions. In cognitive neuroscience, we describe the equivalent phenomenon in humans as "confabulation". These AI systems are evolving rapidly and will continue to evolve on their own. So for quite some time we have been in a race between developers building AI systems that communicate in ever more human-like ways and those building ever better (AI) systems to recognize AI-generated utterances.
In neuroscience, we see features of our biological intelligence that cannot (yet) be experienced by a machine. There are the personal sensory experiences that are not digital but analog, captured directly by our senses: the human conversation, the concert, or the theater visit will become more important and, one hopes, more prominent. Another aspect considered essential to our human intelligence is embodiment - the fact that our intelligence develops in a body and can interact with the world only through that body. This embodiment imposes physical and temporal limits on our intelligence that shape the products we make.
Knowledge builds trust
With each development, the question of which activities should be automated and which should not becomes more pressing for society. In medicine, properly trained AI systems have proven to be better, faster, and more accurate than human experts in diagnostics. With our AI Clinician, we are working to bring this to digital therapeutics as well. We see the future of such systems as members of a team in which humans and AI complement each other, freeing up time and capacity.
Trust is important for all of this - and it is also a mandate for teaching: only those who understand how a technology works and know its limits and possibilities can ultimately trust it. This means we need to introduce AI literacy education in schools (covering topics such as AI fairness and bias). At the same time, AI content will need to be integrated into all university disciplines. Over the years, I have been fortunate to initiate a master's program in AI that offers advanced studies for students from all disciplines.
One of the big questions we address in these courses is that of responsible AI - and I would like to hand this over to Jerry John Kponyo: how can an AI learn to be responsible, and which disciplines are particularly needed?