Screening of Life after BOB at AI Monday on 12 September © Light Art Space

19 September 2022

As part of AI Monday’s visit to “Life after BOB” at Halle am Berghain, an expert panel discussed the overarching theme of AI-human interaction.

This month’s edition of AI Monday definitely had something special up its sleeve: together with Light Art Space (LAS), the Berlin AI community was invited to an exclusive visit to “Life after BOB” at Halle am Berghain. Set in “a great anomic era”, Life After BOB by Ian Cheng is a 50-minute episodic anime series that imagines a future world in which AI entities co-inhabit human minds. For this major exhibition, Cheng brought his animation into the physical plane for the first time, creating a mesmerising new environment that allows visitors to materially inhabit the psychological thematics, lore and world of Life After BOB.

The AI Monday screening was then followed by a panel discussion on the future of AI-human interaction featuring Rahel Flechtner (researcher at DFKI and the FHP Interaction Design Lab), Florian Dohmann (Founder & Chief Creative at Birds on Mars) and Prof. Dr. med. Surjo Soekadar (Professor of Clinical Neurotechnology at Charité Berlin), joined by host Andreas Schepers, Head of Communications at DFKI Berlin.

Prof. Dr. med. Surjo Soekadar, Rahel Flechtner, Andreas Schepers and Florian Dohmann (from left to right) © Light Art Space

Here is the transcript from the panel discussion:

Andreas Schepers: Thank you all for being here. Let’s start with the panel discussion and try to understand what the current state of human-machine interaction and human-AI interaction looks like. Maybe the first question goes to Rahel as a researcher. Today we already interact with AI on a daily basis – everyone has a smartphone or orders things via Amazon’s Alexa. What projects are you working on to achieve the next step in human-machine interaction?

Rahel Flechtner: What I think at the moment is that there is a lot happening in terms of how our relationship with technology might change. We have used technology as a tool for a long time, but over the last few years technology has more and more become a kind of counterpart – we have started, in a way, to negotiate things with technology, and this is a very interesting concept being discussed in research.

Andreas Schepers: Tell us more about the concept. What does this exactly mean?

Rahel Flechtner: It’s that we need to develop new paradigms and also guidelines for designing the relationship we want to have with technology. Products like Alexa are designed in a very human-like, anthropomorphic way. But could there be approaches that also take into consideration the system’s characteristics and its underlying functionality?

Andreas Schepers: Super interesting. Let’s get Florian on board with this. Whereas Rahel represents the research side, for me you represent the creative and business side. Your agency works with actual clients, and you do real projects. When you think of human-AI interaction today, what is really out there?

Florian Dohmann: A few things that you mentioned, Rahel, I find very interesting. Talking about the corporates here in Germany, I would say that especially in the last few years AI has become more and more production-ready. When I started in the data science and machine learning world, where you develop real-world systems, it was a lot about exploration and building prototypes and proofs of concept, but during the last few years it really became IT. It is not only about math and theoretical physics – that’s in the background, to understand the networks – but about really bringing it to a production-ready stage and, in the end, to applied AI software. So mathematical theory is used to bring it into software solutions. It is also getting more and more interactive and enjoyable. You really start to experience AI.

Andreas Schepers: We will definitely get back to that later. Surjo, you’re an expert on a fascinating topic, the brain-computer interface. A lot of things we saw in the movie reference this technology as if it were available today. Can you give us a short introduction to what it is today? And is it really as scary as I imagine?

Surjo Soekadar: Well, we will see. We are at a decisive moment now. BCIs – brain-computer interfaces – were conceptualized 50 years ago, and only in the last few years have they really become clinical applications. For example, if you are paralyzed and cannot move and cannot speak, it is possible to train your brain activity to communicate. That was the first clinical application of a brain-computer interface, and now we are at a stage where we can translate brain activity into, for example, finger movements of a prosthetic device that allows you, if you are completely paralyzed, to grab a bottle and drink. This was one-directional brain-computer interfacing, but now we have also implemented bi-directional brain-computer interfaces, so we are not only reading out the brain but also directly stimulating it. For example, if you have a prosthetic device with sensors in the fingertips and you grab a bottle, we can stimulate the brain and you can feel the bottle with your brain without the stimulation actually coming from your body.
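
To make the architecture concrete, a bi-directional loop like the one Soekadar describes can be sketched schematically in a few lines of code. This is a purely illustrative sketch – every function below is a hypothetical placeholder standing in for recording hardware, decoders and stimulators, not an actual clinical system:

```python
# Schematic sketch of a bi-directional BCI control loop.
# Every function here is a hypothetical placeholder, not a real device API.
import random

def read_motor_cortex_activity():
    """Stand-in for recorded neural activity (one value per channel)."""
    return [random.gauss(0.0, 1.0) for _ in range(8)]

def decode_grip_command(activity):
    """Toy decoder: map mean activity to a grip force in [0, 1]."""
    mean = sum(activity) / len(activity)
    return max(0.0, min(1.0, 0.5 + 0.2 * mean))

def read_fingertip_pressure(grip_force):
    """Stand-in for pressure sensors in the prosthetic fingertips."""
    return grip_force * 0.9  # pretend the bottle pushes back

def stimulate_somatosensory_cortex(pressure):
    """The 'write' direction: stimulation proportional to sensed touch."""
    print(f"stimulating cortex with intensity {pressure:.2f}")

# Read direction (brain -> machine), then write direction (machine -> brain):
for _ in range(3):
    activity = read_motor_cortex_activity()
    grip = decode_grip_command(activity)
    pressure = read_fingertip_pressure(grip)
    stimulate_somatosensory_cortex(pressure)
```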

Now you can also imagine that this can be used in the cognitive or affective domains. We are currently working on the motor domain, because there is a pressing need for BCIs there, but of course there are a lot of people – one billion suffer from a brain disorder – whose conditions relate to other domains, like depression or obsessive-compulsive disorder, and that’s what was shown here. There are lots of reasons why you can become mentally ill, and the question now is whether such a BCI can detect a shift in your brain activity that indicates you are leaving your, let’s say, healthy zone and entering a disease zone, and then directly stop this process. We are already doing this with patients who are depressed, using non-invasive brain stimulation, and that helps them to overcome their depression. In the future, this will be automated, so we will stimulate the brain before you have a clinically relevant depression. That’s what we are aiming for.

Rahel Flechtner: The first question that came to my mind for you was: will we be able to start our coffee machine with our brains in the future, and decide whether it’s the coffee machine or the mixer? Do you think that will be possible?

Surjo Soekadar: Sure, it’s associated with a lot of effort. It’s extremely expensive, but you could essentially do it already now.

Andreas Schepers: But could we accidentally mix up things?

Surjo Soekadar: That is an issue, but it’s already an issue without the BCI. What is very important with all these interactions is that you always have a veto possibility. And that is also something that is inherent in our nervous system: when you realize that you are doing something wrong, you have a direct veto. There was a big discussion about free will, and the essence was that we might not have full free will in all degrees, but we always have the freedom to veto. For the interaction between human and machine, the human always has to be on top of it.

Florian Dohmann: Can I ask something here? How does it feel to touch a bottle?

Surjo Soekadar: I don’t have this implant, but if you ask a patient, they will say that it’s a strange feeling of tingling, without being very defined. It’s an electric signal that comes from your upper limb, but you can’t really say what it is. And the more you are exposed to this kind of feedback, the better you can discriminate.

Rahel Flechtner: I can imagine that how our brains look in this very moment, when thinking about a specific action, might be very individual. So I guess there is a very hard training process that I would have to go through? From a design perspective it would be interesting what this whole process would look like.

Surjo Soekadar: What I should add here is that the BCIs right now all work on the basis of operant conditioning: we record from your brain cells and, by closing the loop with your environment, you learn to modulate the activity of these cells and then to control the external device. So we are not really decoding what’s in your brain – there are lots of things going on that we don’t understand – but we train certain nerve cells to exert this kind of learned conditioning. Decoding what’s actually going on in your brain is a super difficult task, and it can work within certain degrees of freedom: if you have five different coffee machines and you want to use one, there are only five degrees of freedom. But that’s not how our brain is actually composed. In certain areas it might be possible to use a BCI with limited degrees of freedom where the classification accuracy is pretty high, but in real life it’s still far from being usable.
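
To illustrate the point about limited degrees of freedom, here is a minimal sketch of a decoder choosing among a small, fixed set of targets – say, five coffee machines. The synthetic data, features and classifier below are illustrative assumptions, not the actual clinical pipeline:

```python
# Illustrative sketch only: a decoder with a small, fixed set of targets,
# NOT a real clinical BCI pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_CLASSES = 5    # five selectable targets -> five degrees of freedom
N_TRIALS = 200   # recorded trials per class (synthetic stand-in data)
N_FEATURES = 16  # e.g. band-power features from a handful of channels

# Synthetic "brain" features: each class gets a slightly shifted mean,
# mimicking the separable patterns operant conditioning would reinforce.
X = np.concatenate([
    rng.normal(loc=c * 0.5, scale=1.0, size=(N_TRIALS, N_FEATURES))
    for c in range(N_CLASSES)
])
y = np.repeat(np.arange(N_CLASSES), N_TRIALS)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# With only five classes, even a simple linear decoder scores well --
# which is exactly why accuracy looks high at low degrees of freedom.
clf = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(f"decoding accuracy over {N_CLASSES} targets: "
      f"{clf.score(X_test, y_test):.2f}")
```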

Rahel Flechtner: So it won’t happen to me that when I’m on public transport and decide I would love to listen to my music, but forgot to bring my headphones, I end up embarrassed in public because it plays out loud?

Surjo Soekadar: I think the interaction would be like this: we are recording lots of biosignals, not only from your brain; from your behavior we can infer what you would like to do – whether you would actually like to play the music louder or not – and then we combine these biosignals to make sense of it without you saying anything.

Andreas Schepers: That’s an interesting perspective on what kind of applications we can think of. You as a creative, Florian, what do you make of this? What is your dream application for a BCI?

Florian Dohmann: Oh, very nice question. What I currently find very interesting is that we all read about these huge milestones in the development of AI. Not so long ago there was the first chess computer, and since then a lot has happened, like autonomous driving and all those use cases for forecasting certain developments. All of this is traditional AI and machine learning: systems learning from the past so that we can analyze it and use it for prediction, for example. I think what we are currently seeing is that we can use AI for more and more diverse cases. It’s like a new path that is just starting. In the creative field, for example, we have DALL·E commercializing the creative AI industry, which was just a niche thing a few years ago. It’s now scaling. You can prompt text, and based on that the AI generates an image, which is super fascinating and crazy, and it changes the paradigm of thinking about AI. There is, for example, a new discipline called Prompt Design now, where you learn how to ask questions in order to get the AI to create the output that you want.
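
As a small illustration of what prompt design looks like in practice, here is a sketch that composes a structured prompt and sends it to a text-to-image endpoint using the legacy openai Python SDK; the prompt pieces and parameters are arbitrary examples, and the public API itself only appeared after this panel took place:

```python
# Illustrative sketch of prompt design with a text-to-image API.
# Uses the legacy openai Python SDK (pip install "openai<1.0"); the prompt
# components below are arbitrary examples, not a recommended recipe.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Prompt design treats the prompt as structured input, not a casual sentence:
subject = "a panel discussion between humans and a friendly AI"
style = "anime still, soft lighting, wide shot"
prompt = f"{subject}, {style}"

response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
print(response["data"][0]["url"])  # URL of the generated image
```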

Andreas Schepers: That’s kind of ironic, because we were talking about computer interfaces and now we’re back to command lines. You have to have an idea of how a machine might work in order to create a proper prompt…

Florian Dohmann: It’s great, because it brings us to the point of how valuable it is to ask questions – maybe human creativity is better at asking questions, and the machine is, in some parts, better at giving the answers. I think that AI can be much more diverse and colorful: using AI not only for these classical use cases, but also for climate purposes, for example. All that is just starting, and there are so many possibilities. My ideal AI would be an AI that is more diverse and actively developed by everyone. We now have the chance to work on that. It’s not the AI that decides what the future will look like – we are the ones.

Andreas Schepers: You mentioned diversity and inclusion, and we already see that AI technology can help to include people with special needs, for instance. Rahel, at DFKI there are currently projects where you use AI to support people with special needs, for example TEXAS. Can you talk about that a little?

Rahel Flechtner: It’s a pair of interactive trousers you can wear for rehabilitation that measures body movement and helps you train your motor skills. There is also another project I am currently working on in the field of mental health, in which we are working on a kind of chatbot to support vulnerable and anxious people. These could also be very interesting fields of application.

Andreas Schepers: Coming back to Florian’s very positive outlook that we can influence AI development: I would like to know from you, Surjo, where will your personal research lead? What is your expectation – where will we be in 5 or 10 years?

Surjo Soekadar: I am a doctor, so my aim is to empower patients. That’s something that you mentioned and that’s shown in the movie. The question is where assistance stops and where rehabilitation starts. For instance, if you use your smartphone and its GPS, I can tell you that after one year your hippocampus will probably shrink. So we assist you, and your brain is actually degrading, because you don’t need the skill anymore. So the big question is: how do we use the technology to empower us and make us better? But what is better? All this has been addressed in the movie, and I think that’s a very critical question here, because what you could see is that in the end the technology is assisting you here and there, but it’s not actually empowering you. And that’s a very small but decisive difference.

Florian Dohmann: And it’s also not only about humans, but also about animals and our planet. There are a lot of things to do – not only for us.

Surjo Soekadar: Yes, that’s a very important point – to create connection and to understand what we are. This is of course a question that nobody has really answered, but in the end we need to ask what we actually are and what the difference is between us and animals.

Andreas Schepers: Florian, you mentioned the giant leaps that have been achieved in the past few years. Do you dare to predict what AI might be able to do in five years?

Florian Dohmann: We will be somewhere further ahead for sure, but in the real world, where all the decisions are made, where the resources are used and where the economy is going crazy, I think we will still be very far from what is possible. But it will get faster and faster. For me it’s also important to slow down a bit and think about which of all those possibilities out there to choose, and what to use them for. In five years, this will probably be even more important than thinking about what is possible with AI, because a lot will be.

Andreas Schepers: Interesting that you are seeing a gap between what would be technically feasible and what we will actually see in the real world. Why? What are the obstacles?

Florian Dohmann: Organizational context, I would say. For me personally it’s easy to use my Python code to spin up an AWS machine, download a GitHub project, train a model and create some beautiful things. But I can’t change that much with that alone, because organizations worldwide are very slow, especially the bigger ones. Adapting to these new movements takes time. The fancy algorithm in a company is less than one percent of what people build when they build an AI solution. It’s about structures, about governance, processes, hiring people, data quality. You need so much to create value out there.

Andreas Schepers: Rahel, what is your specific view on the future and what would be your research questions?

Rahel Flechtner: A question that I have is whether we should do everything that we can do, and in which direction we should go. Or are there directions we shouldn’t go in? Should we build AI systems that mimic human behavior and pretend to be human – chatbots or whatever – or is there a different path? These are the questions that will hopefully be answered, or at least researched, in the future.

Find out more about AI Monday Berlin here.

More info and tickets for “Life after BOB” here.