When we think of chatbots, we usually think of unsatisfying online exchanges with digital assistants. This is exactly where Alexander Weidauer comes in with his Berlin start-up Rasa: its Open Source machine learning tools let developers and product teams automate contextual, multi-turn conversations and leave frustrating chatbot experiences behind. Several thousand developers now work with the toolkit, which is deployed at a number of Fortune 500 companies - time to talk to #ki_berlin.
How did the idea for Rasa come about?
In 2015, my co-founder Alan Nichol and I wanted to develop a chatbot ourselves, but we found that there were hardly any technologies which could be adapted to our individual use case. Also, everything ran in the cloud and you had no real control over your own data. We then exchanged ideas with other chatbot developers and quickly realized that there was great demand for an Open Source product. That's how the Rasa concept came about. At the end of 2016, we released Rasa NLU, our product for the detection of intents and entities [editor's note: the user's intention and the relevant details in a message], as Open Source. Now, two and a half years later, we have had more than half a million downloads and over 300 developers from all over the world have contributed to Rasa.
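[Editor's note: to illustrate what intent and entity detection means in practice, here is a minimal, purely illustrative sketch in Python. The message, intent and entity names are invented for this example and do not reflect Rasa's actual API or data format.]

```python
# Purely illustrative sketch of intent and entity detection
# (invented intent/entity names; not Rasa's actual API or schema).

message = "Book a table for two at Luigi's tomorrow evening"

# A trained NLU model maps the raw text to a structured interpretation:
# which intent the user expressed, and which entities (details) it contains.
parsed = {
    "intent": {"name": "book_restaurant", "confidence": 0.93},
    "entities": [
        {"entity": "party_size", "value": "two"},
        {"entity": "restaurant_name", "value": "Luigi's"},
        {"entity": "time", "value": "tomorrow evening"},
    ],
}

# A dialogue system can then act on the structured result instead of raw text.
if parsed["intent"]["name"] == "book_restaurant":
    details = {e["entity"]: e["value"] for e in parsed["entities"]}
    print(f"Booking request: {details}")
```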
Rasa is an Open Source tool. How does the model work and where is it used?
There are several Open Source tools that pursue a similar business model. Rasa offers software which is completely freely available: it runs on your own infrastructure, so you have full control over your data. The code is also open and can therefore be adapted to a company's individual requirements. As soon as large companies scale their digital assistant, special requirements for security and support arise. That's where our Enterprise version comes in.
You recently received millions of euros in funding. What makes Rasa a promising bet for investors?
The "Conversational AI" market is still in its infancy and offers a lot of long-term potential. At the same time, we see a tendency for expertise to be built up more and more in-house. Companies understand that the technology can give them a sustainable competitive advantage and that, with our tools, they keep their future in their own hands. Rasa has great potential to become a standard infrastructure for Conversational AI at scale.
What aroused your interest in chatbots and Natural Language Understanding (NLU)?
We've always wanted to build tools that make it easier to use computers. With treev we tried to do this for document search. We believe that Conversational AI is the best interface to enable everyone to use computers efficiently.
You founded your first company, treev, in London. What makes Berlin a good location for Rasa?
We came to Berlin to take part in the Techstars program, and so far we have been very happy with that step. Berlin offers the right infrastructure and a good mix of great talent, start-ups and innovative labs from big companies.
We now also have another location in San Francisco, so as to be close to our community and customers in the USA. Nevertheless, the Berlin location will remain an important R&D centre for us and will be significantly expanded.
How do you see the AI scene developing in Berlin?
The AI scene has definitely developed very positively. A few years ago, many companies were founded on the basis of an initial idea; many of them have now taken the next step and developed a viable business model. I think the mix of start-ups, investors, corporates and catalysts such as meetups and accelerators puts Berlin at the forefront and is unique in Germany.
What is the most important factor in making chatbots more intelligent, so that they are perceived as help on a par with a human? And how far are we from chatbots being indistinguishable from humans?
Language and conversation are very complex, and development here is still in its infancy. However, there has been more research in these areas recently and we are seeing good progress. One of the most important factors for success is building a scalable product that covers many use cases and also handles so-called "edge cases" well, in other words rare and difficult situations.
Current bots are focused on handling specific use cases; we are still a long way from a bot with a general understanding of all possible topics. Nevertheless, I believe that AI assistants will prevail. After all, language is one of the most natural means of communication for us humans.
The question of all questions: where does machine learning stop and where does artificial intelligence begin?
From a scientific point of view, there is no sharp boundary between these terms, because machine learning is a part of artificial intelligence. The question of when machines can truly be called intelligent is best philosophised about over a glass of wine! In any case, we still have a lot of work to do.
Thank you very much for your time.