Google has announced that it has built a chatbot that can hold far better conversations than any other state-of-the-art chatbot out there.
If you’ve ever tried to have a general conversation with any virtual assistant or chatbot, you will have noticed how they only perform well if you stick to what they were built to do. That’s because modern chatbots, or “conversational agents” as Google calls them, are specialized and aren’t good at handling the wide variety of topics that humans usually discuss.
On the other hand, “open-domain chatbots” – those that can handle more general conversations as opposed to responding to specific keywords – have a critical flaw: “They often don’t make sense.”
To solve both problems, Google attempted to develop a chatbot that is not specialized and can chat about virtually anything. The result is Meena, “a 2.6 billion parameter end-to-end trained neural conversational model.” Meena can conduct conversations that are both more sensible and more specific than those with existing state-of-the-art chatbots.
To measure this, the researchers came up with a new metric called Sensibleness and Specificity Average (SSA). In a recent paper, they explain that SSA “captures key elements of good conversation,” and that their research shows a “strong correlation between perplexity and SSA”: the lower the perplexity, the higher the SSA.
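To make the idea concrete, here is a minimal sketch of how an SSA-style score could be computed from human labels, assuming each response is simply marked as sensible (yes/no) and specific (yes/no). The field names and example data are illustrative, not taken from the paper.

```python
# Minimal sketch of an SSA-style calculation, assuming each response has been
# labeled by human raters as sensible (yes/no) and specific (yes/no).
# The labels below are made up for illustration, not taken from the paper.

labels = [
    {"sensible": True,  "specific": True},
    {"sensible": True,  "specific": False},
    {"sensible": False, "specific": False},
    {"sensible": True,  "specific": True},
]

sensibleness = sum(l["sensible"] for l in labels) / len(labels)  # 0.75
specificity = sum(l["specific"] for l in labels) / len(labels)   # 0.50
ssa = (sensibleness + specificity) / 2                           # 0.625

print(f"Sensibleness: {sensibleness:.0%}, Specificity: {specificity:.0%}, SSA: {ssa:.0%}")
```

The real evaluation protocol is more involved (crowdworkers judge each response in the context of the conversation), but the core idea is the same: average the two label rates.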
Remarkably, Meena scores an SSA of 79%, while human-level SSA is around 86%. The next-best existing chatbot scores 23 percentage points lower than Meena.
This means there is significant potential to reach human-level SSA if perplexity is optimized further, which in this case means lowering it.
This is exactly what Google’s researchers are aiming for, but more research is necessary at this point.
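For context, perplexity measures how confidently a language model predicts each next token of a response. The sketch below shows the standard calculation (the exponential of the average negative log-likelihood per token); the probabilities are invented purely for illustration.

```python
import math

# Hedged sketch: perplexity is the exponential of the average negative
# log-likelihood a model assigns to each token of the reference response.
# The probabilities below are invented purely for illustration.

token_probs = [0.41, 0.22, 0.65, 0.09, 0.30]  # model's probability for each next token

avg_neg_log_likelihood = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_likelihood)

print(f"Perplexity: {perplexity:.2f}")  # a lower value means the model is less "surprised"
```

Because this number can be computed automatically, it makes a convenient training target that, per the paper, tracks the human-judged SSA.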
A chatbot with the ability to chat about virtually anything that users can throw at it has a wide variety of interesting applications, such as “further humanizing computer interactions, improving foreign language practice, and making relatable interactive movie and videogame characters.”
While the team has so far focused on sensibleness and specificity, future attributes – like personality and factuality – are also worth considering.
Finally, the research doesn’t come without challenges, which is why Google will not be releasing a demo for external use right now. The team first needs to tackle issues like safety and bias before that can happen.