Tech big guns get chatty with virtual agent support

Published on 14/04/2020 | Written by Heather Wright



Covid-19 accelerating chatbot technology…

Love them or loathe them, chatbots – and the machine learning and artificial intelligence that helps drive them – are vying for a front row seat in the fight against Covid-19. Now Google has jumped into the fray, offering a program to streamline deployment of virtual agents.

The Covid-19 pandemic has seen an influx of queries to a range of organisations – from travel agents and airlines in the early days as people rushed to change bookings and get home, to financial service providers, healthcare organisations and government help lines.

“When we emerge from this crisis, chatbots are likely to become digital portals for interactive healthcare.”

But the sudden, unprecedented demand has put a strain on customer support resources for many organisations.

Google’s response is its Rapid Response Virtual Agent program, designed to help healthcare providers, public health bodies, non-profit organisations and businesses in sectors such as travel, financial services and retail quickly launch Google’s Contact Center AI virtual agents to answer questions across voice, chat and social media channels.

Contact Center AI launched late last year.

The Rapid Response Virtual Agent program includes open source templates to add Covid-19 content to virtual agents, and is available globally in the 23 languages supported by Dialogflow. The company says it’s also working with its contact centre and system integrator partners, including 8×8, Avaya, Cisco, Genesys, Mitel and Twilio, to ensure deployments and integrations happen quickly.
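The virtual agents themselves run on Dialogflow, so once an agent has been configured – for example from the Covid-19 templates – a website or contact centre front end queries it through the standard detect-intent API. The sketch below, using the google-cloud-dialogflow Python client, shows roughly how that call looks; the project ID, session ID and question are placeholders, and this is a generic illustration rather than part of Google’s templates.

```python
# pip install google-cloud-dialogflow
from google.cloud import dialogflow


def detect_intent_text(project_id: str, session_id: str,
                       text: str, language_code: str = "en") -> str:
    """Send a single text query to a Dialogflow agent and return its reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the user's question in the request objects Dialogflow expects.
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # The matched intent's response text, as authored in the agent.
    return response.query_result.fulfillment_text


if __name__ == "__main__":
    # Hypothetical project and session IDs; replace with your own values.
    print(detect_intent_text("my-covid19-agent", "web-session-001",
                             "What are the symptoms of Covid-19?"))
```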

Google head of product for conversational AI, Antony Passemard, says: “The work we’re doing today is part of our focus on helping businesses and organisations most impacted by the COVID-19 pandemic.”

The rapid spread of Covid-19 has driven a surge in chatbot deployment globally, many of them focused on disseminating accurate information about the virus in the face of wild claims on social media of ‘cures’ such as drinking hot drinks, inhaling hot air or drinking industrial bleach.

Both the World Health Organisation and the US Centers for Disease Control and Prevention (CDC) have launched chatbots, and Microsoft said last week that since March more than 1,200 Covid-19 self-assessment bots have been created by health organisations using the Microsoft Healthcare Bot service. Microsoft says those bots have reached 18 million individuals, serving more than 160 million messages.

Microsoft says Emergency Medical Service Copenhagen created and launched a bot in less than two days in mid-March, with the bot answering 30,000 calls on its first day, lowering the number of inquiries to Denmark’s emergency number and reducing demand on healthcare workers.
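Most of these self-assessment bots walk a user through a symptom checklist and map the answers to a triage recommendation. The snippet below is a deliberately simplified, hypothetical illustration of that kind of rule-based flow; it is not the Microsoft Healthcare Bot API, and real services follow clinically validated protocols rather than this sketch.

```python
# Hypothetical, heavily simplified triage rules for illustration only.
EMERGENCY_SYMPTOMS = {"difficulty breathing", "chest pain", "bluish lips"}
COMMON_SYMPTOMS = {"fever", "cough", "fatigue", "loss of taste or smell"}


def assess(symptoms):
    """Return a coarse triage recommendation for a set of reported symptoms."""
    if symptoms & EMERGENCY_SYMPTOMS:
        return "Seek emergency medical care immediately."
    if symptoms & COMMON_SYMPTOMS:
        return "Self-isolate, monitor your symptoms and arrange a Covid-19 test."
    return "No Covid-19 symptoms reported; follow general public health advice."


print(assess({"fever", "cough"}))
```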

Venkataraman Sundareswaran, World Economic Forum fellow, digital trade, and Kay Firth-Butterfield, World Economic Forum head of artificial intelligence and machine learning, say chatbots’ intuitive interfaces offer a low-friction approach to disseminating critical information to vast populations, with the information available 24/7. They also offer strong potential for curated information, customised to the needs and symptoms of individuals, the pair note in a recent report.

While Sundareswaran and Firth-Butterfield say chatbots are likely to become digital portals for interactive healthcare post-Covid, they caution that there are still challenges to address.

Those include inconsistent results, with several reports noting that ‘patients’ have been provided with conflicting advice when trying multiple bots. Miscommunication between chatbots and users, customer misperceptions, incorrect/poor guidance, wrong diagnoses and failure to achieve timely interventions are also flagged as possible issues, along with broader ethical concerns.

“Clear and effective governance frameworks are needed to guide chatbot developers, platform providers and healthcare systems,” Sundareswaran and Firth-Butterfield say. “These frameworks must take into account a range of factors, including validation/accreditation, performance assurance, patient expectation management and access, legality, privacy, and security. Further, for the use of AI in chatbots, the frameworks must address transparency, bias, fairness, and data privacy and data rights issues.”
