Google uses AI to better recognize search queries from people in crisis situations

In a personal crisis, many people turn to an impersonal source of support: Google. Every day, the company fields searches on topics such as suicide, sexual assault, and domestic violence. But Google wants to do more to steer people toward the information they need, and it says new AI techniques that better parse the complexities of language are helping.

Notably, Google is integrating its latest machine learning model, MUM, into its search engine to “more accurately recognize a broader range of personal crisis searches.” The company revealed MUM at its I/O conference last year and has since used it to augment search with features that try to answer questions connected to the original query.

In this case, MUM will be able to recognize searches related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.

“MUM can help us understand longer or more complex queries like ‘Why did he attack me when I said I didn’t love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”

Other examples of queries MUM can respond to include “most common ways suicide is completed” (a search Merritt says earlier systems “may have previously understood as information searches”) and “suicide hot spots in Sydney” (where, again, earlier systems would likely have returned travel information, ignoring the mention of “suicide” in favor of the more popular “hot spots” query). When Google detects such crisis searches, it responds with an information box telling users “Help is available,” usually accompanied by a phone number or website for a mental health charity like Samaritans.
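To make the idea concrete, here is a minimal sketch of how crisis-intent detection over free-text queries can work in principle. It is not Google’s MUM pipeline: the zero-shot model, candidate labels, confidence threshold, and help-box text below are all illustrative assumptions.

```python
# Hypothetical sketch: flagging crisis intent in search queries with an
# off-the-shelf zero-shot classifier. This is NOT Google's MUM system;
# the model, labels, and threshold are illustrative assumptions.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CRISIS_LABELS = ["domestic violence", "suicide", "sexual assault"]
OTHER_LABELS = ["travel", "general information"]

# Placeholder banner; a real system would show a vetted helpline.
HELPLINE_BOX = "Help is available. Contact Samaritans: samaritans.org"

def handle_query(query: str, threshold: float = 0.7) -> str | None:
    """Return a crisis-support banner if the query looks like a crisis search."""
    result = classifier(query, CRISIS_LABELS + OTHER_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label in CRISIS_LABELS and top_score >= threshold:
        return HELPLINE_BOX
    return None  # fall through to ordinary search results

print(handle_query("why did he attack me when I said I didn't love him"))
print(handle_query("hot spots in Sydney"))
```

The point the sketch illustrates is Merritt’s: keyword matching alone would miss the first query, while a model that scores the whole sentence against intent labels can catch it.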

In addition to using MUM to respond to personal crises, Google says it also uses an older AI language model, BERT, to better identify searches looking for explicit content like pornography. Using BERT, Google says it has “reduced unexpected shocking results by 30% year-on-year.” However, the company couldn’t share absolute figures for how many “shocking results” its users encounter on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.
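The general pattern here is a pretrained encoder with a small classification head fine-tuned to score queries for explicit intent. The sketch below is an assumption about that pattern, not Google’s actual system: it uses the public bert-base-uncased checkpoint, and the two-way head is untrained, so its scores would be meaningless until fine-tuned on labeled queries.

```python
# Hypothetical sketch of a BERT-based query filter. Not Google's system:
# "bert-base-uncased" is a public checkpoint, and the classification head
# below is freshly initialized -- it would need fine-tuning on labeled
# queries (explicit intent vs. benign) before use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 0 = benign, 1 = explicit intent

def explicit_probability(query: str) -> float:
    """Score one query; after fine-tuning, higher means explicit intent."""
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# A production filter would run this before ranking and demote or suppress
# explicit results when SafeSearch-style filtering applies.
print(f"{explicit_probability('example query'):.3f}")
```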

Google is keen to share that AI is helping the company improve its search products, especially at a time when a narrative is building that “Google search is dying.” But integrating this technology comes with downsides, too.

Many AI experts warn that Google’s increasing use of machine-learning language models could surface new problems for the company, such as introducing bias and misinformation into search results. AI systems are also opaque, giving engineers limited insight into how they reach certain conclusions.

For example, when we asked Google how it vets in advance which search terms identified by MUM are associated with personal crises, its representatives were either unwilling or unable to answer. The company says it rigorously tests changes to its search products using human raters, but that’s not the same as knowing in advance how your AI system will respond to specific queries. For Google, though, such trade-offs are evidently worth it.
