He launched ThroughLine in December 2021. By early 2022, it had secured seed funding from investors including Blackbird New Zealand (the local arm of Australasia’s largest venture capital fund) and Sir Stephen Tindall’s K1W1.
ThroughLine is an intermediary.
The start-up doesn’t detect when users of an AI chatbot, search engine, gaming platform or dating app are in mental distress. That’s done by the platforms themselves (and despite increasing safeguards, it’s an area where they can still stumble badly).
Once a user in distress is identified, a platform like OpenAI’s ChatGPT, Anthropic’s Claude or Google’s Gemini can pop up information on where to find help, drawn from more than 1600 helplines in 178 countries collated by ThroughLine. The start-up has identified trusted sources of help across multiple languages and territories for those discussing suicide, self-harm, domestic abuse or other subjects indicating high risk.
The various platforms using ThroughLine’s service collectively serve its content to “tens of millions” of people per month, Taylor says.
Google: first on board, and expanding its efforts this month
ThroughLine’s key breakthrough came when Google signed on as a partner in 2023.
The tech giant flew Taylor to its offices in New York for an announcement that it would use ThroughLine for crisis help for Google Search, YouTube and Google Assistant users.
At the time, ThroughLine had just two staff. Its partnership with Google – which now extends to 50 countries – was central to helping the young company expand.
It also opened the floodgates, with the likes of ChatGPT maker OpenAI, Microsoft, Claude maker Anthropic, gaming platform Roblox, Discord and Match Group (owner of Tinder and Hinge) signing up.

Last year, ThroughLine also became one of only two New Zealand companies to join the Google AI First Accelerator programme (the other was Atomic Tesselator).
“Google has a long-standing relationship with ThroughLine. We partner with them to add relevant local helpline information to search and this is now expanding to Gemini,” a Google spokeswoman told the Herald earlier this week.
“We’re updating Gemini to streamline the path to support for those who need it. When a conversation might signal a user may need information about mental health, Gemini will surface a redesigned ‘Help is available’ module – developed with clinical experts – to provide more effective and immediate connections to care.
“When Gemini recognises a conversation that indicates a potential crisis related to suicide or self-harm, we’re introducing a new, simplified ‘one-touch’ interface that will provide an immediate connection to crisis hotline resources, enabling a user to chat, call, text or visit the crisis hotline website.
“Within this new interface, we design responses to encourage people to seek help. Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation.”
At the same time as its Gemini upgrades, Google is providing US$30 million ($50.72m) to various mental health hotlines around the world.
Taylor said his organisation would benefit indirectly from the new funding as it helped to build up the global ecosystem of helplines.
ThroughLine’s own AI
ThroughLine also benefited from Google’s AI accelerator programme, which has helped it develop its own AI agent, currently in prototype.
“It’s an AI care navigation experience developed by our clinical team,” Taylor says.
“It asks a person what they’re struggling with, so we can get context about their need and match them with the most relevant support.”
ThroughLine will also work with the Christchurch Call on radicalisation, and other expert groups in other areas, as it develops its AI.
Taylor says legislative developments around the world, including phase two of Australia’s online safety codes and California’s Senate Bill 243 (aka SB243), are helping to stoke demand for more sophisticated help services.
SB243, which came into force in January, regulates “companion chatbots” designed for emotional interaction, specifically requiring safety guardrails to protect minors, mandatory disclosure that users are interacting with an AI, and crisis protocols for self-harm.
‘Empathetic crisis response’
A spokeswoman for OpenAI said, “We work with ThroughLine for crisis helplines to connect people experiencing emotional distress, suicidal thoughts, or a mental health crisis with trained listeners.”
The AI giant is assessing a possible expansion of its relationship with the Kiwi firm, but says it’s too early to comment.
Anthropic pointed the Herald to a post on “Protecting the wellbeing of our users”, which details new features to identify when a Claude user might require professional support, and a new banner that helps direct them to that support.
“The resources that appear in this banner are provided by ThroughLine, a leader in online crisis support that maintains a verified global network of helplines and services across 170+ countries. This means, for example, that users can access the 988 Lifeline in the US and Canada, the Samaritans Helpline in the UK, or Life Link in Japan,” the post says.
“We’ve worked closely with ThroughLine to understand best practices for empathetic crisis response, and we’ve incorporated these into our product.”
Chris Keall is an Auckland-based member of the Herald’s business team. He joined the Herald in 2018 and is the technology editor and a senior business writer.