
As the world's longest-running underwater dolphin study, the Wild Dolphin Project has been analyzing generations of wild Atlantic spotted dolphins in the Bahamas since 1985. Now, aided by an A.I. model from Google (GOOGL), researchers at the nonprofit science organization are one step closer to deciphering the whistles, clicks and chirps these aquatic mammals use to communicate with one another.
DolphinGemma, built on the same technology behind Google’s Gemini models, will tap into the Wild Dolphin Project’s (WDP) extensive library of acoustic dolphin data to uncover potential patterns and rules in dolphin sounds that remain indecipherable to humans. “I’ve been waiting for this for 40 years,” said Denise Herzing, WDP’s founder and head of research, in a video posted by Google.
WDP has long sought evidence of a dolphin language by closely observing dolphin behavior. Over the years, researchers have managed to link various sounds to particular behavioral contexts. Mothers and calves, for example, use distinct signature whistles (effectively unique names) to call out to each other. Dolphins have also been observed squawking during fights and emitting buzzes while courting or chasing sharks through the ocean.
But uncovering the myriad meanings of dolphin communication remains a monumental task for humans. WDP researchers are hoping that DolphinGemma will pick up on previously unseen structures and sequences as it works through an archive of dolphin audio. The ultimate goal is for the A.I. system to predict subsequent sounds in a sequence of dolphin communication, similar to how other large language models (LLMs) predict the next word in a string of human text.
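The next-token idea the researchers are borrowing from LLMs can be sketched with a toy first-order Markov model over symbolic sound labels. The token names below are hypothetical stand-ins for categorized dolphin sounds; DolphinGemma itself learns from raw audio and is far more sophisticated than this simple frequency counter:

```python
from collections import Counter, defaultdict

# Hypothetical labels standing in for categorized dolphin sounds
# (signature whistles, clicks, buzzes) -- illustrative data only.
sequence = ["whistle_A", "click", "click", "buzz", "whistle_A", "click",
            "click", "whistle_B", "whistle_A", "click", "click", "buzz"]

# Count which sound follows each sound: the simplest possible
# form of next-token prediction.
transitions = defaultdict(Counter)
for current, nxt in zip(sequence, sequence[1:]):
    transitions[current][nxt] += 1

def predict_next(token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = transitions.get(token)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("whistle_A"))  # in this toy data, always "click"
```

A real model replaces the frequency table with a neural network trained on decades of recordings, letting it capture patterns far longer and subtler than one-step transitions.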
Optimized to run directly on researchers’ Google Pixel phones, DolphinGemma will hit the seas soon. It’s expected to be deployed by the nonprofit during this year’s field season.
Contributing new sounds to dolphin communication
Beyond identifying hidden patterns in dolphin audio, Google’s A.I. model could also help researchers generate entirely new sounds to communicate with the animals. That’s the goal of the Cetacean Hearing Augmentation Telemetry (CHAT) system—a collaborative project between WDP and the Georgia Institute of Technology. CHAT is an underwater computer capable of producing novel whistles linked to objects dolphins enjoy playing with, such as seagrass or scarves.
By chirping out these whistles while researchers pass the objects to each other, CHAT aims to teach dolphins to mimic them in order to request the sought-after items, thereby adding new sounds to their repertoire. DolphinGemma could potentially help CHAT identify these mimics more quickly, according to Google.
Although DolphinGemma was trained on audio of Atlantic spotted dolphins, the tech company believes its new model could have useful applications for other species, such as bottlenose or spinner dolphins. To that end, Google will share the LLM with other researchers as an open model this summer.
Despite having dedicated four decades to studying dolphins, Herzing noted in Google's video that language remains "the last barrier." "The goal would be to someday speak dolphin, and we're really trying to crack the code."