Sep 09, 2023

Earth Is Noisy. Why Should Its Data Be Silent?

Volcanic eruptions can engage all our senses. Dramatic scenes of lava flows and ash clouds, the sound and feel of seismic vibrations, the smell and taste of gas emissions and ash, the intensity of the heat—scientific instruments can measure and record the physical and chemical causes of these sensations and preserve them as numerical data. However, when scientists analyze data to look for patterns and anomalies, they turn most often to visual representations. Could our other senses tell us things that our eyes are missing?

The human auditory system sometimes outperforms vision in helping people detect subtle temporal patterns or tease out cause-and-effect relations.

Graphs, photos, maps, and videos are familiar and well-used tools for visual display. However, the human auditory system sometimes outperforms vision in helping people detect subtle temporal patterns or tease out cause-and-effect relations among multiple data streams. In research, new ways of examining data often lead to discoveries. Auditory display and sonification—the representation of data through sound—thus hold great potential for advancing science by helping scientists take fuller advantage of their creative and deductive capabilities.

Sonification has been used in limited ways in the past, such as through the well-known sounds of sonar displays and Geiger counters. The time is ripe for bringing this capability into wider use in research. Science education and outreach efforts can also leverage current cultural trends and technological developments that facilitate immersive multimedia experiences to make information accessible to broader, nontechnical audiences using sound [e.g., Holtzman et al., 2014]. What's more, sonification provides a framework for data to be perceived and evaluated by visually impaired scientists [e.g., Song and Beilharz, 2007], who potentially have more highly developed aural perception and awareness than the visually unimpaired.

Sonification involves data processing steps that are sometimes analogous and complementary to those used in machine learning methods [Holtzman et al., 2018], which can quickly reveal important features and useful workflows for exploring data sets [Barth et al., 2020]. Combined with model outputs, which may also be represented aurally, features identified through sonification of physical data can lead to new understandings of complex natural systems in the solid Earth and surface environment.

Direct sonification is one of the simplest forms of auditory display and can be readily applied to a diverse array of oscillatory data.

In geoscience, sonification has been used in seismology since the Cold War [Speeth, 1961]. Scientists recognized that the human ear could distinguish between bomb blasts and tectonic earthquakes simply by speeding up recordings of ground shaking into the range of human hearing (~20 hertz to 20 kilohertz). This direct sonification is one of the simplest forms of auditory display and can be readily applied (with appropriate preprocessing) to a diverse array of oscillatory data, such as those detailing planetary orbits, seismicity, infrasound, ice core or sedimentary records, and paleomagnetism.

Recent work has demonstrated that even without special training, humans can distinguish characteristics of seismic wave propagation through Earth from signatures of the earthquake source in auditory displays of seismograms, and this ability improves after training [Boschi et al., 2017]. Because the frequencies of interest in teleseismic earthquake data (0.0001–10 hertz) are far below the lower limit of the human hearing range, direct sonification requires that raw data be shifted to higher frequencies. This frequency shift—achieved by compressing the sampled observation times by a speed factor, which raises all frequencies in the signal by that factor—represents an aesthetic parameter that must be chosen, like the color or size of a symbol on a visual graph.
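
For readers who want to experiment, here is a minimal Python sketch of direct sonification, assuming a seismogram stored as a NumPy array sampled at 100 hertz. The synthetic input, file name, and amplitude scaling are illustrative stand-ins, not the exact processing used in the work described below.

```python
import numpy as np
from scipy.io import wavfile

def audify(trace, fs_data=100.0, speed_factor=200, path="quake.wav"):
    """Write a seismic trace as audio sped up by speed_factor.

    Playing data sampled at fs_data back at fs_data * speed_factor
    multiplies every frequency in the signal by speed_factor, so raw
    frequencies above ~20 Hz / speed_factor become audible.
    """
    peak = np.max(np.abs(trace))
    audio = np.int16(trace / (peak if peak > 0 else 1.0) * 0.9 * 32767)
    wavfile.write(path, int(fs_data * speed_factor), audio)

# Example with a synthetic 0.5 Hz decaying wavelet standing in for data:
t = np.arange(0, 600.0, 1 / 100.0)          # 10 minutes at 100 Hz
wavelet = np.exp(-t / 100.0) * np.sin(2 * np.pi * 0.5 * t)
audify(wavelet)  # 0.5 Hz in the data becomes an audible 100 Hz tone
```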

In sonifying catalogs of earthquakes or other oscillatory data-generating events, researchers can use multiple speed factors to stretch or compress individual events while accurately preserving the time sequencing of the catalog. Audio spatialization may additionally help distinguish sounds or represent spatial parameters, such as earthquake hypocenters, relative to a chosen observation location [Paté et al., 2022].
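
As a sketch of how catalog timing can be preserved while individual events are stretched or compressed, consider the Python fragment below. The catalog time scaling (one day of real time per second of audio) and the event tuples are hypothetical choices for illustration.

```python
import numpy as np

FS = 44100             # audio sample rate, Hz
DAY = 86400.0          # seconds in a day
CATALOG_SPEEDUP = DAY  # one day of catalog time plays in one second

def render_catalog(events, total_days):
    """Mix events into one track; onsets follow catalog times exactly.

    `events` holds (catalog_time_s, trace, fs_data, speed_factor)
    tuples, so each event can have its own speed factor without
    disturbing the time sequencing of the catalog.
    """
    out = np.zeros(int(total_days * DAY / CATALOG_SPEEDUP * FS))
    for t_cat, trace, fs_data, speed in events:
        # Resample the event to the audio rate at its own speed factor.
        n_out = int(trace.size / fs_data / speed * FS)
        snippet = np.interp(np.linspace(0, trace.size - 1, n_out),
                            np.arange(trace.size), trace)
        i = int(t_cat / CATALOG_SPEEDUP * FS)  # onset from catalog time
        n = min(snippet.size, out.size - i)    # clip at track end
        out[i : i + n] += snippet[:n]
    return out
```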

Sonification can also be applied to data sets that are not simply oscillatory, although additional choices beyond speed factors are required to represent other parameters. A popular approach is parameter-mapping sonification, in which data parameters are mapped to the parameters of sound. For example, the pitch of a continuous whistle from a tea kettle goes higher as the water boiling rate increases. The rising sound frequency indicates a rise in steam flux through the hole in the spout, a physical process that serves as a sonic proxy for boiling rate. Parameter-mapping sonification applies the same principle deliberately, transforming chosen data parameters into properties of sound. Other approaches may, for example, depict specific events like earthquakes with a distinct sound called an auditory icon. The auditory icon can be modulated on the basis of aspects of the data, such as using a higher pitch and less reverberation for lower-magnitude seismic events.
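
The following Python sketch shows parameter mapping at its simplest: a hypothetical event catalog in which each event's magnitude sets the pitch of a short tone (higher pitch for smaller events, as in the auditory icon example above). The tone design and the magnitude-to-pitch rule are our own illustrative choices.

```python
import numpy as np
from scipy.io import wavfile

FS = 44100  # audio sample rate, Hz

def tone(freq, dur=0.3, amp=0.5):
    """A pure sine with a decaying envelope: a stand-in auditory icon."""
    t = np.arange(int(FS * dur)) / FS
    return amp * np.exp(-6 * t / dur) * np.sin(2 * np.pi * freq * t)

# Hypothetical catalog: (time in seconds of audio, magnitude).
events = [(0.5, 2.1), (1.2, 3.4), (1.9, 5.0), (3.0, 2.8)]

out = np.zeros(FS * 4)
for t0, mag in events:
    freq = 880 * 2 ** (1 - mag)  # one octave lower per magnitude unit
    s = tone(freq)
    i = int(t0 * FS)
    out[i : i + s.size] += s

wavfile.write("catalog.wav", FS,
              np.int16(out / np.max(np.abs(out)) * 32767))
```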

Pairing sonification with animation so that the aural and visual systems can work together is another often useful approach.

As musicians well know, many parameters can help distinguish among sounds: Pitch, loudness, timbre, and harmonic and temporal complexity can all be manipulated and mapped to data. Approaches common in audio engineering and computer music, such as granular synthesis, also offer powerful tools for representing diverse and high-dimensional scientific data [Roads, 2004].

Pairing sonification with animation so that the aural and visual systems can work together is another often useful approach [Holtzman et al., 2014]. The resulting "data movies" typically include an audiovisual key that explains the auditory and visual algorithmic rules for generating the data representation. Animations incorporate visual data representation tools and allow inclusion of more data types as well as models that facilitate interpretation.

To demonstrate the range of sonification techniques and the data movie approach described above, we have worked with multiple data sets recording volcanic activity at Kīlauea Volcano in Hawaii.

Kīlauea, one of the most active volcanoes in the world, is fed by decompression melting of a mantle plume that also feeds other active volcanoes on the island of Hawai‘i (Figure 1). The summit vent of Kīlauea was active from 2008 to 2018, hosting a lava lake that provided an open window into the underlying magma system [Patrick et al., 2021].

This summit activity was accompanied by intermittent effusive (nonexplosive) eruptions along Kīlauea's East Rift Zone, such as the Kamoamoa eruption in 2011. The activity culminated in 2018 with an East Rift Zone eruption that produced roughly 1 cubic kilometer of magma, damaged nearby neighborhoods and infrastructure, and induced a months-long sequence of earthquake-generating caldera collapse events at the summit. Analysis of this decade-long eruption, as well as more than a century of preceding study, has made Kīlauea one of the world's best understood active volcanoes.

The general structure of Kīlauea's shallow magma system has been known for decades, although researchers are continually refining the picture and important questions remain unanswered.

The general structure of Kīlauea's shallow magma system has been known for decades, although researchers are continually refining the picture and important questions remain unanswered. Above a deep magma transport network rising from the underlying mantle plume, magma is inferred to be stored in a few locations: in a reservoir about 1–2 kilometers beneath the summit Halema‘uma‘u Crater vent, in another region 3–5 kilometers beneath the summit vent, and along dike-like structures extending laterally from the summit to the volcano's rift zones. During much of 2008–2018, a direct conduit existed between the shallower magma reservoir and the summit lava lake.

Although the spatial structure and temporal connectivity of subsurface magma at Kīlauea are not fully understood, we have incorporated available information in a series of conceptual sketches of the magma system. These sketches contextualize our focus on the uppermost plumbing of the volcano in a data movie that represents the evolution of the shallow summit magma system over two time windows during the recent eruptive period.

The composite data movie includes an introduction and aural key (with voice-over of text for accessibility) to introduce the sonification techniques and geologic context. The first time window illustrates decadal-scale dynamics of the Halema‘uma‘u Crater from 2008 to 2018, during which a wide range of activity occurred (Figure 2, left); the second zooms in on the summit caldera collapse sequence and lower East Rift Zone eruption in 2018 (Figure 2, right). For the first window, the 120-second duration of the data movie means that each second in the movie represents roughly 1 month of real time; for the zoomed-in 2018 collapse sequence between 11 May and 7 August, each second represents about 1 day. (For both cases we have also made 60-second versions of the movies to demonstrate how the time scaling changes the detail with which events can be examined.)

We chose three data sets to sonify for the 2008–2018 data movie. Near-summit earthquakes, from an island-wide seismic catalog, track the volcano's time-evolving stress state. A separate catalog [Crozier and Karlstrom, 2021] of small earthquakes associated with rockfalls from crater walls into the lava lake reflects very long period (VLP) seismicity, with dominant oscillation periods longer than 10 seconds. And radial ground deformation data gathered by near-summit Global Navigation Satellite System (GNSS) sensors track inflation and deflation of the ground surface [Patrick et al., 2021]. We sonified these three data sets using methods designed to represent qualitatively the physical processes that the different data sampled.

For near-summit earthquakes, we used a simple direct sonification of near-summit vertical ground motions from a station (NPT/NPB) in the U.S. Geological Survey's Hawaiian Volcano Observatory network. For each earthquake, we applied a speed factor of 150, meaning that seismic frequencies originally above about 0.13 hertz are audible in the movie. Differences in the magnitudes and durations of individual earthquakes are reflected in loudness and timbre. Combined with an animation of the hypocentral location (directly under each earthquake's epicenter), this approach permits clear identification of stress changes and patterns of fracturing inside the volcano through time. We further used left-right stereo panning (i.e., partitioning sound unevenly into different channels) to represent the longitudinal distance of each earthquake from the crater center.
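
A constant-power pan law is one simple way to implement such panning. The sketch below assumes a pan value in [-1, 1] already derived from an earthquake's east-west offset; the 50-kilometer full-pan distance is a made-up scaling, not the one used in our movie.

```python
import numpy as np

def pan_stereo(mono, pan):
    """Constant-power pan: pan = -1 is hard left, 0 center, +1 hard right."""
    theta = (pan + 1.0) * np.pi / 4.0        # map [-1, 1] onto [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.column_stack([left, right])    # shape (n_samples, 2)

# An event 30 km east of the crater center, with +-50 km as full pan:
pan = float(np.clip(30.0 / 50.0, -1.0, 1.0))
# stereo = pan_stereo(audified_trace, pan)   # audified trace from earlier
```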

As soon as the Halema‘uma‘u Crater opened in 2008, exposed crater walls began progressively collapsing into the churning lava lake.

VLP seismicity, on the other hand, tells a story about the evolving magma system beneath Kīlauea. As soon as the Halema‘uma‘u Crater opened in 2008, exposed crater walls began progressively collapsing into the churning lava lake, and by 2018 the lava lake diameter had grown by a factor of 4. Seismometers detected damped, resonant sloshing of magma in and out of the shallow (1- to 2-kilometer-deep) Halema‘uma‘u reservoir caused by large rocks falling onto the lava lake surface. The duration and frequencies (both the fundamental mode and overtones) of this remarkable resonance depend on the geometry of the magma system and on multiphase magma properties such as temperature and bubble content. Variations in resonant characteristics over time thus reflect changes within the magma system [Crozier and Karlstrom, 2022].

Although these VLP waveforms are often quite tonal (i.e., exhibiting few overtones), direct sonification leads to short sounds that do not represent the complexity of the real events well. To allow listeners to hear the temporal structure in the VLP seismicity, we used an approach in which we sonified the strongest spectral peaks in each seismogram by synthesizing pure sinusoidal tones—with frequencies between 200 and 500 hertz and sound durations of 1 second—that preserve the relative frequency spacing and temporal envelopes in the seismic data (Figure 3).
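
In code, the core of this resynthesis might look like the Python sketch below, which picks the strongest spectral peaks of a VLP seismogram, shifts the fundamental into the 200- to 500-hertz band while preserving frequency ratios, and applies a decaying envelope. The peak picking, 300-hertz base frequency, and envelope are simplified stand-ins for our actual processing.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 44100  # audio sample rate, Hz

def resynthesize(vlp, fs_data, base=300.0, dur=1.0, n_peaks=4):
    """Render a VLP event as a 1-second cluster of pure tones."""
    # 1. Find the strongest spectral peaks of the event.
    spec = np.abs(np.fft.rfft(vlp))
    freqs = np.fft.rfftfreq(vlp.size, 1.0 / fs_data)
    idx, _ = find_peaks(spec, height=0.1 * spec.max())
    idx = idx[np.argsort(spec[idx])[::-1][:n_peaks]]

    # 2. Shift the fundamental to `base` hertz, preserving the relative
    #    spacing (frequency ratios) of the overtones.
    tones = base * freqs[idx] / freqs[idx].min()

    # 3. Sum amplitude-weighted sines under a decaying envelope that
    #    stands in for the temporal envelope of the seismic data.
    t = np.arange(int(FS * dur)) / FS
    amps = spec[idx] / spec[idx].max()
    cluster = sum(a * np.sin(2 * np.pi * f * t)
                  for f, a in zip(tones, amps))
    return np.exp(-4 * t / dur) * cluster / np.max(np.abs(cluster))
```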

Animation again provides a key aid in interpreting the sound: We associate each VLP event with a splash of color within a sketch of the summit lava lake and shallow plumbing system. The color scale corresponds to the period of the VLP fundamental mode, with cooler colors representing shorter periods. Because the period and decay rate of VLP seismicity track magma temperature and volatile contents within the shallow reservoir and conduit [Crozier and Karlstrom, 2022], this animation provides a tool for examining how internal dynamics of the volcano likely evolved through the eruption.

Finally, we sonified geodetic (radial GNSS) data collected around Kīlauea's summit that track magma buildup below the summit. This magma helped drive lava lake dynamics and supply the climactic eruptive sequence in 2018, and the GNSS data capture deformation occurring over larger areas and longer timescales than the earthquake data do. We chose a sonification method that represents the relatively slow increases or decreases in radial deformation by adding or removing notes, respectively, from a chord of synthesized tones. We built this chord using notes of Lydian mode (a seven-note musical scale) spanning three octaves. The gradual swelling of the volcano leading up to the sudden collapse in 2018 is thus represented aurally with steadily increasing tone density and frequencies. Small variations in deformation embedded in the long-term inflationary trend are represented by scaling the loudness of the chord through time with these short-term, detrended fluctuations. Visually, the deformation is represented as a circle of varying radius located at the inferred centroid of the Halema‘uma‘u reservoir where magma was accumulating.
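
The chord-building logic can be sketched in a few lines of Python. The root note and the linear mapping from deformation to note count below are illustrative assumptions rather than the tuning we used.

```python
import numpy as np

ROOT = 110.0                      # an assumed root note (A2), in hertz
LYDIAN = [0, 2, 4, 6, 7, 9, 11]   # semitone steps of the Lydian mode
# Three octaves of the scale, as frequencies from low to high.
SCALE = [ROOT * 2 ** ((12 * octave + step) / 12.0)
         for octave in range(3) for step in LYDIAN]

def chord_freqs(deformation, d_min, d_max):
    """Frequencies sounding for a given radial deformation value.

    Inflation adds notes from the bottom of the scale upward; deflation
    removes them, so the chord thickens and brightens as the volcano
    swells toward collapse.
    """
    frac = np.clip((deformation - d_min) / (d_max - d_min), 0.0, 1.0)
    n_notes = max(1, int(round(frac * len(SCALE))))
    return SCALE[:n_notes]
```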

For the 2018 caldera collapse sequence, we sonified only geodetic and earthquake catalog data, allowing a more focused examination of cyclic behavior during the climactic eruptive episode.

For the 2018 caldera collapse sequence, we sonified only geodetic and earthquake catalog data, allowing a more focused examination of cyclic behavior during the climactic eruptive episode. In this sequence, 62 earthquakes of about magnitude 5, occurring roughly daily, accompanied steplike drops in the caldera floor recorded by a near-summit tiltmeter (a different type of geodetic measurement than GNSS). Hundreds of smaller earthquakes occurred between these collapse events, with interevent times decreasing as stress increased approaching the next large drop.

We used the chord sweep approach developed by Barth et al. [2020] to sonify the tiltmeter data, resulting in continuous tone clusters that rise and fall on a symmetric octatonic musical scale with the caldera collapse. Direct sonification of earthquakes, using a speed factor of 280 (original frequencies above about 0.07 hertz are audible), permits clear differentiation among different event magnitudes, and left-right stereo panning relative to the center of the caldera provides a spatial sense of caldera collapse. We sonified roughly 16,000 earthquakes with magnitudes greater than 1.5 [Shelly and Thelen, 2019] recorded at a station (PUHI) away from the summit to avoid signal clipping. For the accompanying visualization, we animated and colored the hypocenters and depths of earthquakes atop an image of regional topography and under a timeline of ground tilt. This visual approach illustrates the dramatic caldera collapse at the end of the 2018 sequence using topographic data collected afterward.
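
As a rough illustration of the chord sweep idea from Barth et al. [2020], the Python fragment below slides a small cluster of notes up and down a three-octave octatonic scale as the tilt signal rises and falls. The cluster width and root note are our assumptions here, not parameters from that study.

```python
import numpy as np

ROOT = 220.0                           # assumed root note (A3), in hertz
OCTATONIC = [0, 2, 3, 5, 6, 8, 9, 11]  # alternating whole and half steps
DEGREES = [12 * octave + step
           for octave in range(3) for step in OCTATONIC]

def cluster_freqs(tilt, t_min, t_max, width=3):
    """Map a tilt sample to `width` adjacent tones on the octatonic scale."""
    frac = np.clip((tilt - t_min) / (t_max - t_min), 0.0, 1.0)
    top = width + int(round(frac * (len(DEGREES) - width)))
    return [ROOT * 2 ** (d / 12.0) for d in DEGREES[top - width : top]]

# Sweeping `tilt` downward through each collapse cycle walks the cluster
# down the scale; stepwise caldera drops become stepwise falls in pitch.
```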

Well-constructed data movies can be viewed on multiple levels. More so than standard, static plots of time series data, data movies excite curiosity even in viewers with no scientific background or training. If an audiovisual key is provided, visual patterns and sounds promote rapid assessments of causality and spatial structure. One can also view and listen to data movies purely as aesthetic creations independent of the underlying science. Indeed, our approaches to sonifying data for the Kīlauea data movie mirror data-driven techniques used in computer music composition, and volcanic unrest episodes naturally involve compelling musical elements of tension and release.

Data movies hold layers of meaning that can emerge with multiple viewings and listenings and with increased scientific knowledge.

Beneath the aesthetic appeal, however, like any good technical graph, data movies hold layers of meaning that can emerge with multiple viewings and listenings and with increased scientific knowledge. For example, if you watch our movie a few times (or perhaps even only once, if you’re very perceptive), you may notice shifts in VLP period that covary with patterns of other earthquakes around the volcano and ground inflation between 2008 and 2018. You may also notice the remarkable sequences of small foreshock earthquakes that preceded larger-magnitude events in 2018 and were spatially localized around multiple evolving fault structures that hosted the large-scale caldera collapse.

Some of these patterns have already been addressed in the peer-reviewed literature. But others have yet to be studied or explained. So what do you hear? Do clear patterns of deformation, VLP seismicity, or earthquakes seem to precede the 2018 eruptive sequence or other eruptive events? Do patterns of earthquakes and deformation change throughout the 2018 eruptive sequence?

Sonification as a tool for representing Earth science data is in its infancy. We hope that the application presented here, further details of which can be found at the Volcano Listening Project, inspires others to experiment with listening to their data. We look forward to seeing—and hearing—the results.

We thank Adam Roszkiewicz for mixing the composite sonifications and Katie Mulliken for constructive review. A.B. and B.H. were supported by NSF-CISE grant 1663893. B.H. was supported by a Meierjurgen Fellowship at the University of Oregon to collaborate with L.K. and by a Columbia University Collaboratory grant for his course "Sonic and Visual Representation of Data," in which methods used here were developed (available through https://seismicsoundlab.github.io). L.K. was supported by NSF CAREER grant 1848554. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. government.

Barth, A., et al. (2020), Sonification and animation of multivariate data to illuminate dynamics of geyser eruptions, Comput. Music J., 44(1), 35–50, https://doi.org/10.1162/comj_a_00551.

Boschi, L., et al. (2017), On the perception of audified seismic data, J. Acoust. Soc. Am., 141(5), 3,899, https://doi.org/10.1121/1.4988767.

Crozier, J., and L. Karlstrom (2021), Wavelet‐based characterization of very‐long‐period seismicity reveals temporal evolution of shallow magma system over the 2008–2018 eruption of Kīlauea Volcano, J. Geophys. Res. Solid Earth, 126(6), e2020JB020837, https://doi.org/10.1029/2020JB020837.

Crozier, J., and L. Karlstrom (2022), Evolving magma temperature and volatile contents over the 2008–2018 summit eruption of Kīlauea Volcano, Sci. Adv., 8(22), eabm4310, https://doi.org/10.1126/sciadv.abm4310.

Holtzman, B., et al. (2014), Seismic sound lab: Sights, sounds and perception of the Earth as an acoustic space, in International Symposium on Computer Music Multidisciplinary Research, pp. 161–174, Springer, Cham, Switzerland, https://doi.org/10.1007/978-3-319-12976-1_10.

Holtzman, B. K., et al. (2018), Machine learning reveals cyclic changes in seismic source spectra in Geysers geothermal field, Sci. Adv., 4(5), eaao2929, https://doi.org/10.1126/sciadv.aao2929.

Paté, A., et al. (2022), Combining audio and visual displays to highlight temporal and spatial seismic patterns, J. Multimodal User Interfaces, 16(1), 125–142, https://doi.org/10.1007/s12193-021-00378-8.

Patrick, M., et al. (2021), Kīlauea's 2008–2018 summit lava lake—Chronology and eruption insights, US Geol. Surv. Prof. Pap., 1867(A), https://doi.org/10.3133/pp1867A.

Roads, C. (2004), Microsound, MIT Press, Cambridge, Mass.

Shelly, D. R., and W. A. Thelen (2019), Anatomy of a caldera collapse: Kīlauea 2018 summit seismicity sequence in high resolution, Geophys. Res. Lett., 46(24), 14,395–14,403, https://doi.org/10.1029/2019GL085636.

Song, H. J., and K. Beilharz (2007), Concurrent auditory stream discrimination in auditory graphing, Int. J. Comput., 1(3), 79–87.

Speeth, S. D. (1961), Seismometer sounds, J. Acoust. Soc. Am., 33(7), 909–916, https://doi.org/10.1121/1.1908843.

Wilding, J. D., et al. (2022), The magmatic web beneath Hawai‘i, Science, 379, 462–468, https://doi.org/10.1126/science.ade5755.

Leif Karlstrom ([email protected]), University of Oregon, Eugene; Ben Holtzman, Columbia University, New York, N.Y.; Anna Barth, University of California, Berkeley; Josh Crozier, California Volcano Observatory, U.S. Geological Survey, Menlo Park; and Arthur Paté, Junia/Institut Supérieur de l’Électronique et du Numérique, Lille, France

Citation: Text © 2023. The authors. CC BY-NC-ND 3.0