Signal processing
OpenSoundscape provides a set of signal processing tools developed in-house. In this notebook, we compare two of these tools on a published bioacoustics dataset.
RIBBIT (Repeat-Interval Based Bioacoustic Identification Tool) detects vocalizations with a repeating structure, such as the periodic calls of frogs, toads, and other animals. RIBBIT is also available as an R package.
Published here: Automated detection of frog calls and choruses by pulse repetition rate
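As a rough sketch of the interface (the file name, frequency band, and pulse-rate values below are placeholders, not tuned parameters), RIBBIT scores a spectrogram in fixed-length windows given a frequency band and a range of plausible pulse repetition rates:

# minimal sketch of the ribbit() interface; the file name, frequency band,
# and pulse-rate range are placeholders, not tuned for any species
from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram
from opensoundscape.ribbit import ribbit

spec = Spectrogram.from_audio(Audio.from_file('some_recording.mp3'))
scores = ribbit(
    spec,
    signal_band=[1500, 2500],    # frequency band (Hz) containing the pulses
    pulse_rate_range=[10, 20],   # plausible pulse repetition rates (pulses/sec)
    clip_duration=2.0,           # score the recording in 2-second windows
)
# scores is a per-window table; higher scores indicate stronger periodic energy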
Continuous wavelet transform
The continuous wavelet transform (CWT) tool detects sequences of pulses, such as the accelerating wingbeat drumming of the Ruffed Grouse (Bonasa umbellus).
Published here: Automated recognition of ruffed grouse drumming in field recordings
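A minimal sketch of that tool's interface is below; the file name is hypothetical, and we assume the detect_peak_sequence_cwt() call from the OpenSoundscape API with its default parameters, which target low-frequency, accelerating pulse trains such as grouse drumming:

# minimal sketch of detect_peak_sequence_cwt(); the file name is a placeholder
from opensoundscape.audio import Audio
from opensoundscape.signal_processing import detect_peak_sequence_cwt

audio = Audio.from_file('some_recording.mp3')
detections = detect_peak_sequence_cwt(audio, plot=False)
# detections is a table of candidate pulse sequences found in the recording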
Run this tutorial
This tutorial is more than a reference! It’s a Jupyter Notebook which you can run and modify on Google Colab or your own computer.
| Link to tutorial | How to run tutorial |
|---|---|
| Open the tutorial in Google Colab | The link opens the tutorial in Google Colab. Uncomment the “installation” line in the first cell to install OpenSoundscape. |
| Download the notebook file | The link downloads the tutorial file to your computer. Follow the Jupyter installation instructions, then open the tutorial file in Jupyter. |
[1]:
# if this is a Google Colab notebook, install opensoundscape in the runtime environment
if 'google.colab' in str(get_ipython()):
%pip install opensoundscape
Setup
Import required packages
[2]:
# Data handling and plotting
import pandas as pd
pd.set_option('mode.chained_assignment', None)
from pathlib import Path
from sklearn.metrics import precision_score, recall_score
# figures
import IPython.display as ipd
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = [10,3] # spectrogram / figure size - adjust for your display
%config InlineBackend.figure_format = 'retina'
# Audio handling
from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram
# Signal processing
from opensoundscape.ribbit import ribbit
from opensoundscape.signal_processing import detect_peak_sequence_cwt
# ignore deprecation warnings
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
Download data
We choose one particular bird species, the Northern Flicker (Colaptes auratus), whose song is well suited to analysis by these signal processing methods.
Run the cells below to download the sample clips for this analysis; the same clips are also provided in the data folder of the demo repository demos-for-opso.
[3]:
!mkdir -p signal_processing_examples
!cd signal_processing_examples && curl 'https://raw.githubusercontent.com/kitzeslab/demos-for-opso/main/resources/04/nofl_keek_labels_df.csv' -sLo 'nofl_keek_labels_df.csv'
!cd signal_processing_examples && curl 'https://raw.githubusercontent.com/kitzeslab/demos-for-opso/main/resources/04/nofl_keek_df.csv' -sLo 'nofl_keek_df.csv'
!mkdir -p signal_processing_examples/clips
!cd signal_processing_examples/clips && curl 'https://github.com/kitzeslab/demos-for-opso/raw/main/resources/04/clips/Recording_1_Segment_23.mp3' -sLo 'Recording_1_Segment_23.mp3'
!cd signal_processing_examples/clips && curl 'https://github.com/kitzeslab/demos-for-opso/raw/main/resources/04/clips/Recording_1_Segment_29.mp3' -sLo 'Recording_1_Segment_29.mp3'
!cd signal_processing_examples/clips && curl 'https://github.com/kitzeslab/demos-for-opso/raw/main/resources/04/clips/Recording_2_Segment_09.mp3' -sLo 'Recording_2_Segment_09.mp3'
!cd signal_processing_examples/clips && curl 'https://github.com/kitzeslab/demos-for-opso/raw/main/resources/04/clips/Recording_4_Segment_21.mp3' -sLo 'Recording_4_Segment_21.mp3'
!cd signal_processing_examples/clips && curl 'https://github.com/kitzeslab/demos-for-opso/raw/main/resources/04/clips/XC645833%20-%20Northern%20Flicker%20-%20Colaptes%20auratus.mp3' -sLo 'XC645833 - Northern Flicker - Colaptes auratus.mp3'
[4]:
# set the data path
data_path = './signal_processing_examples/'
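With the data path set, the label tables downloaded above can be read with pandas. This is just a sketch of how to inspect them; the column layout depends on the CSV files themselves:

# sketch: inspect the downloaded label tables (column layout depends on the CSVs)
labels_df = pd.read_csv(data_path + 'nofl_keek_labels_df.csv')
keek_df = pd.read_csv(data_path + 'nofl_keek_df.csv')
labels_df.head()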
Let us look at a particular song type for this species, called the ‘keek’, which consists of a repeating sequence of shrill calls. To get a good sense of the song, we use a clip downloaded from xeno-canto, a website for sharing crowd-sourced recordings of wildlife sounds from around the world.
The particular clip, XC645833 - Northern Flicker - Colaptes auratus.mp3, was recorded by Ted Floyd (XC645833) and is accessible at www.xeno-canto.org/645833.
[5]:
# load audio file and display it
nofl_keek_audio_xc = Audio.from_file(data_path + 'clips/XC645833 - Northern Flicker - Colaptes auratus.mp3')
Spectrogram.from_audio(nofl_keek_audio_xc).bandpass(0,10000).plot()
nofl_keek_audio_xc #can also show this widget with .show_widget()

[5]: