Audio and Spectrograms

Annotations

functions and classes for manipulating annotations of audio

includes BoxedAnnotations class and utilities to combine or “diff” annotations, etc.

class opensoundscape.annotations.BoxedAnnotations(df, audio_file=None)

container for “boxed” (frequency-time) annotations of audio

(for instance, annotations created in Raven software). Includes functionality to load annotations from Raven .txt files, output one-hot labels for specific clip lengths or clip start/end times, apply corrections/conversions to annotations, and more.

Contains some analogous functions to Audio and Spectrogram, such as trim() [limit time range] and bandpass() [limit frequency range]

bandpass(low_f, high_f, edge_mode='trim')

Bandpass a set of annotations, analogous to Spectrogram.bandpass()

Out-of-place operation: does not modify itself, returns new object

Parameters:
  • low_f – low frequency (Hz) bound
  • high_f – high frequency (Hz) bound
  • edge_mode – what to do when boxes overlap with the edges of the bandpassed frequency range
    • ‘trim’: trim boxes to bounds
    • ‘keep’: allow boxes to extend beyond bounds
    • ‘remove’: completely remove boxes that extend beyond bounds
Returns:

a copy of the BoxedAnnotations object on the bandpassed region

convert_labels(conversion_table)

modify annotations according to a conversion table

Changes the values of ‘annotation’ column of dataframe. Any labels that do not have specified conversions are left unchanged.

Returns a new BoxedAnnotations object rather than modifying itself (out-of-place operation), so typical usage looks like: my_annotations = my_annotations.convert_labels(table)

Parameters:conversion_table – mapping of current values to new values. Can be either a pd.DataFrame with 2 columns [current value, new value] or a dictionary {current value: new value}
Returns:new BoxedAnnotations object with converted annotation labels
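For example, a conversion table can be given as a dictionary. This is a minimal sketch; the label names and the existing my_annotations object are hypothetical:

```
conversion_table = {"sparrow_song": "SPAR", "sparrow_call": "SPAR"}  # hypothetical labels
my_annotations = my_annotations.convert_labels(conversion_table)
```
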
classmethod from_raven_file(path, annotation_column, keep_extra_columns=True, audio_file=None)

load annotations from Raven txt file

Parameters:
  • path – location of raven .txt file, str or pathlib.Path
  • annotation_column – (str) column containing annotations
  • keep_extra_columns – keep or discard extra Raven file columns (always keeps start_time, end_time, low_f, high_f, annotation, audio_file). [default: True]
    • True: keep all
    • False: keep none
    • or an iterable of specific columns to keep
  • audio_file – optionally specify the name or path of a corresponding audio file.
Returns:

BoxedAnnotations object containing annotations from the Raven file
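A minimal usage sketch, assuming a Raven selection table with a “Species” annotation column (file names and column name are hypothetical):

```
from opensoundscape.annotations import BoxedAnnotations

annotations = BoxedAnnotations.from_raven_file(
    "recording.Table.1.selections.txt",  # hypothetical Raven selection table
    annotation_column="Species",         # column containing the labels
    audio_file="recording.wav",          # optional corresponding audio file
)
```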

global_one_hot_labels(classes)

get a dictionary of one-hot labels for the entire duration

Parameters:classes – iterable of class names to give 0/1 labels

Returns:list of 0/1 labels for each class
one_hot_clip_labels(full_duration, clip_duration, clip_overlap, classes, min_label_overlap, min_label_fraction=1, final_clip=None)

Generate one-hot labels for clips of fixed duration

wraps helpers.generate_clip_times_df() with self.one_hot_labels_like()
  • Clips are created in the same way as Audio.split()
  • Labels are applied based on overlap, using self.one_hot_labels_like()

Parameters:
  • full_duration – The amount of time (seconds) to split into clips
  • clip_duration (float) – The duration in seconds of the clips
  • clip_overlap (float) – The overlap of the clips in seconds [default: 0]
  • classes – list of classes for one-hot labels. If None, classes will be all unique values of self.df[‘annotation’]
  • min_label_overlap – minimum duration (seconds) of annotation within the time interval for it to count as a label. Note that any annotation of length less than this value will be discarded. We recommend a value of 0.25 for typical bird songs, or shorter values for very short-duration events such as chip calls or nocturnal flight calls.
  • min_label_fraction – [default: None] if >= this fraction of an annotation overlaps with the time window, it counts as a label regardless of its duration. Note that if either of the two criteria (overlap and fraction) is met, the label is 1. If None (default), this criterion is not used (i.e., only min_label_overlap is used). A value of 0.5 for this parameter would ensure that all annotations result in at least one clip being labeled 1 (if there are no gaps between clips).
  • final_clip (str) –

    Behavior if final_clip is less than clip_duration seconds long. By default, discards remaining time if less than clip_duration seconds long [default: None]. Options:

    • None: Discard the remainder (do not make a clip)
    • ”extend”: Extend the final clip beyond full_duration to reach clip_duration length
    • ”remainder”: Use only remainder of full_duration (final clip will be shorter than clip_duration)
    • ”full”: Increase overlap with previous clip to yield a clip with clip_duration length
Returns:

dataframe with index [‘start_time’,’end_time’] and columns=classes
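For instance, to generate labels for 5-second clips of a 60-second recording (continuing the hypothetical annotations object loaded above):

```
labels_df = annotations.one_hot_clip_labels(
    full_duration=60,          # seconds of audio to split into clips
    clip_duration=5,
    clip_overlap=0,
    classes=["SPAR", "WARB"],  # hypothetical class names
    min_label_overlap=0.25,    # seconds of overlap required to count as a label
)
```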

one_hot_labels_like(clip_df, classes, min_label_overlap, min_label_fraction=None, keep_index=False)

create a dataframe of one-hot clip labels based on given starts/ends

Uses start and end clip times from clip_df to define a set of clips, then extracts the annotations that overlap with each clip. Overlap requirements are set by the user: an annotation must satisfy the minimum time overlap OR the minimum fractional overlap to be included (only one of the two conditions needs to be met).

clip_df can be created using opensoundscape.helpers.generate_clip_times_df

Parameters:
  • clip_df – dataframe with ‘start_time’ and ‘end_time’ columns specifying the temporal bounds of each clip
  • min_label_overlap – minimum duration (seconds) of annotation within the time interval for it to count as a label. Note that any annotation of length less than this value will be discarded. We recommend a value of 0.25 for typical bird songs, or shorter values for very short-duration events such as chip calls or nocturnal flight calls.
  • min_label_fraction – [default: None] if >= this fraction of an annotation overlaps with the time window, it counts as a label regardless of its duration. Note that if either of the two criteria (overlap and fraction) is met, the label is 1. If None (default), this criterion is not used (i.e., only min_label_overlap is used). A value of 0.5 for this parameter would ensure that all annotations result in at least one clip being labeled 1 (if there are no gaps between clips).
  • classes – list of classes for one-hot labels. If None, classes will be all unique values of self.df[‘annotation’]
  • keep_index – if True, keeps the index of clip_df as an index in the returned DataFrame. [default:False]
Returns:

DataFrame of one-hot labels (multi-index of (start_time, end_time), columns for each class, values 0=absent or 1=present)

subset(classes)

subset annotations to those from a list of classes

out-of-place operation (returns new filtered BoxedAnnotations object)

Parameters:
  • classes – list of classes to retain (all others are discarded); the list can include np.nan or None if you want to keep them
Returns:

new BoxedAnnotations object containing only annotations in classes

to_raven_file(path)

save annotations to a Raven-compatible tab-separated text file

Parameters:path – path for saved text file (extension must be “.tsv”); can be str or pathlib.Path
Outcomes:
creates a file containing the annotations in a format compatible with Raven Pro/Lite.

Note: Raven Lite does not support additional columns beyond a single annotation column. Additional columns will not be shown in the Raven Lite interface.

trim(start_time, end_time, edge_mode='trim')

Trim a set of annotations, analogous to Audio/Spectrogram.trim()

Out-of-place operation: does not modify itself, returns new object

Parameters:
  • start_time – time (seconds) since beginning for left bound
  • end_time – time (seconds) since beginning for right bound
  • edge_mode – what to do when boxes overlap with the edges of the trim region
    • ‘trim’: trim boxes to bounds
    • ‘keep’: allow boxes to extend beyond bounds
    • ‘remove’: completely remove boxes that extend beyond bounds
Returns:

a copy of the BoxedAnnotations object on the trimmed region. Note that, like Audio.trim(), the new object has a new reference point for 0.0 seconds (located at start_time in the original object)

unique_labels()

get list of all unique (non-Falsy) labels

opensoundscape.annotations.categorical_to_one_hot(labels, classes=None)

transform multi-target categorical labels (list of lists) to one-hot array

Parameters:
  • labels – list of lists of categorical labels, eg [[‘white’,’red’],[‘green’,’white’]] or [[0,1,2],[3]]
  • classes=None – list of classes for one-hot labels. if None, taken to be the unique set of values in labels
Returns:

  • one_hot: 2d array with 0 for absent and 1 for present
  • classes: list of classes corresponding to columns in the array
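A short sketch using the example labels from the docstring, assuming the function returns both the array and the class list as described above:

```
from opensoundscape.annotations import categorical_to_one_hot

labels = [["white", "red"], ["green", "white"]]
one_hot, classes = categorical_to_one_hot(labels)
# one_hot has one row per sample and one 0/1 column per class; classes gives the column order
```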

opensoundscape.annotations.combine(list_of_annotation_objects)

combine annotations with user-specified preferences. Not Implemented.

opensoundscape.annotations.diff(base_annotations, comparison_annotations)

look at differences between two BoxedAnnotations objects. Not Implemented.

Compare different labels of the same boxes (Assumes that a second annotator used the same boxes as the first, but applied new labels to the boxes)

opensoundscape.annotations.one_hot_labels_on_time_interval(df, classes, start_time, end_time, min_label_overlap, min_label_fraction=None)

generate a dictionary of one-hot labels for given time-interval

Each class is labeled 1 if any annotation overlaps sufficiently with the time interval. Otherwise the class is labeled 0.

Parameters:
  • df – DataFrame with columns ‘start_time’, ‘end_time’ and ‘annotation’
  • classes – list of classes for one-hot labels. If None, classes will be all unique values of df[‘annotation’]
  • start_time – beginning of time interval (seconds)
  • end_time – end of time interval (seconds)
  • min_label_overlap – minimum duration (seconds) of annotation within the time interval for it to count as a label. Note that any annotation of length less than this value will be discarded. We recommend a value of 0.25 for typical bird songs, or shorter values for very short-duration events such as chip calls or nocturnal flight calls.
  • min_label_fraction – [default: None] if >= this fraction of an annotation overlaps with the time window, it counts as a label regardless of its duration. Note that if either of the two criteria (overlap and fraction) is met, the label is 1. If None (default), the criterion is not used (only min_label_overlap is used). A value of 0.5 would ensure that all annotations result in at least one clip being labeled 1 (if no gaps between clips).
Returns:

dictionary of {class: label 0/1} for all classes

opensoundscape.annotations.one_hot_to_categorical(one_hot, classes)

transform one_hot labels to multi-target categorical (list of lists)

Parameters:
  • one_hot – 2d array with 0 for absent and 1 for present
  • classes – list of classes corresponding to columns in the array
Returns:

labels: list of lists of categorical labels for each sample, eg [[‘white’,’red’],[‘green’,’white’]] or [[0,1,2],[3]]

Audio

audio.py: Utilities for loading and modifying Audio objects

Note: Out-of-place operations

Functions that modify Audio (and Spectrogram) objects are “out of place”, meaning that they return a new Audio object instead of modifying the original object. This means that running a line ` audio_object.resample(22050) # WRONG! ` will not change the sample rate of audio_object! If your goal was to overwrite audio_object with the new, resampled audio, you would instead write ` audio_object = audio_object.resample(22050) `

class opensoundscape.audio.Audio(samples, sample_rate, resample_type='kaiser_fast', metadata=None)

Container for audio samples

Initialization requires sample array. To load audio file, use Audio.from_file()

Initializing an Audio object directly requires the specification of the sample rate. Use Audio.from_file or Audio.from_bytesio with sample_rate=None to use a native sampling rate.

Parameters:
  • samples (np.array) – The audio samples
  • sample_rate (integer) – The sampling rate for the audio samples
  • resample_type (str) – The resampling method to use [default: “kaiser_fast”]
Returns:

An initialized Audio object

bandpass(low_f, high_f, order)

Bandpass audio signal with a butterworth filter

Uses a phase-preserving algorithm (scipy.signal’s butter and sosfiltfilt)

Parameters:
  • low_f – low frequency cutoff (-3 dB) in Hz of bandpass filter
  • high_f – high frequency cutoff (-3 dB) in Hz of bandpass filter
  • order – butterworth filter order (integer) ~= steepness of cutoff
duration()

Return duration of Audio

Returns:The duration of the Audio
Return type:duration (float)
extend(length)

Extend audio file by adding silence to the end

Parameters:length – the final duration in seconds of the extended audio object
Returns:a new Audio object of the desired duration
classmethod from_bytesio(bytesio, sample_rate=None, resample_type='kaiser_fast')

Read from bytesio object

Read an Audio object from a BytesIO object. This is primarily used for passing Audio over HTTP.

Parameters:
  • bytesio – Contents of WAV file as BytesIO
  • sample_rate – The final sampling rate of Audio object [default: None]
  • resample_type – The librosa method to do resampling [default: “kaiser_fast”]
Returns:

An initialized Audio object

classmethod from_file(path, sample_rate=None, resample_type='kaiser_fast', metadata=True, offset=None, duration=None, start_timestamp=None, out_of_bounds_mode='warn')

Load audio from files

Deals with the various possible input types to load an audio file. Also attempts to load metadata using tinytag.

Audio objects only support mono (one-channel) at this time. Files with multiple channels are mixed down to a single channel.

Optionally, load only a piece of a file using offset and duration. This will efficiently read sections of a .wav file regardless of where the desired clip is in the audio. For mp3 files, access time grows linearly with time since the beginning of the file.

This function relies on librosa.load(), which supports wav natively but requires ffmpeg for mp3 support.

Parameters:
  • path (str, Path) – path to an audio file
  • sample_rate (int, None) – resample audio with value and resample_type, if None use source sample_rate (default: None)
  • resample_type – method used to resample_type (default: kaiser_fast)
  • metadata (bool) – if True, attempts to load metadata from the audio file. If an exception occurs, self.metadata will be None. Otherwise self.metadata is a dictionary. Note: will also attempt to parse AudioMoth metadata from the comment field, if the artist field includes AudioMoth. The parsing function for AudioMoth is likely to break when new firmware versions change the comment metadata field.
  • offset – load audio starting at this time (seconds) after the start of the file. Defaults to 0 seconds. - cannot specify both offset and start_timestamp
  • duration – load audio of this duration (seconds) starting at offset. If None, loads all the way to the end of the file.
  • start_timestamp – load audio starting at this localized datetime.datetime timestamp
    • cannot specify both offset and start_timestamp
    • will only work if loading metadata results in a localized datetime object for the ‘recording_start_time’ key
    • will raise AudioOutOfBoundsError if the requested time period is not fully contained within the audio file

    Example of creating a localized timestamp: ` import pytz; from datetime import datetime; local_timestamp = datetime(2020,12,25,23,59,59); local_timezone = pytz.timezone('US/Eastern'); timestamp = local_timezone.localize(local_timestamp) `

  • out_of_bounds_mode
    • ‘warn’: generate a warning [default]
    • ’raise’: raise an AudioOutOfBoundsError
    • ’ignore’: return any available audio with no warning/error
Returns:

samples, sample_rate, resample_type, metadata (dict or None)

Return type:

Audio object with attributes

Note: default sample_rate=None means use file’s sample rate, does not resample
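A minimal sketch of loading a clip from a longer file (the path is hypothetical):

```
from opensoundscape.audio import Audio

# load 10 seconds of audio starting 60 seconds into the file, resampled to 22050 Hz
audio = Audio.from_file("field_recording.wav", sample_rate=22050, offset=60, duration=10)
print(audio.duration(), audio.sample_rate)
```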

loop(length=None, n=None)

Extend audio file by looping it

Parameters:
  • length – the final length in seconds of the looped file (cannot be used with n)[default: None]
  • n – the number of occurrences of the original audio sample (cannot be used with length) [default: None]. For example, n=1 returns the original sample, and n=2 returns two concatenated copies of the original sample
Returns:

a new Audio object of the desired length or repetitions

resample(sample_rate, resample_type=None)

Resample Audio object

Parameters:
  • sample_rate (scalar) – the new sample rate
  • resample_type (str) – resampling algorithm to use [default: None (uses self.resample_type of instance)]
Returns:

a new Audio object of the desired sample rate

save(path, write_metadata=True)

Save Audio to file

NOTE: currently, only saving to .wav format supported

Supports writing the following metadata fields: [“title”,”copyright”,”software”,”artist”,”comment”,”date”, “album”,”license”,”tracknumber”,”genre”]

Parameters:
  • path – destination for output
  • write_metadata – if True, uses soundfile.SoundFile to add metadata to the file after writing it. If False, written file will not contain metadata.
spectrum()

Create frequency spectrum from an Audio object using fft

Returns:fft, frequencies
split(clip_duration, clip_overlap=0, final_clip=None)

Split Audio into equal-length clips

The Audio object is split into clips of a specified duration and overlap

Parameters:
  • clip_duration (float) – The duration in seconds of the clips
  • clip_overlap (float) – The overlap of the clips in seconds [default: 0]
  • final_clip (str) –

    Behavior if final_clip is less than clip_duration seconds long. By default, discards remaining audio if less than clip_duration seconds long [default: None]. Options:

    • None: Discard the remainder (do not make a clip)
    • ”extend”: Extend the final clip with silence to reach clip_duration length
    • ”remainder”: Use only remainder of Audio (final clip will be shorter than clip_duration)
    • ”full”: Increase overlap with previous clip to yield a clip with clip_duration length
Returns:

  • audio_clips: list of audio objects
  • dataframe with columns for start_time and end_time of each clip
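A sketch of splitting a loaded Audio object, assuming audio is an Audio object loaded as shown above and that split() returns the clip list and the clip-times dataframe described above:

```
# 5-second clips with 1 second of overlap between consecutive clips
clips, clip_df = audio.split(clip_duration=5, clip_overlap=1, final_clip=None)
```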

split_and_save(destination, prefix, clip_duration, clip_overlap=0, final_clip=None, dry_run=False)

Split audio into clips and save them to a folder

Parameters:
  • destination – A folder to write clips to
  • prefix – A name to prepend to the written clips
  • clip_duration – The duration of each clip in seconds
  • clip_overlap – The overlap of each clip in seconds [default: 0]
  • final_clip (str) –

    Behavior if final_clip is less than clip_duration seconds long. [default: None] By default, ignores the final clip entirely. Possible options (any other input will ignore the final clip entirely):

    • ”remainder”: Include the remainder of the Audio (clip will not have clip_duration length)
    • ”full”: Increase the overlap to yield a clip with clip_duration length
    • ”extend”: Similar to remainder but extend (repeat) the clip to reach clip_duration length
    • None: Discard the remainder
  • dry_run (bool) – If True, skip writing audio and just return clip DataFrame [default: False]
Returns:

pandas.DataFrame containing paths and start and end times for each clip

time_to_sample(time)

Given a time, convert it to the corresponding sample

Parameters:time – The time to multiply with the sample_rate
Returns:The rounded sample
Return type:sample
trim(start_time, end_time)

Trim Audio object in time

If start_time is less than zero, the output starts from time 0. If end_time is beyond the end of the sample, trims to the end of the sample.

Parameters:
  • start_time – time in seconds for start of extracted clip
  • end_time – time in seconds for end of extracted clip
Returns:

a new Audio object containing samples from start_time to end_time

Warning: metadata is lost during this operation

exception opensoundscape.audio.AudioOutOfBoundsError

Custom exception indicating the user tried to load audio outside of the time period that exists in the audio object

exception opensoundscape.audio.OpsoLoadAudioInputError

Custom exception indicating we can’t load input

opensoundscape.audio.load_channels_as_audio(path, sample_rate=None, resample_type='kaiser_fast', offset=0, duration=None)

Load each channel of an audio file to a separate Audio object

Provides a way to access individual channels, since Audio.from_file mixes down to mono by default

Parameters:see Audio.from_file()
Returns:list of Audio objects (one per channel)

AudioMoth

Utilities specifically for audio files recorded by AudioMoths

opensoundscape.audiomoth.audiomoth_start_time(file, filename_timezone='UTC', to_utc=False)

parse audiomoth file name into a time stamp

AudioMoths create their file name based on the time that recording starts. This function parses the name into a timestamp. Older AudioMoth firmwares used a hexadecimal unix time format, while newer firmwares use a human-readable naming convention. This function handles both conventions.

Parameters:
  • file – (str) path or file name from AudioMoth recording
  • filename_timezone – (str) name of a pytz time zone (for options see pytz.all_timezones). This is the time zone that the AudioMoth uses to record its name, not the time zone local to the recording site. Usually, this is ‘UTC’ because the AudioMoth records file names in UTC.
  • to_utc – if True, converts timestamps to UTC localized time stamp. Otherwise, will return timestamp localized to timezone argument [default: False]
Returns:

localized datetime object - if to_utc=True, datetime is always “localized” to UTC
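A quick sketch with a hypothetical AudioMoth file name (newer, human-readable naming convention):

```
from opensoundscape.audiomoth import audiomoth_start_time

start = audiomoth_start_time("20210606_080000.WAV", filename_timezone="UTC", to_utc=True)
```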

opensoundscape.audiomoth.parse_audiomoth_metadata(metadata)

parse a dictionary of AudioMoth .wav file metadata

  • parses the comment field
  • adds keys for gain_setting, battery_state, recording_start_time
  • if available (firmware >=1.4.0), adds temperature

Notes on the comment field:
  • Starting with firmware 1.4.0, the AudioMoth logs temperature to the metadata (wav header), eg “and temperature was 11.2C.”
  • At some point the firmware shifted from writing “gain setting 2” to “medium gain setting” and later to just “medium gain”. Should handle these phrasings.
  • In later versions of the firmware, “state” was omitted from “battery state” in the comment field. Handles either phrasing.

Tested for AudioMoth firmware versions: 1.5.0 through 1.8.1
Parameters:metadata – dictionary with audiomoth metadata
Returns:metadata dictionary with added keys and values

Audio Tools

audio_tools.py: set of tools that filter or modify audio files or sample arrays (not Audio objects)

opensoundscape.audio_tools.bandpass_filter(signal, low_f, high_f, sample_rate, order=9)

perform a butterworth bandpass filter on a discrete time signal using scipy.signal’s butter and sosfiltfilt (phase-preserving version of sosfilt)

Parameters:
  • signal – discrete time signal (audio samples, list of float)
  • low_f – low-frequency cutoff (-3 dB point) of the bandpass filter (Hz)
  • high_f – high-frequency cutoff (-3 dB point) of the bandpass filter (Hz)
  • sample_rate – samples per second (Hz)
  • order=9 – higher values -> steeper dropoff
Returns:

filtered time signal
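A minimal sketch on a synthetic signal:

```
import numpy as np
from opensoundscape.audio_tools import bandpass_filter

sample_rate = 22050
t = np.linspace(0, 1, sample_rate, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)  # 1 second of a 440 Hz tone
filtered = bandpass_filter(signal, low_f=300, high_f=2000, sample_rate=sample_rate, order=9)
```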

opensoundscape.audio_tools.butter_bandpass(low_f, high_f, sample_rate, order=9)

generate coefficients for bandpass_filter()

Parameters:
  • low_f – low frequency of butterworth bandpass filter
  • high_f – high frequency of butterworth bandpass filter
  • sample_rate – audio sample rate
  • order=9 – order of butterworth filter
Returns:

set of coefficients used in sosfiltfilt()

opensoundscape.audio_tools.clipping_detector(samples, threshold=0.6)

count the number of samples above a threshold value

Parameters:
  • samples – a time series of float values
  • threshold=0.6 – minimum value of sample to count as clipping
Returns:

number of samples exceeding threshold

opensoundscape.audio_tools.convolve_file(in_file, out_file, ir_file, input_gain=1.0)

apply an impulse_response to a file using ffmpeg’s afir convolution

ir_file is an audio file containing a short burst of noise recorded in a space whose acoustics are to be recreated

this makes the file ‘sound as if’ it were recorded in the location where the impulse response (ir_file) was recorded

Parameters:
  • in_file – path to an audio file to process
  • out_file – path to save output to
  • ir_file – path to impulse response file
  • input_gain=1.0 – ratio for in_file sound’s amplitude in (0,1)
Returns:

os response of ffmpeg command

opensoundscape.audio_tools.mixdown_with_delays(files_to_mix, destination, delays=None, levels=None, duration='first', verbose=0, create_txt_file=False)

use ffmpeg to mixdown a set of audio files, each starting at a specified time (padding beginnings with zeros)

Parameters:
  • files_to_mix – list of audio file paths
  • destination – path to save mixdown to
  • delays=None – list of delays (how many seconds of zero-padding to add at beginning of each file)
  • levels=None – optionally provide a list of relative levels (amplitudes) for each input
  • duration='first' – ffmpeg option for duration of output file: match duration of ‘longest’,’shortest’,or ‘first’ input file
  • verbose=0 – if >0, prints ffmpeg command and doesn’t suppress ffmpeg output (command line output is returned from this function)
  • create_txt_file=False – if True, also creates a second output file which lists all files that were included in the mixdown
Returns:

ffmpeg command line output

opensoundscape.audio_tools.silence_filter(filename, smoothing_factor=10, window_len_samples=256, overlap_len_samples=128, threshold=None)

Identify whether a file is silent (0) or not (1)

Load samples from an mp3 file and identify whether or not it is likely to be silent. Silence is determined by finding the energy in windowed regions of these samples, and normalizing the detected energy by the average energy level in the recording.

If any windowed region has energy above the threshold, returns 1; otherwise returns 0.

Parameters:
  • filename (str) – file to inspect
  • smoothing_factor (int) – modifier to window_len_samples
  • window_len_samples – number of samples per window segment
  • overlap_len_samples – number of samples to overlap each window segment
  • threshold – threshold value (experimentally determined)
Returns:

0 if file contains no significant energy over background; 1 if file contains significant energy over background

If threshold is None: returns net_energy over background noise

opensoundscape.audio_tools.window_energy(samples, window_len_samples=256, overlap_len_samples=128)

Calculate audio energy with a sliding window

Calculate the energy in an array of audio samples

Parameters:
  • samples (np.ndarray) – array of audio samples loaded using librosa.load
  • window_len_samples – samples per window
  • overlap_len_samples – number of samples shared between consecutive windows
Returns:

list of energy level (float) for each window

Spectrogram

spectrogram.py: Utilities for dealing with spectrograms

class opensoundscape.spectrogram.MelSpectrogram(spectrogram, frequencies, times, decibel_limits, window_samples=None, overlap_samples=None, window_type=None, audio_sample_rate=None, scaling=None)

Immutable mel-spectrogram container

A mel spectrogram is a spectrogram with pseudo-logarithmically spaced frequency bins (see literature) rather than linearly spaced bins.

See the Spectrogram class and Librosa’s melspectrogram for detailed documentation.

NOTE: Here we rely on scipy’s spectrogram function (via Spectrogram) rather than on librosa’s _spectrogram or melspectrogram, because the amplitudes of librosa’s spectrograms do not match expectations. We only use the mel frequency bank from Librosa.

classmethod from_audio(audio, n_mels=64, window_samples=512, overlap_samples=256, decibel_limits=(-100, -20), htk=False, norm='slaney', window_type='hann', dB_scale=True, scaling='spectrum')

Create a MelSpectrogram object from an Audio object

First creates a spectrogram and a mel-frequency filter bank, then computes the dot product of the filter bank with the spectrogram.

The kwargs for the mel frequency bank are documented at: - https://librosa.org/doc/latest/generated/librosa.feature.melspectrogram.html#librosa.feature.melspectrogram - https://librosa.org/doc/latest/generated/librosa.filters.mel.html?librosa.filters.mel

Parameters:
  • n_mels – Number of mel bands to generate [default: 64] Note: n_mels should be chosen for compatibility with the Spectrogram parameter window_samples. Choosing a value > ~ window_samples/10 will result in zero-valued rows while small values blend rows from the original spectrogram.
  • window_type – The windowing function to use [default: “hann”]
  • window_samples – n samples per window [default: 512]
  • overlap_samples – n samples shared by consecutive windows [default: 256]
  • htk – use HTK mel-filter bank instead of Slaney, see Librosa docs [default: False]
  • norm='slaney' – mel filter bank normalization, see Librosa docs
  • dB_scale=True – If True, rescales values to decibels, x=10*log10(x) - if dB_scale is False, decibel_limits is ignored
  • scaling="spectrum" – see scipy.signal.spectrogram docs for description of scaling parameter
Returns:

opensoundscape.spectrogram.MelSpectrogram object
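A minimal sketch (audio path is hypothetical):

```
from opensoundscape.audio import Audio
from opensoundscape.spectrogram import MelSpectrogram

audio = Audio.from_file("field_recording.wav")
mel_spec = MelSpectrogram.from_audio(audio, n_mels=64, window_samples=512, overlap_samples=256)
mel_spec.plot()
```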

plot(inline=True, fname=None, show_colorbar=False)

Plot the mel spectrogram with matplotlib.pyplot

We can’t use pcolormesh because it will smash pixels to achieve a linear y-axis, rather than preserving the mel scale.

Parameters:
  • inline=True
  • fname=None – specify a string path to save the plot to (ending in .png/.pdf)
  • show_colorbar – include image legend colorbar from pyplot
class opensoundscape.spectrogram.Spectrogram(spectrogram, frequencies, times, decibel_limits, window_samples=None, overlap_samples=None, window_type=None, audio_sample_rate=None, scaling=None)

Immutable spectrogram container

Can be initialized directly from spectrogram, frequency, and time values or created from an Audio object using the .from_audio() method.

frequencies

(list) discrete frequency bins generated by fft

times

(list) time from beginning of file to the center of each window

spectrogram

a 2d array containing 10*log10(fft) for each time window

decibel_limits

minimum and maximum decibel values in .spectrogram

window_samples

number of samples per window when spec was created [default: none]

overlap_samples

number of samples overlapped in consecutive windows when spec was created [default: none]

window_type

window fn used to make spectrogram, eg ‘hann’ [default: none]

audio_sample_rate

sample rate of audio from which spec was created [default: none]

scaling

Selects between computing the power spectral density (‘density’) where Sxx has units of V**2/Hz and computing the power spectrum (‘spectrum’) where Sxx has units of V**2, if x is measured in V and fs is measured in Hz. [default: spectrum]

amplitude(freq_range=None)

create an amplitude vs time signal from spectrogram

by summing pixels in the vertical dimension

Parameters:freq_range=None – sum the Spectrogram only in this range of [low, high] frequencies in Hz (if None, all frequencies are summed)
Returns:a time-series array of the vertical sum of spectrogram values
bandpass(min_f, max_f, out_of_bounds_ok=True)

extract a frequency band from a spectrogram

crops the 2-d array of the spectrograms to the desired frequency range

Parameters:
  • min_f – low frequency in Hz for bandpass
  • max_f – high frequency in Hz for bandpass
  • out_of_bounds_ok – (bool) if False, raises ValueError if min_f or max_f are not within the range of the original spectrogram’s frequencies [default: True]
Returns:

bandpassed spectrogram object

duration()

calculate the amount of time represented in the spectrogram

Note: time may be shorter than the duration of the audio from which the spectrogram was created, because the windows may align in a way such that some samples from the end of the original audio were discarded

classmethod from_audio(audio, window_type='hann', window_samples=None, window_length_sec=None, overlap_samples=None, overlap_fraction=None, fft_size=None, decibel_limits=(-100, -20), dB_scale=True, scaling='spectrum')

create a Spectrogram object from an Audio object

Parameters:
  • window_type="hann" – see scipy.signal.spectrogram docs for description of window parameter
  • window_samples – number of audio samples per spectrogram window (pixel) - Defaults to 512 if window_samples and window_length_sec are None - Note: cannot specify both window_samples and window_length_sec
  • window_length_sec – length of a single window in seconds
    • Note: cannot specify both window_samples and window_length_sec
    • Warning: specifying this parameter often results in less efficient spectrogram computation because window_samples will not be a power of 2.
  • overlap_samples – number of samples shared by consecutive windows - Note: must not specify both overlap_samples and overlap_fraction
  • overlap_fraction – fractional temporal overlap between consecutive windows - Defaults to 0.5 if overlap_samples and overlap_fraction are None - Note: cannot specify both overlap_samples and overlap_fraction
  • fft_size – see scipy.signal.spectrogram’s nfft parameter
  • decibel_limits – limit the dB values to (min,max) (lower values set to min, higher values set to max)
  • dB_scale – If True, rescales values to decibels, x=10*log10(x) - if dB_scale is False, decibel_limits is ignored
  • scaling="spectrum" – see scipy.signal.spectrogram docs for description of scaling parameter
Returns:

opensoundscape.spectrogram.Spectrogram object
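A minimal sketch showing spectrogram creation and chained out-of-place operations (audio path is hypothetical):

```
from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram

audio = Audio.from_file("field_recording.wav")
spec = Spectrogram.from_audio(audio, window_samples=512, overlap_fraction=0.5)
spec = spec.bandpass(1000, 8000).trim(0, 5)  # out-of-place: each call returns a new object
```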

limit_db_range(min_db=-100, max_db=-20)

Limit the decibel values of the spectrogram to range from min_db to max_db

values less than min_db are set to min_db; values greater than max_db are set to max_db

similar to Audacity’s gain and range parameters

Parameters:
  • min_db – values lower than this are set to this
  • max_db – values higher than this are set to this
Returns:

Spectrogram object with db range applied

linear_scale(feature_range=(0, 1))

Linearly rescale spectrogram values to a new range of values, using the spectrogram’s decibel_limits as the input range

Parameters:feature_range – tuple of (low,high) values for output
Returns:Spectrogram object with values rescaled to feature_range
min_max_scale(feature_range=(0, 1))

Linearly rescale spectrogram values to a new range of values, using the spectrogram’s minimum and maximum values as the input range

Parameters:feature_range – tuple of (low,high) values for output
Returns:Spectrogram object with values rescaled to feature_range
net_amplitude(signal_band, reject_bands=None)

create amplitude signal in signal_band and subtract amplitude from reject_bands

rescale the signal and reject bands by dividing by their bandwidths in Hz (the amplitude of each reject_band is divided by the total bandwidth of all reject_bands; the amplitude of signal_band is divided by the bandwidth of signal_band)

Parameters:
  • signal_band – [low,high] frequency range in Hz (positive contribution)
  • reject_bands – list of [low,high] frequency ranges in Hz (negative contribution)

Returns:time-series array of net amplitude

plot(inline=True, fname=None, show_colorbar=False)

Plot the spectrogram with matplotlib.pyplot

Parameters:
  • inline=True
  • fname=None – specify a string path to save the plot to (ending in .png/.pdf)
  • show_colorbar – include image legend colorbar from pyplot
to_image(shape=None, channels=1, colormap=None, invert=False, return_type='pil')

Create an image from spectrogram (array, tensor, or PIL.Image)

Linearly rescales values in the spectrogram from self.decibel_limits to [0,255] (PIL.Image) or [0,1] (array/tensor)

Default of self.decibel_limits on load is [-100, -20], so, e.g., -20 db is loudest -> black, -100 db is quietest -> white

Parameters:
  • shape – tuple of output dimensions as (height, width) - if None, retains original shape of self.spectrogram
  • channels – eg 3 for rgb, 1 for greyscale - must be 3 to use colormap
  • colormap – if None, greyscale spectrogram is generated Can be any matplotlib colormap name such as ‘jet’
  • return_type – type of returned object - ‘pil’: PIL.Image - ‘np’: numpy.ndarray - ‘torch’: torch.tensor
Returns:

  • PIL.Image with c channels and shape w,h given by shape
    and values in [0,255]
  • np.ndarray with shape [c,h,w] and values in [0,1]
  • or torch.tensor with shape [c,h,w] and values in [0,1]

Return type:

Image/array with type depending on return_type
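Continuing the sketch above, the spectrogram can be converted to a fixed-size greyscale image suitable as CNN input:

```
img = spec.to_image(shape=(224, 224), channels=1, return_type="pil")  # PIL.Image, values in [0,255]
```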

trim(start_time, end_time)

extract a time segment from a spectrogram

Parameters:
  • start_time – in seconds
  • end_time – in seconds
Returns:

spectrogram object from extracted time segment

window_length()

calculate the length of a single fft window, in seconds

window_start_times()

get start times of each window, rather than midpoint times

window_step()

calculate time difference (sec) between consecutive windows’ centers

Machine Learning

Convolutional Neural Networks

classes for pytorch machine learning models in opensoundscape

For tutorials, see notebooks on opensoundscape.org

class opensoundscape.torch.models.cnn.CNN(architecture, classes, sample_duration, single_target=False, preprocessor_class=<class 'opensoundscape.preprocess.preprocessors.SpectrogramPreprocessor'>, sample_shape=[224, 224, 3])

Generic CNN Model with .train(), .predict(), and .save()

flexible architecture, optimizer, loss function, parameters

for tutorials and examples see opensoundscape.org

Parameters:
  • architecture – EITHER a pytorch model object (subclass of torch.nn.Module), for example one generated with the cnn_architectures module, OR a string matching one of the architectures listed by cnn_architectures.list_architectures(), eg ‘resnet18’. If a string is provided, uses default parameters (including use_pretrained=True). Note: For resnet architectures, if num_channels != 3, averages the conv1 weights across all channels.
  • classes – list of class names. Must match with training dataset classes if training.
  • single_target
    • True: model expects exactly one positive class per sample
    • False: samples can have any number of positive classes

    [default: False]
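A minimal sketch of creating a model (class names are hypothetical):

```
from opensoundscape.torch.models.cnn import CNN

# multi-target model with a resnet18 backbone, operating on 5-second samples
model = CNN(architecture="resnet18", classes=["SPAR", "WARB"], sample_duration=5.0)
```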

eval(targets, scores, logging_offset=0)

compute single-target or multi-target metrics from targets and scores

By default, the overall model score is “map” (mean average precision) for multi-target models (self.single_target=False) and “f1” (average of f1 score across classes) for single-target models.

Override this function to use a different set of metrics. It should always return (1) a single score (float) used as an overall metric of model quality and (2) a dictionary of computed metrics

Parameters:
  • targets – 0/1 for each sample and each class
  • scores – continuous values in 0/1 for each sample and class
  • logging_offset – modify verbosity - for example, -1 will reduce the amount of printing/logging by 1 level
load_weights(path, strict=True)

load network weights state dict from a file

For instance, load weights saved with .save_weights(). This is an in-place operation.

Parameters:
  • path – file path with saved weights
  • strict – (bool) see torch.load()
predict(samples, batch_size=1, num_workers=0, activation_layer=None, binary_preds=None, threshold=None, split_files_into_clips=True, overlap_fraction=0, final_clip=None, bypass_augmentations=True, unsafe_samples_log=None)

Generate predictions on a dataset

Choose to return any combination of scores, labels, and single-target or multi-target binary predictions. Also choose activation layer for scores (softmax, sigmoid, softmax then logit, or None). Binary predictions are performed post-activation layer

Note: the order of returned dataframes is (scores, preds, labels)

Parameters:
  • samples – the files to generate predictions for. Can be: - a dataframe with index containing audio paths, OR - a list (or np.ndarray) of audio file paths
  • batch_size – Number of files to load simultaneously [default: 1]
  • num_workers – parallelization (ie cpus or cores), use 0 for current process [default: 0]
  • activation_layer – Optionally apply an activation layer such as sigmoid or softmax to the raw outputs of the model. options: - None: no activation, return raw scores (ie logit, [-inf:inf]) - ‘softmax’: scores all classes sum to 1 - ‘sigmoid’: all scores in [0,1] but don’t sum to 1 - ‘softmax_and_logit’: applies softmax first then logit [default: None]
  • binary_preds – Optionally return binary (thresholded 0/1) predictions options: - ‘single_target’: max scoring class = 1, others = 0 - ‘multi_target’: scores above threshold = 1, others = 0 - None: do not create or return binary predictions [default: None] Note: if you choose multi-target, you must specify threshold
  • threshold – prediction threshold(s) for post-activation layer scores. Only relevant when binary_preds == ‘multi_target’ If activation layer is sigmoid, choose value in [0,1] If activation layer is None or softmax_and_logit, in [-inf,inf]
  • overlap_fraction – fraction of overlap between consecutive clips when predicting on clips of longer audio files. For instance, 0.5 gives 50% overlap between consecutive clips.
  • final_clip – see opensoundscape.helpers.generate_clip_times_df
  • bypass_augmentations – If False, Actions with is_augmentation==True are performed. Default True.
  • unsafe_samples_log – if not None, samples that failed to preprocess will be listed in this text file.
Returns:

  • scores: df of post-activation_layer scores
  • predictions: df of 0/1 preds for each class
  • unsafe_samples: list of samples that failed to preprocess

Note: if loading an audio file raises a PreprocessingError, the scores
and predictions for that sample will be np.nan

Note: if no return type is selected for binary_preds, returns None instead of a DataFrame for predictions
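A sketch of prediction on a list of files, assuming predict() returns the three values described in the Returns section above (paths are hypothetical):

```
scores, predictions, unsafe_samples = model.predict(
    ["recording1.wav", "recording2.wav"],
    activation_layer="sigmoid",
    binary_preds="multi_target",
    threshold=0.5,
)
```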

save_weights(path)

save just the weights of the network

This allows the saved weights to be used more flexibly than model.save() which will pickle the entire object. The weights are saved in a pickled dictionary using torch.save(self.network.state_dict())

Parameters:path – location to save weights file
train(train_df, validation_df=None, epochs=1, batch_size=1, num_workers=0, save_path='.', save_interval=1, log_interval=10, validation_interval=1, unsafe_samples_log='./unsafe_training_samples.log')

train the model on samples from train_dataset

If customized loss functions, networks, optimizers, or schedulers are desired, modify the respective attributes before calling .train().

Parameters:
  • train_df – a dataframe of files and labels for training the model
  • validation_df – a dataframe of files and labels for evaluating the model [default: None means no validation is performed]
  • epochs – number of epochs to train for (1 epoch constitutes 1 view of each training sample)
  • batch_size – number of training files simultaneously passed through forward pass, loss function, and backpropagation
  • num_workers – number of parallel CPU tasks for preprocessing Note: use 0 for single (root) process (not 1)
  • save_path – location to save intermediate and best model objects [default=”.”, ie current location of script]
  • save_interval – interval in epochs to save model object with weights [default:1] Note: the best model is always saved to best.model in addition to other saved epochs.
  • log_interval – interval in batches to print training loss/metrics
  • validation_interval – interval in epochs to test the model on the validation set. Note that the model will only update its best score and save the best.model file on epochs in which it performs validation.
  • unsafe_samples_log – file path: log all samples that failed in preprocessing (file written when training completes) - if None, does not write a file
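A short training sketch; the label dataframe and file paths are hypothetical:

```
import pandas as pd

# one-hot labels indexed by audio file path, one column per class
train_df = pd.DataFrame(
    {"SPAR": [1, 0], "WARB": [0, 1]},
    index=["train1.wav", "train2.wav"],
)
model.train(train_df, epochs=5, batch_size=8, num_workers=0, save_path="./model_training/")
```
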
class opensoundscape.torch.models.cnn.InceptionV3(classes, sample_duration, single_target=False, preprocessor_class=<class 'opensoundscape.preprocess.preprocessors.SpectrogramPreprocessor'>, freeze_feature_extractor=False, use_pretrained=True, sample_shape=[299, 299, 3])
opensoundscape.torch.models.cnn.load_model(path, device=None)

load a saved model object

Parameters:
  • path – file path of saved model
  • device – which device to load into, eg ‘cuda:1’
  • [default – None] will choose first gpu if available, otherwise cpu
Returns:

a model object with loaded weights

opensoundscape.torch.models.cnn.load_outdated_model(path, architecture, sample_duration, model_class=<class 'opensoundscape.torch.models.cnn.CNN'>, device=None)

load a CNN saved with a previous version of OpenSoundscape

This function enables you to load models saved with opso 0.4.x and 0.5.x. If your model was saved with .save() in a previous version of OpenSoundscape >=0.6.0, you must re-load the model using the original package version and save its network’s state dict, i.e., torch.save(model.network.state_dict(), path), then load the state dict to a new model object with model.load_weights(). See the Predict with pre-trained CNN tutorial for details.

For models created with the same version of OpenSoundscape as the one you are using, simply use opensoundscape.torch.models.cnn.load_model().

Note: for future use of the loaded model, you can simply call model.save(path) after creating it, then reload it with model = load_model(path). The saved model will be fully compatible with opensoundscape >=0.7.0.

Examples:

```
# load a binary resnet18 model from opso 0.4.x, 0.5.x, or 0.6.0
from opensoundscape.torch.models.cnn import CNN, load_outdated_model
model = load_outdated_model('old_model.tar', architecture='resnet18', sample_duration=5.0)  # sample_duration (seconds) is required; 5.0 is an example value

# load a resnet50 model of class CNN created with opso 0.5.0
model_050 = load_outdated_model('opso050_pytorch_model_r50.model', architecture='resnet50', sample_duration=5.0)
```

Parameters:
  • path – path to model file, ie .model or .tar file
  • architecture – see CNN docs (pass None if the class __init__ does not take architecture as an argument)
  • sample_duration – length of samples in seconds
  • model_class – class to construct. Normally CNN.
  • device – optionally specify a device to map tensors onto, eg ‘cpu’, ‘cuda:0’, ‘cuda:1’[default: None] - if None, will choose cuda:0 if cuda is available, otherwise chooses cpu
Returns:

a cnn model object with the weights loaded from the saved model

opensoundscape.torch.models.cnn.separate_resnet_feat_clf(model)

Separate feature/classifier training params for a ResNet model

Parameters:model – an opso model object with a pytorch resnet architecture
Returns:
model with modified .optimizer_params and ._init_optimizer() method
Effects:
creates a new self.opt_net object that replaces the old one; resets self.current_epoch to 0
opensoundscape.torch.models.cnn.use_resample_loss(model)

Modify a model to use ResampleLoss for multi-target training

ResampleLoss may perform better than BCE Loss for multitarget problems in some scenarios.

class opensoundscape.torch.models.utils.BaseModule

Base class for a pytorch model pipeline class.

All child classes should define load, save, etc

opensoundscape.torch.models.utils.apply_activation_layer(x, activation_layer=None)

applies an activation layer to a set of scores

Parameters:
  • x – input values
  • activation_layer
    • None [default]: return original values
    • ’softmax’: apply softmax activation
    • ’sigmoid’: apply sigmoid activation
    • ’softmax_and_logit’: apply softmax then logit transform
Returns:

values with activation layer applied

opensoundscape.torch.models.utils.cas_dataloader(dataset, batch_size, num_workers)

Return a dataloader that uses the class aware sampler

Class aware sampler tries to balance the examples per class in each batch. It selects just a few classes to be present in each batch, then samples those classes for even representation in the batch.

Parameters:
  • dataset – a pytorch dataset type object
  • batch_size – see DataLoader
  • num_workers – see DataLoader
opensoundscape.torch.models.utils.collate_lists_of_audio_clips(batch)

Collate function for splitting + prediction of long audio files

Puts each data field into a tensor with outer dimension batch size

Additionally, concats the dfs from each audio file into one long df for the entire batch

opensoundscape.torch.models.utils.get_batch(array, batch_size, batch_number)

get a single slice of a larger array

using the batch size and batch index, from zero

Parameters:
  • array – iterable to split into batches
  • batch_size – num elements per batch
  • batch_number – index of batch
Returns:

one batch (subset of array)

Note: the final elements are returned as the last batch even if there are fewer than batch_size

Example

if array=[1,2,3,4,5,6,7] then:

  • get_batch(array,3,0) returns [1,2,3]
  • get_batch(array,3,3) returns [7]
opensoundscape.torch.models.utils.tensor_binary_predictions(scores, mode, threshold=None)

generate binary 0/1 predictions from continuous scores

Does not transform scores: compares scores directly to threshold.

Parameters:
  • scores – torch.Tensor of dim (batch_size, n_classes) with input scores [-inf:inf]
  • mode – ‘single_target’, ‘multi_target’, or None (return empty tensor)
  • threshold – minimum score to predict 1, if mode==’multi_target’. Can be a single value for all classes or a list of class-specific values.
Returns:

torch.Tensor of 0/1 predictions in same shape as scores

Note: expects real-valued (unbounded) input scores, i.e. scores take values in [-inf, inf]. Sigmoid layer is applied before multi-target prediction, so the threshold should be in [0,1].
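A small sketch in single-target mode, where the highest-scoring class in each row is predicted:

```
import torch
from opensoundscape.torch.models.utils import tensor_binary_predictions

scores = torch.tensor([[0.2, 2.1, -0.5], [1.3, -1.0, 0.4]])
preds = tensor_binary_predictions(scores, mode="single_target")
# each row of preds has a single 1 in the position of that row's maximum score
```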

Module to initialize PyTorch CNN architectures with custom output shape

This module allows the use of several built-in CNN architectures from PyTorch. The architecture refers to the specific layers and layer input/output shapes (including convolution sizes and strides, etc) - such as the ResNet18 or Inception V3 architecture.

We provide wrappers which modify the output layer to the desired shape (to match the number of classes). The way to change the output layer shape depends on the architecture, which is why we need a wrapper for each one. This code is based on pytorch.org/tutorials/beginner/finetuning_torchvision_models_tutorial.html

To use these wrappers, for example, if your model has 10 output classes, write

my_arch=resnet18(10)

Then you can initialize a model object from opensoundscape.torch.models.cnn with your architecture:

model=PytorchModel(my_arch,classes)

or override an existing model’s architecture:

model.network = my_arch

Note: the InceptionV3 architecture must be used differently than other architectures - the easiest way is to simply use the InceptionV3 class in opensoundscape.torch.models.cnn.

Note 2: For resnet architectures, if num_channels != 3, averages the conv1 weights across all channels.
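A sketch of passing a wrapper-generated architecture to the CNN class documented above (class list is hypothetical):

```
from opensoundscape.torch.architectures.cnn_architectures import resnet18
from opensoundscape.torch.models.cnn import CNN

arch = resnet18(num_classes=10, use_pretrained=True)
model = CNN(arch, classes=[f"class_{i}" for i in range(10)], sample_duration=2.0)
```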

opensoundscape.torch.architectures.cnn_architectures.alexnet(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for AlexNet architecture

input size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.densenet121(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for densenet121 architecture

input size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.freeze_params(model)

remove gradients from (i.e., freeze) all model parameters

opensoundscape.torch.architectures.cnn_architectures.inception_v3(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for Inception v3 architecture

Input: 299x299

WARNING: expects (299,299) sized images and has auxiliary output. See InceptionV3 class in opensoundscape.torch.models.cnn for use.

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.modify_resnet(model, num_classes, num_channels)

modify input and output shape of a resnet architecture

If num_channels != 3, averages the conv1 weights across all channels.

opensoundscape.torch.architectures.cnn_architectures.resnet101(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for ResNet101 architecture

input_size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo. If num_channels != 3, averages the conv1 weights across all channels.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.resnet152(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for ResNet152 architecture

input_size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo. If num_channels != 3, averages the conv1 weights across all channels.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.resnet18(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for ResNet18 architecture

input_size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo. If num_channels != 3, averages the conv1 weights across all channels.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.resnet34(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for ResNet34 architecture

input_size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo. If num_channels != 3, averages the conv1 weights across all channels.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.resnet50(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for ResNet50 architecture

input_size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), entire network will have gradients and can train if True, feature block is frozen and only final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo. If num_channels != 3, averages the conv1 weights across all channels.
  • num_channels – specify channels in input sample, eg [channels h,w] sample shape
opensoundscape.torch.architectures.cnn_architectures.squeezenet1_0(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for squeezenet architecture

input size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), the entire network will have gradients and can train; if True, the feature block is frozen and only the final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo.
  • num_channels – number of channels in the input sample, e.g. for a sample of shape [channels, h, w]
opensoundscape.torch.architectures.cnn_architectures.vgg11_bn(num_classes, freeze_feature_extractor=False, use_pretrained=True, num_channels=3)

Wrapper for vgg11 architecture

input size = 224

Parameters:
  • num_classes – number of output nodes for the final layer
  • freeze_feature_extractor – if False (default), the entire network will have gradients and can train; if True, the feature block is frozen and only the final layer is trained
  • use_pretrained – if True, uses pre-trained ImageNet features from Pytorch’s model zoo.
  • num_channels – number of channels in the input sample, e.g. for a sample of shape [channels, h, w]

defines feature extractor and Architecture class for ResNet CNN

This implementation of the ResNet18 architecture allows for separate access to the feature extraction and classification blocks. This can be useful, for instance, to freeze the feature extractor and only train the classifier layer; or to specify different learning rates for the two blocks.

This implementation is used in the Resnet18Binary and Resnet18Multiclass classes of opensoundscape.torch.models.cnn.

class opensoundscape.torch.architectures.resnet.ResNetArchitecture(num_cls, weights_init='ImageNet', num_layers=18, init_classifier_weights=False)

ResNet architecture with 18 or 50 layers

This implementation enables separate access to feature and classification blocks.

Parameters:
  • num_cls – number of classes (int)
  • weights_init
    • “ImageNet”: load the pre-trained weights for ImageNet dataset
    • path: load weights from a path on your computer or a url
    • None: initialize with random weights
  • num_layers – 18 for Resnet18 or 50 for Resnet50
  • init_classifier_weights
    • if True, load the weights of the classification layer as well as the feature extraction layers
    • if False (default), only load the weights of the feature extraction layers

load(init_path, init_classifier_weights=True, verbose=False)

load the state dict (weights) of the feature extractor and classifier; optionally, load only the feature weights and not the classifier weights

Parameters:
  • init_path
    • url containing “http”: download weights from web
    • path: load weights from local path
  • init_classifier_weights
    • if True, load the weights of the classification layer as well as the feature extraction layers
    • if False (default), only load the weights of the feature extraction layers

  • verbose – if True, print missing/unused keys [default: False]
class opensoundscape.torch.architectures.resnet.ResNetFeature(block, layers, zero_init_residual=False, groups=1, width_per_group=64, replace_stride_with_dilation=None, norm_layer=None)
class opensoundscape.torch.architectures.utils.BaseArchitecture

Base architecture for reference.

class opensoundscape.torch.architectures.utils.CompositeArchitecture(*args, **kwargs)

Architecture with separate feature and classifier blocks

Data Selection

opensoundscape.data_selection.resample(df, n_samples_per_class, upsample=True, downsample=True, with_replace=False, random_state=None)

resample a one-hot encoded label df for a target n_samples_per_class

Parameters:
  • df – dataframe with one-hot encoded labels: columns are classes, index is sample name/path
  • n_samples_per_class – target number of samples per class
  • upsample – if True, duplicate samples for classes with <n samples to get to n samples
  • downsample – if True, randomly sample classes with >n samples to get to n samples
  • with_replace – flag to enable sampling of the same row more than once, default False
  • random_state – passed to np.random calls. If None, random state is not fixed.

Note: The algorithm assumes that the label df is single-label. If the label df is multi-label, some classes can end up over-represented.

Note 2: The resulting df will have samples ordered by class label, even if the input df had samples in a random order.
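Example: a minimal sketch of resampling a small one-hot label dataframe; the file names and class names are hypothetical:

```
import pandas as pd
from opensoundscape.data_selection import resample

# one-hot labels: index = sample paths, columns = classes
labels = pd.DataFrame(
    {"species_a": [1, 1, 1, 0, 0], "species_b": [0, 0, 0, 1, 1]},
    index=["a1.wav", "a2.wav", "a3.wav", "b1.wav", "b2.wav"],
)

# upsample/downsample so each class has exactly 2 samples
balanced = resample(labels, n_samples_per_class=2, random_state=0)
```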

opensoundscape.data_selection.upsample(input_df, label_column='Labels', with_replace=False, random_state=None)

Given a input DataFrame of categorical labels, upsample to maximum value

Upsampling removes the class imbalance in your dataset. Rows for each label are repeated up to max_count // rows times; then rows are randomly sampled to fill up to max_count.

The input df is NOT one-hot encoded in this case; instead, it contains categorical labels in a specified label_column.

Parameters:
  • input_df – A DataFrame to upsample
  • label_column – The column to draw unique labels from
  • with_replace – flag to enable sampling of the same row more than once, default False
  • random_state – Set the random_state during sampling
Returns:

An upsampled DataFrame

Return type:

df

Datasets

Preprocessors: pd.Series child with an action sequence & forward method

class opensoundscape.torch.datasets.AudioFileDataset(samples, preprocessor, return_labels=True, bypass_augmentations=False)

Base class for audio datasets with OpenSoundscape (use in place of torch Dataset)

Custom Dataset classes should subclass this class or its children.

Datasets in OpenSoundscape contain a Preprocessor object which is responsible for the procedure of generating a sample for a given input. The DataLoader handles a dataframe of samples (and potentially labels) and uses a Preprocessor to generate samples from them.

Parameters:
  • samples – the files to generate predictions for. Can be:
    • a dataframe with index containing audio paths, OR
    • a list or np.ndarray of audio file paths

    Notes for input dataframe:
    • df must have audio paths in the index.
    • If label_df has labels, the class names should be the columns, and the values of each row should be 0 or 1.
    • If data does not have labels, label_df will have no columns.
  • preprocessor – an object of BasePreprocessor or its children which defines the operations to perform on input samples
  • return_labels – if True, the __getitem__ method will return {X:sample,y:labels} If False, the __getitem__ method will return {X:sample} If label_df has no labels (no columns), use return_labels=False [default: True]
Raises:

PreprocessingError if exception is raised during __getitem__

Effects:
self.unsafe_samples will contain a list of paths that did not successfully
produce a list of clips with start/end times, if split_files_into_clips=True
class_counts()

count number of each label

head(n=5)

out-of-place copy of first n samples

performs df.head(n) on self.label_df

Parameters:
  • n – number of first samples to return, see pandas.DataFrame.head() [default: 5]
Returns:

a new dataset object

sample(**kwargs)

out-of-place random sample

creates copy of object with n rows randomly sampled from dataframe

Args: see pandas.DataFrame.sample()

Returns:a new dataset object
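Example: a minimal sketch of building a dataset from a label dataframe; the audio paths are hypothetical:

```
import pandas as pd
from opensoundscape.preprocess.preprocessors import SpectrogramPreprocessor
from opensoundscape.torch.datasets import AudioFileDataset

# one-hot label df: index = audio paths, columns = classes
labels = pd.DataFrame({"species_a": [1, 0]}, index=["rec1.wav", "rec2.wav"])

preprocessor = SpectrogramPreprocessor(sample_duration=2.0)
dataset = AudioFileDataset(labels, preprocessor, return_labels=True)

print(dataset.class_counts())  # counts of each label
subset = dataset.head(1)       # out-of-place copy with the first sample
```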
class opensoundscape.torch.datasets.AudioSplittingDataset(samples, preprocessor, overlap_fraction=0, final_clip=None)

class to load clips of longer files rather than one sample per file

Currently does not support returning labels.

Parameters: see AudioFileDataset and make_clip_df

Grad Cam

GradCAM is a method of visualizing the activation of the network on parts of an image

# Author: Kazuto Nakashima # URL: http://kazuto1011.github.io # Created: 2017-05-26

Loss Functions

loss function classes to use with opensoundscape models

class opensoundscape.torch.loss.BCEWithLogitsLoss_hot

use pytorch’s nn.BCEWithLogitsLoss for one-hot labels by simply converting y from long to float

class opensoundscape.torch.loss.CrossEntropyLoss_hot

use pytorch’s nn.CrossEntropyLoss for one-hot labels by converting labels from 1-hot to integer labels

throws a ValueError if labels are not one-hot

class opensoundscape.torch.loss.ResampleLoss(class_freq, reduction='mean', loss_weight=1.0)
opensoundscape.torch.loss.reduce_loss(loss, reduction)

Reduce loss as specified.

Parameters:
  • loss (Tensor) – Elementwise loss tensor.
  • reduction (str) – Options are “none”, “mean” and “sum”.
Returns:

Reduced loss tensor.

Return type:

Tensor

opensoundscape.torch.loss.weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None)

Apply element-wise weight and reduce loss.

Parameters:
  • loss (Tensor) – Element-wise loss.
  • weight (Tensor) – Element-wise weights.
  • reduction (str) – Same as built-in losses of PyTorch.
  • avg_factor (float) – Average factor when computing the mean of losses.
Returns:

Processed loss values.

Return type:

Tensor

Safe Dataloading

Dataset wrapper to handle errors gracefully in Preprocessor classes

A SafeDataset handles errors in a potentially misleading way: If an error is raised while trying to load a sample, the SafeDataset will instead load a different sample. The indices of any samples that failed to load will be stored in ._unsafe_indices.

This behavior may be desirable for training a model, but could cause silent errors when predicting with a model (replacing a bad file with a different file), so you should always check ._unsafe_indices after using a SafeDataset.

based on an implementation by @msamogh in nonechucks (github.com/msamogh/nonechucks/)

class opensoundscape.torch.safe_dataset.SafeDataset(dataset, unsafe_behavior, eager_eval=False)

A wrapper for a Dataset that handles errors when loading samples

WARNING: When iterating, will skip the failed sample, but when using within a DataLoader, finds the next good sample and uses it for the current index (see __getitem__).

Note that this class does not subclass Dataset. Instead, it contains a .dataset attribute that is a Dataset (or a Preprocessor, which subclasses Dataset).

Parameters:
  • dataset – a torch Dataset instance or child such as a Preprocessor
  • eager_eval – If True, checks if every file is able to be loaded during initialization (logs _safe_indices and _unsafe_indices)

Attributes: _safe_indices and _unsafe_indices can be accessed later to check which samples threw errors.

_build_index()

tries to load each sample, logs _safe_indices and _unsafe_indices

__getitem__(index)

If loading an index fails, keeps trying the next index until success

_safe_get_item()

Tries to load a sample, returns None if error occurs

is_index_built

Returns True if all indices of the original dataset have been classified into _safe_indices or _unsafe_indices.
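Example: a minimal sketch of wrapping an existing dataset (e.g., an AudioFileDataset); the unsafe_behavior value shown is an assumption, so check the current API for the valid options:

```
from opensoundscape.torch.safe_dataset import SafeDataset

# "substitute" (replace a failed sample with another sample) is an assumed option name
safe_dataset = SafeDataset(dataset, unsafe_behavior="substitute")

# after iterating or training, check which samples failed to load
print(safe_dataset._unsafe_indices)
```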

Sampling

classes for strategically sampling within a DataLoader

class opensoundscape.torch.sampling.ClassAwareSampler(labels, num_samples_cls=1)

In each batch of samples, pick a limited number of classes to include and give even representation to each class

class opensoundscape.torch.sampling.ImbalancedDatasetSampler(dataset, indices=None, num_samples=None, callback_get_label=None)

Samples elements randomly from a given list of indices for an imbalanced dataset

Parameters:
  • indices (list, optional) – a list of indices
  • num_samples (int, optional) – number of samples to draw
  • callback_get_label (func) – a callback-like function which takes two arguments: dataset and index

Performance Metrics

opensoundscape.metrics.binary_predictions(scores, single_target=False, threshold=0.5)

convert numeric scores to binary predictions

return 0/1 for an array of scores: samples (rows) x classes (columns)

Parameters:
  • scores – a 2-d list or np.array. row=sample, columns=classes
  • single_target – if True, predict 1 for highest scoring class per sample, 0 for other classes. If False, predict 1 for all scores > threshold [default: False]
  • threshold – Predict 1 for score > threshold. Only used if single_target = False. [default: 0.5]
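Example: a minimal sketch converting a small, hypothetical score array to 0/1 predictions:

```
import numpy as np
from opensoundscape.metrics import binary_predictions

scores = np.array([[0.9, 0.2], [0.4, 0.6]])  # rows = samples, columns = classes

# multi-target: predict 1 wherever score > 0.5
preds = binary_predictions(scores, single_target=False, threshold=0.5)

# single-target: predict 1 only for the highest-scoring class per sample
st_preds = binary_predictions(scores, single_target=True)
```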
opensoundscape.metrics.multi_target_metrics(targets, scores, class_names, threshold)

generate various metrics for a set of scores and labels (targets)

Parameters:
  • targets – 0/1 labels in 2d array
  • scores – continuous values in 2d array
  • class_names – list of strings
  • threshold – scores >= threshold result in prediction of 1, while scores < threshold result in prediction of 0
Returns:

dictionary of various overall and per-class metrics

Return type:

metrics_dict

opensoundscape.metrics.single_target_metrics(targets, scores, class_names)

generate various metrics for a set of scores and labels (targets)

Predicts 1 for the highest scoring class per sample and 0 for all other classes.

Parameters:
  • targets – 0/1 labels in 2d array
  • scores – continuous values in 2d array
  • class_names – list of strings
Returns:

dictionary of various overall and per-class metrics

Return type:

metrics_dict
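Example: a minimal sketch computing metrics with the two helpers above, using hypothetical targets and scores:

```
import numpy as np
from opensoundscape.metrics import multi_target_metrics, single_target_metrics

targets = np.array([[1, 0], [0, 1]])         # 0/1 labels, rows = samples
scores = np.array([[0.8, 0.1], [0.3, 0.7]])  # continuous scores

mt = multi_target_metrics(targets, scores, class_names=["a", "b"], threshold=0.5)
st = single_target_metrics(targets, scores, class_names=["a", "b"])
```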

Preprocessing

Image Augmentation

Transforms and augmentations for PIL.Images

opensoundscape.preprocess.img_augment.time_split(img, seed=None)

Given a PIL.Image, split into left/right parts and swap

Randomly chooses the slicing location. For example, if ‘h’ is chosen:

abcdefghijklmnop
^

hijklmnop + abcdefg

Parameters:img – A PIL.Image
Returns:A PIL.Image

Preprocessing Actions

Actions for augmentation and preprocessing pipelines

This module contains Action classes which act as the elements in Preprocessor pipelines. Action classes have go(), on(), off(), and set() methods. They take a single sample of a specific type and return the transformed or augmented sample, which may or may not be the same type as the original.

See the preprocessor module and Preprocessing tutorial for details on how to use and create your own actions.

class opensoundscape.preprocess.actions.Action(fn, is_augmentation=False, extra_args=[], **kwargs)

Action class for an arbitrary function

The function must take the sample as the first argument

Note that this allows two use cases:
  1. a regular function that takes an input object as the first argument, eg. Audio.from_file(path, **kwargs)
  2. a method of a class, which takes ‘self’ as the first argument, eg. Spectrogram.bandpass(self, **kwargs)

Other arguments are an arbitrary list of kwargs.

class opensoundscape.preprocess.actions.AudioClipLoader(**kwargs)

Action to load clips from an audio file

Loads an audio file or part of a file to an Audio object. Will load entire audio file if _start_time and _end_time are None. see Audio.from_file() for documentation.

Parameters: see Audio.from_file()
class opensoundscape.preprocess.actions.AudioTrim(**kwargs)

Action to trim/extend audio to desired length

Parameters: see actions.trim_audio
class opensoundscape.preprocess.actions.BaseAction

Parent class for all Actions (used in Preprocessor pipelines)

New actions should subclass this class.

Subclasses should set self.requires_labels = True if go() expects (X,y) instead of (X). y is a row of a dataframe (a pd.Series) with index (.name) = original file path, columns=class names, values=labels (0,1). X is the sample, and can be of various types (path, Audio, Spectrogram, Tensor, etc). See Overlay for an example of an Action that uses labels.

set(**kwargs)

only allow keys that exist in self.params

class opensoundscape.preprocess.actions.Overlay(is_augmentation=True, **kwargs)

Action class for an augmentation that overlays samples on each other

Required Args:

overlay_df: dataframe of audio files (index) and labels to use for overlay
update_labels (bool): if True, labels of the sample are updated to include labels of the overlayed sample

See overlay() for other arguments and default values.

opensoundscape.preprocess.actions.frequency_mask(tensor, max_masks=3, max_width=0.2)

add random horizontal bars over Tensor

Parameters:
  • tensor – input Torch.tensor sample
  • max_masks – max number of horizontal bars [default: 3]
  • max_width – maximum size of horizontal bars as fraction of sample height
Returns:

augmented tensor

opensoundscape.preprocess.actions.image_to_tensor(img, greyscale=False)

Convert PIL image to RGB or greyscale Tensor (PIL.Image -> Tensor)

convert PIL.Image w/range [0,255] to torch Tensor w/range [0,1]

Parameters:
  • img – PIL.Image
  • greyscale – if False, converts image to RGB (3 channels). If True, converts image to one channel.
opensoundscape.preprocess.actions.overlay(x, _labels, _preprocessor, overlay_df, update_labels, overlay_class=None, overlay_prob=1, max_overlay_num=1, overlay_weight=0.5)

iteratively overlay 2d samples on top of each other

Overlays (blends) image-like samples from overlay_df on top of the sample with probability overlay_prob until stopping condition. If necessary, trims overlay audio to the length of the input audio.

Overlays can be used in a few general ways:
  1. a separate df where any file can be overlayed (overlay_class=None)
  2. same df as training, where the overlay class is “different” ie,
    does not contain overlapping labels with the original sample
  3. same df as training, where samples from a specific class are used
    for overlays
Parameters:
  • overlay_df – a labels dataframe with audio files as the index and classes as columns
  • _labels – labels of the original sample
  • _preprocessor – the preprocessing pipeline
  • update_labels – if True, add overlayed sample’s labels to original sample
  • overlay_class – how to choose files from overlay_df to overlay. Options [default: “different”]:
    • None – randomly select any file from overlay_df
    • “different” – select a random file from overlay_df containing none of the classes this file contains
    • specific class name – always choose files from this class
  • overlay_prob – the probability of applying each subsequent overlay
  • max_overlay_num – the maximum number of samples to overlay on the original. For example, if overlay_prob = 0.5 and max_overlay_num = 2, 1/2 of samples will receive 1 overlay and 1/4 will receive an additional second overlay.
  • overlay_weight – a float > 0 and < 1, or a list of 2 floats [min, max] between which the weight will be randomly chosen. e.g. [0.1,0.7] An overlay_weight <0.5 means more emphasis on original sample.
Returns:

overlayed sample, (possibly updated) labels

opensoundscape.preprocess.actions.scale_tensor(tensor, input_mean=0.5, input_std=0.5)

linear scaling of tensor values using torch.transforms.Normalize

(Tensor->Tensor)

WARNING: This does not perform per-image normalization. Instead, it takes fixed mean and standard deviation values as arguments (i.e., computed over the entire dataset) and performs X = (X - input_mean) / input_std.

Parameters:
  • input_mean – mean of input sample pixels (average across dataset)
  • input_std – standard deviation of input sample pixels (average across dataset)
    (these are NOT the target mu and sd, but the original mu and sd of the image, for which the output will have mu=0, std=1)
Returns:

modified tensor

opensoundscape.preprocess.actions.tensor_add_noise(tensor, std=1)

Add gaussian noise to sample (Tensor -> Tensor)

Parameters:std – standard deviation for Gaussian noise [default: 1]

Note: be aware that scaling before/after this action will change the effect of a fixed stdev Gaussian noise

opensoundscape.preprocess.actions.time_mask(tensor, max_masks=3, max_width=0.2)

add random vertical bars over sample (Tensor -> Tensor)

Parameters:
  • tensor – input Torch.tensor sample
  • max_masks – maximum number of vertical bars [default: 3]
  • max_width – maximum size of bars as fraction of sample width
Returns:

augmented tensor
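Example: a minimal sketch applying frequency_mask and time_mask to a hypothetical spectrogram tensor:

```
import torch
from opensoundscape.preprocess.actions import frequency_mask, time_mask

sample = torch.rand(3, 224, 224)  # hypothetical 3-channel spectrogram tensor

masked = frequency_mask(sample, max_masks=2, max_width=0.1)  # horizontal bars
masked = time_mask(masked, max_masks=2, max_width=0.1)       # vertical bars
```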

opensoundscape.preprocess.actions.torch_color_jitter(tensor, brightness=0.3, contrast=0.3, saturation=0.3, hue=0)

Wraps torchvision.transforms.ColorJitter

(Tensor -> Tensor) or (PIL Img -> PIL Img)

Parameters:
  • tensor – input sample
  • brightness=0.3
  • contrast=0.3
  • saturation=0.3
  • hue=0
Returns:

modified tensor

opensoundscape.preprocess.actions.torch_random_affine(tensor, degrees=0, translate=(0.3, 0.1), fill=0)

Wrapper for torchvision.transforms.RandomAffine

(Tensor -> Tensor) or (PIL Img -> PIL Img)

Parameters:
  • tensor – torch.Tensor input sample
  • degrees – [default: 0]
  • translate – [default: (0.3, 0.1)]
  • fill – 0-255, duplicated across channels [default: 0]
Returns:

modified tensor

Note: If applying per-image normalization, we recommend applying RandomAffine after image normalization. In this case, an intermediate gray value is ~0. If normalization is applied after RandomAffine on a PIL image, use an intermediate fill color such as (122,122,122).

opensoundscape.preprocess.actions.trim_audio(audio, _sample_duration, extend=True, random_trim=False, tol=1e-05)

trim audio clips (Audio -> Audio)

Trims an audio file to the desired length. Allows audio to be trimmed from the start or from a random time. Optionally extends audio shorter than clip_length with silence.

Parameters:
  • audio – Audio object
  • _sample_duration – desired final length (sec) - if None, no trim is performed
  • extend – if True, clips shorter than _sample_duration are extended with silence to required length
  • random_trim – if True, chooses a random segment of length _sample_duration from the input audio. If False, the file is trimmed from 0 seconds to _sample_duration seconds.
  • tol – tolerance for considering a clip to be of the correct length (sec)
Returns:

trimmed audio

Preprocessors

class opensoundscape.preprocess.preprocessors.BasePreprocessor(sample_duration=None)

Class for defining an ordered set of Actions and a way to run them

Custom Preprocessor classes should subclass this class or its children

Preprocessors have one job: to transform samples from some input (eg a file path) to some output (eg a torch.Tensor) using a specific procedure. The procedure consists of Actions ordered by the Preprocessor’s index. Preprocessors have a forward() method which runs the set of Actions specified in the index.

Parameters:
  • action_dict – dictionary of name:Action actions to perform sequentially
  • sample_duration – length of audio samples to generate (seconds)
forward(sample, break_on_type=None, break_on_key=None, clip_times=None, bypass_augmentations=False)

perform actions in self.pipeline on a sample (until a break point)

Actions with .bypass = True are skipped. Actions with .is_augmentation = True can be skipped by passing bypass_augmentations=True.

Parameters:
  • sample – either: - pd.Series with file path as index (.name) and labels - OR a file path as pathlib.Path or string
  • break_on_type – if not None, the pipeline will be stopped when it reaches an Action of this class. The matching action is not performed.
  • break_on_key – if not None, the pipeline will be stopped when it reaches an Action whose index equals this value. The matching action is not performed.
  • bypass_augmentations – if True, actions with .is_augmentation=True are skipped
Returns:

{‘X’: preprocessed sample, ‘y’: labels} if return_labels==True, otherwise {‘X’: preprocessed sample}

insert_action(action_index, action, after_key=None, before_key=None)

insert an action at a specific position

This is an in-place operation

Inserts a new action before or after a specific key. If after_key and before_key are both None, action is appended to the end of the index.

Parameters:
  • action_index – string key for new action in index
  • action – the action object, must be subclass of BaseAction
  • after_key – insert the action immediately after this key in index
  • before_key – insert the action immediately before this key in index. Note: only one of (after_key, before_key) can be specified
remove_action(action_index)

alias for self.drop(…,inplace=True), removes an action

This is an in-place operation

Parameters:action_index – index of action to remove
class opensoundscape.preprocess.preprocessors.SpectrogramPreprocessor(sample_duration, overlay_df=None, out_shape=[224, 224, 3])

Child of BasePreprocessor that creates spectrogram Tensors w/augmentation

loads audio, creates spectrogram, performs augmentations, returns tensor

By default, does not resample audio, but bandpasses to 0-11.025 kHz (to ensure all outputs have the same y-axis scale); this can be changed with .load_audio.set(sample_rate=sr).

during prediction, will load clips from long audio files rather than entire audio files.

Parameters:
  • sample_duration – length in seconds of audio samples generated. If not None, longer clips are trimmed to this length. By default, shorter clips will be extended (modify random_trim_audio and trim_audio to change behavior).
  • overlay_df – if not None, will include an overlay action drawing samples from this df
  • out_shape – output shape of tensor h,w,channels [default: [224,224,3]]
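Example: a minimal sketch of preprocessing one file with a SpectrogramPreprocessor; the file path is hypothetical:

```
from opensoundscape.preprocess.preprocessors import SpectrogramPreprocessor

preprocessor = SpectrogramPreprocessor(sample_duration=2.0)

# run the pipeline on a single file, skipping augmentation actions;
# see forward() above for the structure of the returned sample
sample = preprocessor.forward("my_audio.wav", bypass_augmentations=True)
```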
exception opensoundscape.preprocess.utils.PreprocessingError

Custom exception indicating that a Preprocessor pipeline failed

opensoundscape.preprocess.utils.get_args(func)

get list of arguments and default values from a function

opensoundscape.preprocess.utils.get_reqd_args(func)

get list of required arguments and default values from a function

opensoundscape.preprocess.utils.show_tensor(tensor, channel=None, transform_from_zero_centered=True, invert=True)

helper function for displaying a sample as an image

Parameters:
  • tensor – torch.Tensor of shape [c,w,h] with values centered around zero
  • channel – specify an integer to plot only one channel, otherwise will attempt to plot all channels
  • transform_from_zero_centered – if True, transforms values from [-1,1] to [0,1]
  • invert – if true, flips value range via x=1-x
opensoundscape.preprocess.utils.show_tensor_grid(tensors, columns, channel=None, transform_from_zero_centered=True, invert=True, labels=None)

create image of nxn tensors

Parameters:
  • tensors – list of samples
  • columns – number of columns in grid
  • labels – title of each subplot
  • for other args, see show_tensor()

Tensor Augmentation

Augmentations and transforms for torch.Tensors

These functions were implemented for PyTorch in: https://github.com/zcaceres/spec_augment The original paper is available on https://arxiv.org/abs/1904.08779

opensoundscape.preprocess.tensor_augment.freq_mask(spec, F=30, max_masks=3, replace_with_zero=False)

draws horizontal bars over the image

F:maximum frequency-width of bars in pixels

max_masks: maximum number of bars to draw

replace_with_zero: if True, bars are 0s, otherwise, mean img value

opensoundscape.preprocess.tensor_augment.time_mask(spec, T=40, max_masks=3, replace_with_zero=False)

draws vertical bars over the image

T:maximum time-width of bars in pixels

max_masks: maximum number of bars to draw

replace_with_zero: if True, bars are 0s, otherwise, mean img value

opensoundscape.preprocess.tensor_augment.time_warp(spec, W=5)

apply time stretch and shearing to spectrogram

fills empty space on right side with horizontal bars

W controls amount of warping. Random with occasional large warp.
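Example: a minimal sketch of SpecAugment-style masking with the functions above; the tensor layout [batch, frequency, time] is an assumption, so adjust to your own sample shape:

```
import torch
from opensoundscape.preprocess import tensor_augment

spec = torch.rand(1, 224, 224)  # hypothetical [batch, frequency, time] tensor

spec = tensor_augment.freq_mask(spec, F=20, max_masks=2)  # horizontal bars
spec = tensor_augment.time_mask(spec, T=20, max_masks=2)  # vertical bars
```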

Signal Processing

RIBBIT

Detect periodic vocalizations with RIBBIT

This module provides functionality to search audio for periodically fluctuating vocalizations.

opensoundscape.ribbit.calculate_pulse_score(amplitude, amplitude_sample_rate, pulse_rate_range, plot=False, nfft=1024)

Search for amplitude pulsing in an audio signal in a range of pulse repetition rates (PRR)

scores an audio amplitude signal by highest value of power spectral density in the PRR range

Parameters:
  • amplitude – a time series of the audio signal’s amplitude (for instance a smoothed raw audio signal)
  • amplitude_sample_rate – sample rate in Hz of amplitude signal, normally ~20-200 Hz
  • pulse_rate_range – [min, max] values for amplitude modulation in Hz
  • plot=False – if True, creates a plot visualizing the power spectral density
  • nfft=1024 – controls the resolution of the power spectral density (see scipy.signal.welch)
Returns:

pulse rate score for this audio segment (float)

opensoundscape.ribbit.ribbit(spectrogram, signal_band, pulse_rate_range, clip_duration, clip_overlap=0, final_clip=None, noise_bands=None, plot=False)

Run RIBBIT detector to search for periodic calls in audio

This tool searches for periodic energy fluctuations at specific repetition rates and frequencies.

Parameters:
  • spectrogram – opensoundscape.Spectrogram object of an audio file
  • signal_band – [min, max] frequency range of the target species, in Hz
  • pulse_rate_range – [min,max] pulses per second for the target species
  • clip_duration – the length of audio (in seconds) to analyze at one time - each clip is analyzed independently and receives a ribbit score
  • clip_overlap (float) – overlap between consecutive clips (sec)
  • final_clip (str) – behavior if final clip is less than clip_duration seconds long. By default, discards remaining audio if less than clip_duration seconds long [default: None]. Options:
    • None: Discard the remainder (do not make a clip)
    • “remainder”: Use only remainder of Audio (final clip will be shorter than clip_duration)
    • “full”: Increase overlap with previous clip to yield a clip with clip_duration length
    Note that the “extend” option is not supported for RIBBIT.
  • noise_bands – list of frequency ranges to subtract from the signal_band For instance: [ [min1,max1] , [min2,max2] ] - if None, no noise bands are used - default: None
  • plot=False – if True, plot the power spectral density for each clip
Returns:

DataFrame of index=(‘start_time’,’end_time’), columns=[‘score’], with a row for each clip.

Notes

__PARAMETERS__ RIBBIT requires the user to select a set of parameters that describe the target vocalization. Here is some detailed advice on how to use these parameters.

Signal Band: The signal band is the frequency range where RIBBIT looks for the target species. It is best to pick a narrow signal band if possible, so that the model focuses on a specific part of the spectrogram and has less potential to include erroneous sounds.

Noise Bands: Optionally, users can specify other frequency ranges called noise bands. Sounds in the noise_bands are _subtracted_ from the signal_band. Noise bands help the model filter out erroneous sounds from the recordings, which could include confusion species, background noise, and popping/clicking of the microphone due to rain, wind, or digital errors. It’s usually good to include one noise band for very low frequencies – this specifically eliminates popping and clicking from being registered as a vocalization. It’s also good to specify noise bands that target confusion species. Another approach is to specify two narrow noise_bands that are directly above and below the signal_band.

Pulse Rate Range: This parameter specifies the minimum and maximum pulse rate (the number of pulses per second, also known as pulse repetition rate) that RIBBIT should look for to find the focal species. For example, choosing pulse_rate_range = [10, 20] means that RIBBIT should look for pulses no slower than 10 pulses per second and no faster than 20 pulses per second.

Clip Duration: The clip_duration parameter tells RIBBIT how many seconds of audio to analyze at one time. Generally, you should choose a clip_duration that is similar to the length of the target species vocalization, or a little bit longer. For very slowly pulsing vocalizations, choose a longer window so that at least 5 pulses can occur in one window (0.5 pulses per second -> 10 second window). Typical values for clip_duration are 0.3 to 10 seconds. Also, clip_overlap can be used for overlap between sequential clips. This is more computationally expensive but will be more likely to center a target sound in the clip (with zero overlap, the target sound may be split up between adjacent clips).

Plot: We can choose to show the power spectrum of pulse repetition rate for each window by setting plot=True. The default is not to show these plots (plot=False).

__ALGORITHM__ This is the procedure RIBBIT follows:

divide the audio into segments of length clip_duration

for each clip:
  • calculate a time series of energy in the signal band (signal_band) and subtract the noise band energies (noise_bands)
  • calculate the power spectral density of the amplitude time series
  • score the file based on the maximum value of power spectral density in the pulse rate range
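Example: a minimal sketch of running RIBBIT on one file; the file path, frequency band, and pulse rates are hypothetical values for an imagined target species:

```
from opensoundscape.audio import Audio
from opensoundscape.spectrogram import Spectrogram
from opensoundscape.ribbit import ribbit

spec = Spectrogram.from_audio(Audio.from_file("frog_pond.wav"))

scores_df = ribbit(
    spec,
    signal_band=[1500, 2500],   # frequency band of the target call (Hz)
    pulse_rate_range=[10, 20],  # pulses per second
    clip_duration=2.0,          # score each 2-second clip independently
    noise_bands=[[0, 200]],     # subtract low-frequency noise
)
# scores_df: one 'score' row per clip, indexed by (start_time, end_time)
```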

Signal Processing

Signal processing tools for feature extraction and more

opensoundscape.signal.cwt_peaks(audio, center_frequency, wavelet='morl', peak_threshold=0.2, peak_separation=None, plot=False)

compute a cwt, post-process, then extract peaks

Performs a continuous wavelet transform (cwt) on an audio signal at a single frequency. It then squares, smooths, and normalizes the signal. Finally, it detects peaks in the resulting signal and returns the times and magnitudes of detected peaks. It is used as a feature extractor for Ruffed Grouse drumming detection.

Parameters:
  • audio – an Audio object
  • center_frequency – the target frequency to extract peaks from
  • wavelet – (str) name of a pywt wavelet, eg ‘morl’ (see pywt docs)
  • peak_threshold – minimum height of peaks - if None, no minimum peak height - see “height” argument to scipy.signal.find_peaks
  • peak_separation – minimum time between detected peaks, in seconds - if None, no minimum distance - see “distance” argument to scipy.signal.find_peaks
Returns:

peak_times: list of times (from beginning of signal) of each peak
peak_levels: list of magnitudes of each detected peak

Note

consider downsampling audio to reduce computational cost. Audio must have sample rate of at least 2x target frequency.
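Example: a minimal sketch of extracting peaks from a (hypothetical) recording, downsampled as suggested in the note above:

```
from opensoundscape.audio import Audio
from opensoundscape.signal import cwt_peaks

audio = Audio.from_file("drumming.wav").resample(400)  # downsample for efficiency

peak_times, peak_levels = cwt_peaks(
    audio, center_frequency=50, peak_threshold=0.2, peak_separation=0.0375
)
```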

opensoundscape.signal.detect_peak_sequence_cwt(audio, sr=400, window_len=60, center_frequency=50, wavelet='morl', peak_threshold=0.2, peak_separation=0.0375, dt_range=[0.05, 0.8], dy_range=[-0.2, 0], d2y_range=[-0.05, 0.15], max_skip=3, duration_range=[1, 15], points_range=[9, 100], plot=False)

Use a continuous wavelet transform to detect accelerating sequences

This function creates a continuous wavelet transform (cwt) feature and searches for accelerating sequences of peaks in the feature. It was developed to detect Ruffed Grouse drumming events in audio signals. Default parameters are tuned for Ruffed Grouse drumming detection.

Analysis is performed on analysis windows of fixed length without overlap. Detections from each analysis window across the audio file are aggregated.

Parameters:
  • audio – Audio object
  • sr=400 – resample audio to this sample rate (Hz)
  • window_len=60 – length of analysis window (sec)
  • center_frequency=50 – target audio frequency of cwt
  • wavelet='morl' – (str) pywt wavelet name (see pywavelets docs)
  • peak_threshold=0.2 – height threshold (0-1) for peaks in normalized signal
  • peak_separation=15/400 – min separation (sec) for peak finding
  • dt_range=[0.05, 0.8] – sequence detection point-to-point criterion 1 - Note: the upper limit is also used as sequence termination criterion 2
  • dy_range=[-0.2, 0] – sequence detection point-to-point criterion 2
  • d2y_range=[-0.05, 0.15] – sequence detection point-to-point criterion 3
  • max_skip=3 – sequence termination criterion 1: max sequential invalid points
  • duration_range=[1, 15] – sequence criterion 1: length (sec) of sequence
  • points_range=[9, 100] – sequence criterion 2: num points in sequence
  • plot=False – if True, plot peaks and detected sequences with pyplot
Returns:

dataframe summarizing detected sequences

Note: for Ruffed Grouse drumming, which is very low pitched, audio is resampled to 400 Hz. This greatly increases the efficiency of the cwt, but will only detect frequencies up to 400/2=200Hz. Generally, choose a resample frequency as low as possible but >=2x the target frequency

Note: the cwt signal is normalized on each analysis window, so changing the analysis window size can change the detection results.

Note: if there is an incomplete window remaining at the end of the audio file, it is discarded (not analyzed).

opensoundscape.signal.find_accel_sequences(t, dt_range=[0.05, 0.8], dy_range=[-0.2, 0], d2y_range=[-0.05, 0.15], max_skip=3, duration_range=[1, 15], points_range=[5, 100])

detect accelerating/decelerating sequences in time series

developed for detecting Ruffed Grouse drumming events in a series of peaks extracted from a cwt signal

The algorithm computes the forward difference of t, y(t). It iterates through the [y(t), t] points searching for sequences of points that meet a set of conditions. It begins with an empty candidate sequence.

“Point-to-point criteria”: Valid ranges for dt, dy, and d2y are checked for each subsequent point and are based on previous points in the candidate sequence. If they are met, the point is added to the candidate sequence.

“Continuation criteria”: Conditions for max_skip and the upper bound of dt are used to determine when a sequence should be terminated.

  • max_skip: max number of sequential invalid points before terminating
  • dt<=dt_range[1]: if dt is long, sequence should be broken

“Sequence criteria”: When a sequence is terminated, it is evaluated on conditions for duration_range and points_range. If it meets these conditions, it is saved as a detected sequence.

  • duration_range: length of sequence in seconds from first to last point
  • points_range: number of points included in sequence

When a sequence is terminated, the search continues with the next point and an empty sequence.

Parameters:
  • t – (list or np.array) times of all detected peaks (seconds)
  • dt_range=[0.05,0.8] – valid values for t(i) - t(i-1)
  • dy_range=[-0.2,0] – valid values for change in y (grouse: difference in time between consecutive beats should decrease)
  • d2y_range=[-0.05, 0.15] – limit change in dy: should not show large decrease (sharp curve downward on y vs t plot)
  • max_skip=3 – max invalid points between valid points for a sequence (grouse: should not have many noisy points between beats)
  • duration_range=[1,15] – total duration of sequence (sec)
  • points_range=[9,100] – total number of points in sequence
Returns:

lists of t and y for each detected sequence

Return type:

sequences_t, sequences_y

opensoundscape.signal.frequency2scale(frequency, wavelet, sr)

determine appropriate wavelet scale for desired center frequency

Parameters:
  • frequency – desired center frequency of wavelet in Hz (1/seconds)
  • wavelet – (str) name of pywt wavelet, eg ‘morl’ for Morlet
  • sr – sample rate in Hz (1/seconds)
Returns:

(float) scale parameter for pywt.cwt() to extract desired frequency

Return type:

scale

Note: this function is not exactly an inverse of pywt.scale2frequency(), because that function returns frequency in sample-units (cycles/sample) rather than frequency in Hz (cycles/second). In other words, frequency_hz = pywt.scale2frequency(w, scale) * sr.

opensoundscape.signal.thresholded_event_durations(x, threshold, normalize=False, sr=1)

Detect positions and durations of events over threshold in 1D signal

This function takes a 1D numeric vector and searches for segments that are continuously greater than a threshold value. The input signal can optionally be normalized, and if a sample rate is provided the start positions will be in the units of [sr]^-1 (ie if sr is Hz, start positions will be in seconds).

Parameters:
  • x – 1d input signal, a vector of numeric values
  • threshold – minimum value of signal to be a detection
  • normalize – if True, performs x=x/max(x)
  • sr – sample rate of input signal
Returns:

start_times: start time of each detected event
durations: duration (# samples/sr) of each detected event

Misc tools

Helpers

opensoundscape.helpers.binarize(x, threshold)

return a list of 0, 1 by thresholding vector x

opensoundscape.helpers.bound(x, bounds)

restrict x to a range of bounds = [min, max]

opensoundscape.helpers.file_name(path)

get file name without extension from a path

opensoundscape.helpers.generate_clip_times_df(full_duration, clip_duration, clip_overlap=0, final_clip=None)

generate start and end times for even-lengthed clips

The behavior for incomplete final clips at the end of the full_duration depends on the final_clip parameter.

This function only creates a dataframe with start and end times, it does not perform any actual trimming of audio or other objects.

Parameters:
  • full_duration – The amount of time (seconds) to split into clips
  • clip_duration (float) – The duration in seconds of the clips
  • clip_overlap (float) – The overlap of the clips in seconds [default: 0]
  • final_clip (str) –

    Behavior if final_clip is less than clip_duration seconds long. By default, discards remaining time if less than clip_duration seconds long [default: None]. Options:

    • None: Discard the remainder (do not make a clip)
    • ”extend”: Extend the final clip beyond full_duration to reach clip_duration length
    • ”remainder”: Use only remainder of full_duration (final clip will be shorter than clip_duration)
    • ”full”: Increase overlap with previous clip to yield a clip with clip_duration length.
      Note: returns entire original audio if it is shorter than clip_duration
Returns:

DataFrame with columns for ‘start_time’ and ‘end_time’ of each clip

Return type:

clip_df
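Example: a minimal sketch splitting 10 seconds into overlapping clips:

```
from opensoundscape.helpers import generate_clip_times_df

# 3-second clips with 1 second of overlap; keep the short final clip
clip_df = generate_clip_times_df(
    full_duration=10, clip_duration=3, clip_overlap=1, final_clip="remainder"
)
# clip_df has one row per clip with 'start_time' and 'end_time' columns
```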

opensoundscape.helpers.hex_to_time(s)

convert a hexadecimal, Unix time string to a datetime timestamp in UTC

Example usage:

```
import pytz
from opensoundscape.helpers import hex_to_time

# Get the UTC timestamp
t = hex_to_time('5F16A04E')

# Convert it to a desired timezone
my_timezone = pytz.timezone("US/Mountain")
t = t.astimezone(my_timezone)
```

Parameters:s (string) – hexadecimal Unix epoch time string, e.g. ‘5F16A04E’
Returns:datetime.datetime object representing the date and time in UTC
opensoundscape.helpers.inrange(x, r)

return True if x is in the range [r[0], r[1]] (inclusive)

opensoundscape.helpers.isNan(x)

check for nan by equating x to itself

opensoundscape.helpers.jitter(x, width, distribution='gaussian')

Jitter (add random noise to) each value of x

Parameters:
  • x – scalar, array, or nd-array of numeric type
  • width – multiplier for random variable (stdev for ‘gaussian’ or r for ‘uniform’)
  • distribution – ‘gaussian’ (default) or ‘uniform’ if ‘gaussian’: draw jitter from gaussian with mu = 0, std = width if ‘uniform’: draw jitter from uniform on [-width, width]
Returns:

x + random jitter

Return type:

jittered_x

opensoundscape.helpers.linear_scale(array, in_range=(0, 1), out_range=(0, 255))

Translate from range in_range to out_range

Inputs:
in_range: The starting range [default: (0, 1)]
out_range: The output range [default: (0, 255)]
Outputs:
new_array: A translated array
opensoundscape.helpers.make_clip_df(files, clip_duration, clip_overlap=0, final_clip=None)

generate df of fixed-length clip times for a set of files

Used internally to prepare a dataframe listing clips of longer audio files

This function creates a single dataframe with audio files as the index and columns: ‘start_time’, ‘end_time’. It will list clips of a fixed duration from the beginning to end of each audio file.

Parameters:
  • files – list of audio file paths
  • clip_duration (float) – see generate_clip_times_df
  • clip_overlap (float) – see generate_clip_times_df
  • final_clip (str) – see generate_clip_times_df
Returns:

clip_df: dataframe with columns ‘start_time’, ‘end_time’ and file paths as index
unsafe_samples: list of file paths that did not exist or failed to produce a valid list of clips

opensoundscape.helpers.min_max_scale(array, feature_range=(0, 1))

rescale values in an array linearly to feature_range

opensoundscape.helpers.overlap(r1, r2)

calculate the amount of overlap between two real-numbered ranges

opensoundscape.helpers.overlap_fraction(r1, r2)

calculate the fraction of r1 (low, high range) that overlaps with r2

opensoundscape.helpers.rescale_features(X, rescaling_vector=None)

rescale all features by dividing by the max value for each feature

optionally provide the rescaling vector (1xlen(X) np.array), so that you can rescale a new dataset consistently with an old one

returns rescaled feature set and rescaling vector

opensoundscape.helpers.run_command(cmd)

run a bash command with Popen, return response

opensoundscape.helpers.sigmoid(x)

sigmoid function

Taxa

a set of utilities for converting between scientific and common names of bird species in different naming systems (xeno-canto and BirdNET)

opensoundscape.taxa.bn_common_to_sci(common)

convert bird net common name (ignoring dashes, spaces, case) to scientific name as lowercase-hyphenated

opensoundscape.taxa.common_to_sci(common)

convert bird net common name (ignoring dashes, spaces, case) to scientific name as lowercase-hyphenated

opensoundscape.taxa.get_species_list()

list of scientific-names (lowercase-hyphenated) of species in the loaded species table

opensoundscape.taxa.sci_to_bn_common(scientific)

convert scientific name as lowercase-hyphenated to birdnet common name as lowercasenospaces

opensoundscape.taxa.sci_to_xc_common(scientific)

convert scientific name as lowercase-hyphenated to xeno-canto common name as lowercasenospaces

opensoundscape.taxa.xc_common_to_sci(common)

convert xeno-canto common name (ignoring dashes, spaces, case) to scientific name as lowercase-hyphenated

Localization

opensoundscape.localization.calc_speed_of_sound(temperature=20)

Calculate speed of sound in meters per second

Calculate speed of sound for a given temperature in Celsius (Humidity has a negligible effect on speed of sound and so this functionality is not implemented)

Parameters:temperature – ambient temperature in Celsius
Returns:the speed of sound in meters per second
opensoundscape.localization.localize(receiver_positions, arrival_times, temperature=20.0, invert_alg='gps', center=True, pseudo=True)

Perform TDOA localization on a sound event

Localize a sound event given relative arrival times at multiple receivers. This function implements a localization algorithm from the equations described in the class handout (“Global Positioning Systems”). Localization can be performed in a global coordinate system in meters (i.e., UTM), or relative to recorder positions in meters.

Parameters:
  • receiver_positions – a list of [x,y,z] positions for each receiver Positions should be in meters, e.g., the UTM coordinate system.
  • arrival_times – a list of TDOA times (onset times) for each recorder The times should be in seconds.
  • temperature – ambient temperature in Celsius
  • invert_alg – what inversion algorithm to use (only ‘gps’ is implemented)
  • center – whether to center recorders before computing localization result. Computes localization relative to centered plot, then translates solution back to original recorder locations. (For behavior of original Sound Finder, use True)
  • pseudo – whether to use the pseudorange error (True) or sum of squares discrepancy (False) to pick the solution to return (For behavior of original Sound Finder, use False. However, in initial tests, pseudorange error appears to perform better.)
Returns:

The solution (x, y, z, b) with the lower sum of squares discrepancy. b is the error in the pseudorange (distance to mics): b = c * delta_t, where delta_t is the time error.
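Example: a minimal sketch of localizing one sound event; the receiver positions (meters) and arrival times (seconds) are hypothetical:

```
from opensoundscape.localization import localize

receiver_positions = [[0, 0, 0], [10, 0, 0], [0, 10, 0], [10, 10, 0]]
arrival_times = [0.0, 0.01, 0.02, 0.025]

# returns the solution (x, y, z, b), where b is the pseudorange error
solution = localize(receiver_positions, arrival_times, temperature=20.0)
```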

opensoundscape.localization.lorentz_ip(u, v=None)

Compute Lorentz inner product of two vectors

For vectors u and v, the Lorentz inner product for 3-dimensional case is defined as

u[0]*v[0] + u[1]*v[1] + u[2]*v[2] - u[3]*v[3]

Or, for 2-dimensional case as

u[0]*v[0] + u[1]*v[1] - u[2]*v[2]
Parameters:
  • u – vector with shape either (3,) or (4,)
  • v – vector with same shape as u; if None (default), sets v = u
Returns:

value of Lorentz IP

Return type:

float

opensoundscape.localization.travel_time(source, receiver, speed_of_sound)

Calculate time required for sound to travel from a source to a receiver

Parameters:
  • source – cartesian position [x,y] or [x,y,z] of sound source
  • receiver – cartesian position [x,y] or [x,y,z] of sound receiver
  • speed_of_sound – speed of sound in m/s
Returns:

time in seconds for sound to travel from source to receiver