Train a CNN
Convolutional neural networks (CNNs) are popular tools for creating automated machine learning classifiers on images or image-like samples. By converting audio into a two-dimensional frequency vs. time representation such as a spectrogram, we can generate image-like samples that can be used to train CNNs.
This tutorial demonstrates the basic use of OpenSoundscape's preprocessors and cnn modules for training CNNs and making predictions with them.
Under the hood, OpenSoundscape uses PyTorch for machine learning tasks. By using the class opensoundscape.ml.cnn.CNN, you can train and predict with PyTorch's powerful CNN architectures in just a few lines of code.
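For orientation, here is a minimal sketch of that workflow; the label dataframes (train_df, valid_df) and the audio file name below are hypothetical placeholders, and each step is covered in detail later in this tutorial.
from opensoundscape import CNN
# create a model for 3-second clips and a single (hypothetical) class
model = CNN(architecture='resnet18', classes=['species_A'], sample_duration=3.0)
# fit the model to labeled clips (dataframes indexed by file, start_time, end_time)
model.train(train_df, valid_df, epochs=1)
# generate per-clip, per-class scores for a (hypothetical) audio file
scores = model.predict(['some_audio_file.wav'])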
Run this tutorial
This tutorial is more than a reference! It’s a Jupyter Notebook which you can run and modify on Google Colab or your own computer.
Link to tutorial | How to run tutorial
---|---
Open the tutorial in Google Colab | The link opens the tutorial in Google Colab. Uncomment the "installation" line in the first cell to install OpenSoundscape.
Download the notebook | The link downloads the tutorial file to your computer. Follow the Jupyter installation instructions, then open the tutorial file in Jupyter.
[1]:
# if this is a Google Colab notebook, install opensoundscape in the runtime environment
if 'google.colab' in str(get_ipython()):
    %pip install git+https://github.com/kitzeslab/opensoundscape@develop ipykernel==5.5.6 ipython==7.34.0 pillow==9.4.0
    num_workers=0
else:
    num_workers=4
Setup
Import needed packages
[2]:
# the cnn module provides classes for training/predicting with various types of CNNs
from opensoundscape import CNN
# other utilities and packages
import torch
import pandas as pd
import numpy as np
from pathlib import Path
from glob import glob
import random
import subprocess
import sklearn.model_selection
#set up plotting
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize']=[15,5] #for large visuals
%config InlineBackend.figure_format = 'retina'
Set random seeds
Set manual seeds for PyTorch and Python. These essentially "fix" the results of any stochastic steps in model training, ensuring that training results are reproducible. You probably don't want to do this when you actually train your model, but it's useful for debugging.
[3]:
torch.manual_seed(0)
random.seed(0)
np.random.seed(0)
Download files
Training a machine learning model requires some pre-labeled data. These data, in the form of audio recordings or spectrograms, are labeled with whether or not they contain the sound of the species of interest.
These data can be obtained from online databases such as Xeno-Canto.org, or by labeling one’s own ARU data using a program like Cornell’s Raven sound analysis software. In this example we are using a set of annotated avian soundscape recordings that were annotated using the software Raven Pro 1.6.4 (Bioacoustics Research Program 2022):
An annotated set of audio recordings of Eastern North American birds containing frequency, time, and species information. Lauren M. Chronister, Tessa A. Rhinehart, Aidan Place, Justin Kitzes. https://doi.org/10.1002/ecy.3329
These are the same data that are used by the annotation and preprocessing tutorials, so you can skip this step if you’ve already downloaded them there.
Download example files
Download a set of example audio files and Raven annotations:
Option 1: run the cell below.
If you get a 403 error, DataDryad suspects you are a bot. Use Option 2.
Option 2: download the files manually.
Download and unzip both annotation_Files.zip and mp3_Files.zip from https://datadryad.org/stash/dataset/doi:10.5061/dryad.d2547d81z
Move the unzipped contents into a subfolder of the current folder called ./annotated_data/
[4]:
# Note: the "!" preceding each line below allows us to run bash commands in a Jupyter notebook
# If you are not running this code in a notebook, input these commands into your terminal instead
!wget -O annotation_Files.zip https://datadryad.org/stash/downloads/file_stream/641805;
!wget -O mp3_Files.zip https://datadryad.org/stash/downloads/file_stream/641807;
!mkdir annotated_data;
!unzip annotation_Files.zip -d ./annotated_data/annotation_Files;
!unzip mp3_Files.zip -d ./annotated_data/mp3_Files;
--2024-10-08 13:17:47--  https://datadryad.org/stash/downloads/file_stream/641805
Resolving datadryad.org (datadryad.org)... 52.25.192.224, 34.211.245.249, 35.82.66.187, ...
Connecting to datadryad.org (datadryad.org)|52.25.192.224|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2024-10-08 13:17:49 ERROR 403: Forbidden.
--2024-10-08 13:17:50--  https://datadryad.org/stash/downloads/file_stream/641807
Resolving datadryad.org (datadryad.org)... 34.211.245.249, 35.82.66.187, 52.36.117.254, ...
Connecting to datadryad.org (datadryad.org)|34.211.245.249|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2024-10-08 13:17:50 ERROR 403: Forbidden.
mkdir: annotated_data: File exists
[unzip output omitted: because the downloads above failed with 403 errors, unzip could not find a valid zip archive in annotation_Files.zip or mp3_Files.zip; use Option 2 above in this case]
Prepare audio data
To prepare audio data for machine learning, we need to convert our annotated data into clip-level labels.
These steps are covered in depth in other tutorials, so we’ll just set our clip labels up quickly for this example.
First, get exactly matched lists of audio files and their corresponding selection files:
[5]:
# Set the current directory to where the dataset is downloaded
dataset_path = Path("./annotated_data/")
# Make a list of all of the selection table files
selection_files = glob(f"{dataset_path}/annotation_Files/*/*.txt")
# Create a list of audio files, one corresponding to each Raven file
# (Audio files have the same names as selection files with a different extension)
audio_files = [f.replace('annotation_Files','mp3_Files').replace('.Table.1.selections.txt','.mp3') for f in selection_files]
Next, convert the selection files and audio files to a BoxedAnnotations object, which contains the time, frequency, and label information for all annotations for every recording in the dataset.
[9]:
from opensoundscape.annotations import BoxedAnnotations
# Create a dataframe of annotations
annotations = BoxedAnnotations.from_raven_files(
    raven_files=selection_files,
    audio_files=audio_files,
    annotation_column='Species')
/Users/SML161/opensoundscape/opensoundscape/annotations.py:300: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
all_annotations_df = pd.concat(all_file_dfs).reset_index(drop=True)
[10]:
%%capture
# Parameters to use for label creation
clip_duration = 3
clip_overlap = 0
min_label_overlap = 0.25
species_of_interest = ["NOCA", "EATO", "SCTA", "BAWW", "BCCH", "AMCR", "NOFL"]
# Create dataframe of one-hot labels
clip_labels = annotations.clip_labels(
    clip_duration = clip_duration,
    clip_overlap = clip_overlap,
    min_label_overlap = min_label_overlap,
    class_subset = species_of_interest # You can comment this line out if you want to include all species.
)
[11]:
clip_labels.head()
[11]:
file | start_time | end_time | NOCA | EATO | SCTA | BAWW | BCCH | AMCR | NOFL
---|---|---|---|---|---|---|---|---|---
annotated_data/mp3_Files/Recording_1/Recording_1_Segment_31.mp3 | 0.0 | 3.0 | False | True | False | False | False | False | False
 | 3.0 | 6.0 | False | False | False | False | False | False | False
 | 6.0 | 9.0 | False | True | False | False | False | False | False
 | 9.0 | 12.0 | False | False | False | False | False | False | False
 | 12.0 | 15.0 | False | False | False | False | False | True | False
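It is also worth checking how many positive clip labels each class has; since the label columns are booleans, summing each column gives a per-class count. Classes with very few positives are hard to learn, which motivates the resampling step later in this tutorial.
# count positive (True) clip-level labels for each species
clip_labels.sum()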
Create train, validation, and test datasets
To train and test a model, we use three datasets:
The training dataset is used to fit your machine learning model to the audio data.
The validation dataset is a held-out dataset that is used to select hyperparameters (e.g. how many epochs to train for) during training.
The test dataset is another held-out dataset that we use to check how the model performs on data that were not available at all during training.
While both the training and validation datasets are used while training the model, the test dataset is never touched until the model is fully trained and completed.
The training and validation datasets may be gathered from the same source. In contrast, the test dataset is often gathered from a different source to assess whether the model's performance generalizes to a real-world problem. For example, training and validation data might be drawn from an online database like Xeno-Canto, whereas the test data come from your own field recordings.
Create a test dataset
We’ll separate the test dataset first. For a good assessment of the model’s generalization, we want the test set to be independent of the training and validation datasets. For example, we don’t want to use clips from the same source recording in the training dataset and the test dataset.
For this example, we'll use the recordings in the folders Recording_1, Recording_2, and Recording_3 as our training and validation data, and use the recordings in folder Recording_4 as our test data.
[12]:
# Select all files from Recording_4 as a test set
mask = clip_labels.reset_index()['file'].apply(lambda x: 'Recording_4' in x).values
test_set = clip_labels[mask]
# All other files will be used as a training set
train_and_val_set = clip_labels.drop(test_set.index)
# Save .csv tables of the training and validation sets to keep a record of them
train_and_val_set.to_csv("./annotated_data/train_and_val_set.csv")
test_set.to_csv("./annotated_data/test_set.csv")
If you wanted, you could later reload the training and test sets from these saved CSV files:
[13]:
train_and_val_set = pd.read_csv('./annotated_data/train_and_val_set.csv',index_col=[0,1,2])
test_set = pd.read_csv('./annotated_data/test_set.csv',index_col=[0,1,2])
Split training and validation datasets
Now, separate the remaining non-test data into training and validation datasets.
The idea of keeping a separate validation dataset is that, throughout training, we can ‘peek’ at the performance on the validation set to choose hyperparameters. (This is in contrast to the test dataset, which we will not look at until we’ve finished training our model.)
One important hyperparameter is the number of epochs to train for, which we choose to prevent overfitting. Each epoch includes one round of fitting on each training sample.
If a model's performance on the training dataset continues to improve as it trains, but its performance on the validation dataset plateaus, this could indicate that the model is overfitting to the training dataset: learning information specific to those particular samples instead of gaining the ability to generalize to new data.
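For illustration, the sketch below plots hypothetical per-epoch scores for a model that keeps improving on the training set while its validation score levels off; in practice you would inspect the metrics logged during training (e.g. on WandB, set up below).
# hypothetical per-epoch performance values, purely for illustration
train_scores = [0.60, 0.75, 0.85, 0.92, 0.96, 0.98]
valid_scores = [0.58, 0.70, 0.76, 0.78, 0.78, 0.77]
plt.plot(train_scores, label='training')
plt.plot(valid_scores, label='validation')
plt.xlabel('epoch')
plt.ylabel('score')
plt.legend()
plt.show()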
[14]:
# Split our training data into training and validation sets
train_df, valid_df = sklearn.model_selection.train_test_split(train_and_val_set, test_size=0.1, random_state=0)
[15]:
train_df.to_csv("./annotated_data/train_set.csv")
valid_df.to_csv("./annotated_data/valid_set.csv")
Resample data for even class representation
Before training, we will balance the number of samples of each class in the training set. This helps the model learn all of the classes, rather than paying too much attention to the classes with the most labeled annotations.
[16]:
from opensoundscape.data_selection import resample
# upsample (repeat samples) so that all classes have 800 samples
balanced_train_df = resample(train_df,n_samples_per_class=800,random_state=0)
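To see the effect of resampling, you can compare the number of positive labels per class before and after, for example:
# positive labels per class before and after resampling
print(train_df.sum())
print(balanced_train_df.sum())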
Set up model
Now we create a model object. We have to select several parameters when creating this object: its architecture, classes, and sample_duration.
Some additional parameters can also be changed at this step, such as the preprocessor used to create spectrograms and the shape of the spectrograms.
For more detail on this step, see the “Customize CNN training” tutorial.
Create CNN object
Now, create a CNN object, specifying an architecture, the classes in the label dataframe above, and the same sample duration as we selected above.
The first time you run this script for a particular architecture, OpenSoundscape will download the pretrained weights for that architecture.
[17]:
# Create a CNN object designed to recognize 3-second samples
from opensoundscape import CNN
# Use resnet34 architecture
architecture = 'resnet34'
# Can use this code to get your classes, if needed
class_list = list(train_df.columns)
model = CNN(
    architecture = architecture,
    classes = class_list,
    sample_duration = clip_duration #3s, selected above
)
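The CNN object also has a preprocessor attribute that defines how audio clips are converted into spectrogram samples; you can inspect it to see how samples will be preprocessed (shown here only for orientation; see the "Customize CNN training" tutorial before modifying it).
# inspect the preprocessor that converts audio clips into model inputs
print(model.preprocessor)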
Check model device
If a GPU is available on your computer, the CNN object automatically selects it to accelerate performance. You can override .device to use a specific device such as cpu or cuda:3.
[18]:
print(f'model.device is: {model.device}')
model.device is: mps
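For example, to override the automatically selected device and run on the CPU instead (assuming you want to avoid the GPU for debugging or compatibility reasons):
# override the automatically chosen device; any valid torch device works, e.g. 'cuda:3'
model.device = torch.device('cpu')
print(f'model.device is now: {model.device}')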
Set up WandB model logging
While this step is optional, it is very helpful for model training. In this step, we set up model logging on a service called Weights & Biases (AKA WandB).
Weights & Biases is a free website you can use to monitor model training. It is integrated with OpenSoundscape to include helpful functions such as checking on your model’s training progress in real time, visualizing the spectrograms created for training your model, comparing multiple tries at training the same model, and more. For more information, check out this blog post.
The instructions below will help you set up wandb logging:
Create an account on the Weights and Biases website.
The first time you use wandb, you'll need to run wandb.login() in Python or wandb login on the command line, then enter the API key from your settings page.
In a Python script where you want to log model training, use wandb.init() as demonstrated below. The "Entity" or team option allows runs and projects to be shared across members of a group, making it easy to collaborate and see the progress of other team members' runs.
As training progresses, performance metrics will be plotted to the wandb logging platform and visible on this run’s web page. For example, this wandb web page shows the content logged to wandb when this notebook was run by the Kitzes Lab. By default, OpenSoundscape + WandB integration creates several pages with information about the model:
Overview: hyperparameters, run description, and hardware available during the run
Charts: “Samples” panel with audio and images of preprocessed samples (useful for checking that your preprocessing performs as expected and your labels are correct)
Charts: graphs of each class’s performance metrics over training time
Model: summary of model architecture
Logs: standard output of training script
System: computational performance metrics including memory, CPU use, etc
When training several models and comparing performance, the “Project” page of WandB provides comparisons of metrics and hyperparameters across training runs.
[19]:
import wandb
try:
    wandb.login()
    wandb_session = wandb.init(
        entity='kitzeslab', #replace with your entity/group name
        project='OpenSoundscape tutorials',
        name='Train CNN',
    )
except: #if wandb.init fails, don't use wandb logging
    print('failed to create wandb session. wandb session will be None')
    wandb_session = None
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: samlapp (kitzeslab). Use `wandb login --relogin` to force relogin
/Users/SML161/opensoundscape/docs/tutorials/wandb/run-20241008_131926-701x1t52
Train the CNN
Finally, train the CNN for two epochs. Typically, we would train the model for more than two epochs, but because training is slow and much better done outside of a Jupyter Notebook, we include just a short demonstration of training here.
Each epoch is one pass-through of all of the samples in the training dataset, plus running predictions on the validation dataset.
Each epoch is composed of smaller groups of samples called batches. The machine learning model predicts on every sample in the batch, then the model weights are updated based on those samples. Larger batches can increase training speed, but require more memory. If you get a memory error, try reducing the batch size.
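As a quick sanity check, the number of batches per epoch is simply the number of training samples divided by the batch size, rounded up; you can compare this with the batch count printed in the training log below.
# batches per epoch = ceil(number of training samples / batch size)
batch_size = 64
batches_per_epoch = int(np.ceil(len(balanced_train_df) / batch_size))
print(f'{len(balanced_train_df)} samples / batch size {batch_size} -> {batches_per_epoch} batches per epoch')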
We use default training parameters, but many aspects of CNN training can be customized (see the “Customize CNN training” tutorial for examples).
[20]:
checkpoint_folder = Path("model_training_checkpoints")
checkpoint_folder.mkdir(exist_ok=True)
[21]:
#%%capture --no-stdout --no-display
# Uncomment the line above to silence outputs from this cell
model.train(
    balanced_train_df,
    valid_df,
    epochs = 2,
    batch_size = 64,
    log_interval = 100, #log progress every 100 batches
    num_workers = num_workers, #parallelized cpu tasks for preprocessing
    wandb_session = wandb_session,
    save_interval = 10, #save checkpoint every 10 epochs
    save_path = checkpoint_folder #location to save checkpoints
)
Training Epoch 0
/Users/SML161/miniconda3/envs/opso_dev/lib/python3.9/site-packages/torchmetrics/functional/classification/precision_recall_curve.py:798: UserWarning: MPS: nonzero op is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/Indexing.mm:334.)
unique_mapping = unique_mapping[unique_mapping >= 0]
/Users/SML161/miniconda3/envs/opso_dev/lib/python3.9/site-packages/torchmetrics/functional/classification/average_precision.py:308: UserWarning: MPS: no support for int64 for sum_out_mps, downcasting to a smaller data type (int32/float32). Native support for int64 has been added in macOS 13.3. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/ReduceOps.mm:157.)
weights=(state[1] == 1).sum(dim=0).float() if thresholds is None else state[0][:, 1, :].sum(-1),
Epoch: 0 [batch 0/88, 0.00%]
Epoch Running Average Loss: 0.718
Most Recent Batch Loss: 0.718
/Users/SML161/miniconda3/envs/opso_dev/lib/python3.9/site-packages/torchmetrics/utilities/prints.py:43: UserWarning: Average precision score for one or more classes was `nan`. Ignoring these classes in macro-average
warnings.warn(*args, **kwargs) # noqa: B028
Validation.
Training Epoch 1
Epoch: 1 [batch 0/88, 0.00%]
Epoch Running Average Loss: 0.385
Most Recent Batch Loss: 0.385
Validation.
Best Model Appears at Epoch 1 with Validation score 0.889.
Once this is finished running, you have trained the CNN.
To generate predictions on audio files using the CNN, use the .predict() method of the CNN object. Here, we apply a sigmoid activation layer, which maps the CNN's outputs (which can be any real number) to the 0-1 range.
[22]:
scores_df = model.predict(valid_df.head(),activation_layer='sigmoid')
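Because the sigmoid maps every score into the 0-1 range, one simple way to turn the scores into binary present/absent predictions is to apply a threshold; 0.5 is used below purely as an example, and in practice the threshold should be chosen for your application.
# convert continuous scores to binary predictions with an example threshold of 0.5
binary_predictions = scores_df >= 0.5
binary_predictions.head()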
We don't expect this CNN to actually be good at classifying sounds, since we only trained it with a few examples and for a couple of epochs. As a starting point for training a useful model, we'd want to train with hundreds of examples per class for 10-100 epochs.
For guidance on how to use machine learning classifiers, see the Classifiers 101 Guide on opensoundscape.org and the tutorial on predicting with pre-trained CNNs.
Clean up: Run the following cell to delete the files created in this tutorial. However, these files are used in other tutorials, so you may wish not to delete them just yet.
[23]:
import shutil
# uncomment to remove the training files
# shutil.rmtree('./annotated_data')
shutil.rmtree('./wandb')
shutil.rmtree('./model_training_checkpoints')
try:
    Path('annotation_Files.zip').unlink()
except:
    pass
try:
    Path('mp3_Files.zip').unlink()
except:
    pass