Train a CNN

Convolutional neural networks (CNNs) are popular tools for creating automated machine learning classifiers on images or image-like samples. By converting audio into a two-dimensional frequency vs. time representation such as a spectrogram, we can generate image-like samples that can be used to train CNNs.
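
For illustration, here is a minimal sketch of that conversion using OpenSoundscape’s Audio and Spectrogram classes (the file name birdsong.wav is a placeholder for any audio file; the CNN class used below performs this preprocessing for you automatically):

from opensoundscape import Audio, Spectrogram

# load an audio file and compute its spectrogram (a frequency vs. time image)
audio = Audio.from_file("birdsong.wav")
spectrogram = Spectrogram.from_audio(audio)

# visualize the image-like sample that a CNN would train on
spectrogram.plot()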

This tutorial demonstrates the basic use of OpenSoundscape’s preprocessors and cnn modules for training CNNs and making predictions using CNNs.

Under the hood, OpenSoundscape uses PyTorch for machine learning tasks. By using the class opensoundscape.ml.cnn.CNN, you can train and predict with PyTorch’s powerful CNN architectures in just a few lines of code.

Run this tutorial

This tutorial is more than a reference! It’s a Jupyter Notebook which you can run and modify on Google Colab or your own computer.

  • Open In Colab: The link opens the tutorial in Google Colab. Uncomment the “installation” line in the first cell to install OpenSoundscape.

  • Download via DownGit: The link downloads the tutorial file to your computer. Follow the Jupyter installation instructions, then open the tutorial file in Jupyter.

[1]:
# if this is a Google Colab notebook, install opensoundscape in the runtime environment
if 'google.colab' in str(get_ipython()):
  %pip install opensoundscape

Setup

Import needed packages

[2]:
# the cnn module provides classes for training/predicting with various types of CNNs
from opensoundscape import CNN

#other utilities and packages
import torch
import pandas as pd
from pathlib import Path
import numpy as np
import random
import subprocess
from glob import glob
import sklearn.model_selection #explicitly import the submodule used for train/test splitting below

#set up plotting
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize']=[15,5] #for large visuals
%config InlineBackend.figure_format = 'retina'

Set random seeds

Set manual seeds for PyTorch, Python, and NumPy. These essentially “fix” the results of any stochastic steps in model training, ensuring that training results are reproducible. You probably don’t want to do this when you actually train your model, but it’s useful for debugging.

[3]:
torch.manual_seed(0)
random.seed(0)
np.random.seed(0)

Download files

Training a machine learning model requires some pre-labeled data. These data, in the form of audio recordings or spectrograms, are labeled with whether or not they contain the sound of the species of interest.

These data can be obtained from online databases such as Xeno-Canto.org, or by labeling one’s own autonomous recording unit (ARU) data using a program like Cornell’s Raven sound analysis software. In this example, we use a set of avian soundscape recordings that were annotated using the software Raven Pro 1.6.4 (Bioacoustics Research Program 2022):

An annotated set of audio recordings of Eastern North American birds containing frequency, time, and species information. Lauren M. Chronister, Tessa A. Rhinehart, Aidan Place, Justin Kitzes. https://doi.org/10.1002/ecy.3329

These are the same data that are used by the annotation and preprocessing tutorials, so you can skip this step if you’ve already downloaded them there.

Download the datasets to your current working directory and unzip them. You can do so by running the cell below, or by downloading and unzipping the files manually.

[4]:
%%capture
# Note: the "!" preceding each line below allows us to run bash commands in a Jupyter notebook
# If you are not running this code in a notebook, input these commands into your terminal instead
!wget -O annotation_Files.zip https://datadryad.org/stash/downloads/file_stream/641805;
!wget -O mp3_Files.zip https://datadryad.org/stash/downloads/file_stream/641807;
!mkdir annotated_data;
!unzip annotation_Files.zip -d ./annotated_data/Annotation_Files;
!unzip mp3_Files.zip -d ./annotated_data/Recordings;

Prepare audio data

To prepare audio data for machine learning, we need to convert our annotated data into clip-level labels.

These steps are covered in depth in other tutorials, so we’ll just set our clip labels up quickly for this example.

First, get exactly matched lists of audio files and their corresponding selection files:

[4]:
# Path to the directory where the dataset was downloaded
dataset_path = Path("./annotated_data/")

# Make a list of all of the selection table files
selection_files = glob(f"{dataset_path}/Annotation_Files/*/*.txt")

# Create a list of audio files, one corresponding to each Raven file
# (Audio files have the same names as selection files with a different extension)
audio_files = [f.replace('Annotation_Files','Recordings').replace('.Table.1.selections.txt','.mp3') for f in selection_files]

Next, convert the selection files and audio files to a BoxedAnnotations object, which contains the time, frequency, and label information for all annotations for every recording in the dataset.

[5]:
from opensoundscape.annotations import BoxedAnnotations
# Create a BoxedAnnotations object from the selection files and their corresponding audio files
annotations = BoxedAnnotations.from_raven_files(
    selection_files,
    audio_files)
[6]:
%%capture
# Parameters to use for label creation
clip_duration = 3
clip_overlap = 0
min_label_overlap = 0.25
species_of_interest = ["NOCA", "EATO", "SCTA", "BAWW", "BCCH", "AMCR", "NOFL"]

# Create dataframe of one-hot labels
clip_labels = annotations.one_hot_clip_labels(
    clip_duration = clip_duration,
    clip_overlap = clip_overlap,
    min_label_overlap = min_label_overlap,
    class_subset = species_of_interest # You can comment this line out if you want to include all species.
)
[7]:
clip_labels.head()
[7]:
                                                                                       NOCA  EATO  SCTA  BAWW  BCCH  AMCR  NOFL
file                                                              start_time end_time
annotated_data/Recordings/Recording_4/Recording_4_Segment_21.mp3  0.0        3.0       0.0   1.0   0.0   0.0   0.0   0.0   0.0
                                                                  3.0        6.0       0.0   1.0   0.0   0.0   0.0   0.0   0.0
                                                                  6.0        9.0       0.0   1.0   0.0   0.0   0.0   0.0   0.0
                                                                  9.0        12.0      0.0   0.0   1.0   0.0   0.0   0.0   0.0
                                                                  12.0       15.0      0.0   1.0   0.0   0.0   0.0   0.0   0.0

Create train, validation, and test datasets

To train and test a model, we use three datasets:

  • The training dataset is used to fit the machine learning model to the audio data.

  • The validation dataset is a held-out dataset used to select hyperparameters (e.g., how many epochs to train for) during training.

  • The test dataset is another held-out dataset used to check how the model performs on data that were not available at all during training.

While both the training and validation datasets are used during model training, the test dataset is not touched until the model is fully trained.

The training and validation datasets may be gathered from the same source as each other. In contrast, the test dataset is often gathered from a different source to assess whether the model’s performance generalizes to a real-world problem. For example, training and validation data might be drawn from an online database like Xeno-Canto, whereas the test data come from your own field recordings.

Create a test dataset

We’ll separate the test dataset first. For a good assessment of the model’s generalization, we want the test set to be independent of the training and validation datasets. For example, we don’t want to use clips from the same source recording in the training dataset and the test dataset.

For this example, we’ll use the recordings in the folders Recording_1, Recording_2 and Recording_3 as our training and validation data, and use the recordings in folder Recording_4 as our test data.

[8]:
# Select all files from Recording_4 as a test set
mask = clip_labels.reset_index()['file'].apply(lambda x: 'Recording_4' in x).values
test_set = clip_labels[mask]

# All other files will be used for training and validation
train_and_val_set = clip_labels.drop(test_set.index)

# Save .csv tables of the training and validation sets to keep a record of them
train_and_val_set.to_csv("./annotated_data/train_and_val_set.csv")
test_set.to_csv("./annotated_data/test_set.csv")

If you want, you can reload the training/validation and test sets from these saved CSV files:

[9]:
train_and_val_set = pd.read_csv('./annotated_data/train_and_val_set.csv',index_col=[0,1,2])
test_set = pd.read_csv('./annotated_data/test_set.csv',index_col=[0,1,2])

Split training and validation datasets

Now, separate the remaining non-test data into training and validation datasets.

The idea of keeping a separate validation dataset is that, throughout training, we can ‘peek’ at the performance on the validation set to choose hyperparameters. (This is in contrast to the test dataset, which we will not look at until we’ve finished training our model.)

One important hyperparameter is the number of epochs to train for, chosen to prevent overfitting. Each epoch includes one round of fitting on each training sample.

If a model’s performance on the training dataset continues to improve as it trains, but its performance on the validation dataset plateaus, this could indicate that the model is overfitting to the training dataset: learning information specific to those particular samples instead of gaining the ability to generalize to new data.

[10]:
# Split our training data into training and validation sets
train_df, valid_df = sklearn.model_selection.train_test_split(train_and_val_set, test_size=0.1, random_state=0)
[11]:
train_df.to_csv("./annotated_data/train_set.csv")
valid_df.to_csv("./annotated_data/valid_set.csv")

Resample data for even class representation

Before training, we will balance the number of samples of each class in the training set. This helps the model learn all of the classes, rather than paying too much attention to the classes with the most labeled annotations.

[12]:
from opensoundscape.data_selection import resample

# upsample (repeat samples) so that all classes have 800 samples
balanced_train_df = resample(train_df,n_samples_per_class=800,random_state=0)

Set up model

Now we create a model object. We have to select several parameters when creating this object: its architecture, classes, and sample_duration.

Some additional parameters can also be changed at this step, such as the preprocessor used to create spectrograms and the shape of the spectrograms.

For more detail on this step, see the “Customize CNN training” tutorial.

Create CNN object

Now, create a CNN object with a chosen architecture, the classes from the label dataframe we created above, and the same sample duration we selected above.

The first time you run this script for a particular architecture, OpenSoundscape will download the pretrained weights for that architecture.

[13]:
# Create a CNN object designed to recognize 3-second samples
from opensoundscape import CNN

# Use resnet34 architecture
architecture = 'resnet34'

# Get the list of classes from the columns of the label dataframe
class_list = list(train_df.columns)

model = CNN(
    architecture = architecture,
    classes = class_list,
    sample_duration = clip_duration #3s, selected above
)
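
As mentioned above, the preprocessing can also be inspected and adjusted after the model object is created. A minimal sketch, assuming only the model.preprocessor attribute and its pipeline of preprocessing steps described in the preprocessing tutorial (see the “Customize CNN training” tutorial for how to modify them):

# the CNN object holds a preprocessor that turns each audio clip into a spectrogram tensor;
# its pipeline lists the preprocessing steps applied to every sample
print(model.preprocessor.pipeline)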

Check model device

If a GPU is available on your computer, the CNN object automatically selects it to accelerate performance. You can override .device to use a specific device such as cpu or cuda:3.

[14]:
print(f'model.device is: {model.device}')
model.device is: mps
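
For example, you can override the automatically selected device by assigning a torch.device. A minimal sketch (kept commented out, since overriding the device is optional and affects all subsequent training):

import torch

# uncomment to override the automatically selected device
# ('cuda:3' would select the fourth CUDA GPU; 'cpu' forces the CPU)
# model.device = torch.device('cpu')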

Set up WandB model logging

While this step is optional, it is very helpful for model training. In this step, we set up model logging on a service called Weights & Biases (AKA WandB).

Weights & Biases is a free website you can use to monitor model training. It is integrated with OpenSoundscape to include helpful functions such as checking on your model’s training progress in real time, visualizing the spectrograms created for training your model, comparing multiple tries at training the same model, and more. For more information, check out this blog post.

The instructions below will help you set up wandb logging:

  • Create an account on the Weights and Biases website.

  • The first time you use wandb, you’ll need to run wandb.login() in Python or wandb login on the command line, then enter the API key from your settings page

  • In a Python script where you want to log model training, use wandb.init() as demonstrated below. The “Entity” or team option allows runs and projects to be shared across members in a group, making it easy to collaborate and see progress of other team members’ runs.

As training progresses, performance metrics will be plotted to the wandb logging platform and visible on this run’s web page. For example, this wandb web page shows the content logged to wandb when this notebook was run by the Kitzes Lab. By default, OpenSoundscape + WandB integration creates several pages with information about the model:

  • Overview: hyperparameters, run description, and hardware available during the run

  • Charts: “Samples” panel with audio and images of preprocessed samples (useful for checking that your preprocessing performs as expected and your labels are correct)

  • Charts: graphs of each class’s performance metrics over training time

  • Model: summary of model architecture

  • Logs: standard output of training script

  • System: computational performance metrics including memory, CPU use, etc

When training several models and comparing performance, the “Project” page of WandB provides comparisons of metrics and hyperparameters across training runs.

[15]:
import wandb
try:
    wandb.login()
    wandb_session = wandb.init(
        entity='kitzeslab', #replace with your entity/group name
        project='OpenSoundscape tutorials',
        name='Train CNN',
    )
except: #if wandb.init fails, don't use wandb logging
    print('failed to create wandb session. wandb session will be None')
    wandb_session = None
Tracking run with wandb version 0.13.11
Syncing run Train CNN to Weights & Biases

Train the CNN

Finally, train the CNN for two epochs. Typically we would train the model for more than two epochs, but because training is slow and much better done outside of a Jupyter Notebook, we include only a short demonstration of training here.

Each epoch is one pass-through of all of the samples in the training dataset, plus running predictions on the validation dataset.

Each epoch is composed of smaller groups of samples called batches. The machine learning model predicts on every sample in the batch, then the model weights are updated based on those samples. Larger batches can increase training speed, but require more memory. If you get a memory error, try reducing the batch size.

We use default training parameters, but many aspects of CNN training can be customized (see the “Customize CNN training” tutorial for examples).

[16]:
checkpoint_folder = Path("model_training_checkpoints")
checkpoint_folder.mkdir(exist_ok=True)

Note: training on mps (Apple Silicon GPU) requires PyTorch >= 2.1.0. If you have an older version of PyTorch, uncomment the code in the cell below to move the model to the CPU before training.

[34]:
# Uncomment to fall back to the CPU, e.g., if your PyTorch version does not support training on mps
# if model.device == torch.device('mps'):
#     model.device = torch.device('cpu')
[17]:
#%%capture --no-stdout --no-display
# Uncomment the line above to silence outputs from this cell

model.train(
    balanced_train_df,
    valid_df,
    epochs = 2,
    batch_size = 64,
    log_interval = 100, #log progress every 100 batches
    num_workers = 4, #4 parallelized cpu tasks for preprocessing
    wandb_session = wandb_session,
    save_interval = 10, #save checkpoint every 10 epochs
    save_path = checkpoint_folder #location to save checkpoints
)

Training Epoch 0
Epoch: 0 [batch 0/88, 0.00%]
        DistLoss: 0.727
Metrics:
        MAP: 0.477

Validation.
Metrics:
        MAP: 0.513

Training Epoch 1
Epoch: 1 [batch 0/88, 0.00%]
        DistLoss: 0.391
Metrics:
        MAP: 0.773

Validation.
Metrics:
        MAP: 0.681

Best Model Appears at Epoch 1 with Validation score 0.681.

Once this is finished running, you have trained the CNN.
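
To use the trained model, you can reload a saved checkpoint and generate predictions. A minimal sketch, assuming the checkpoint folder created above contains a file named best.model (check the folder for the exact file names, and see the prediction tutorial for more detail):

from glob import glob
from opensoundscape.ml.cnn import load_model

# reload the best-scoring checkpoint saved during training
model = load_model('./model_training_checkpoints/best.model')

# generate per-clip scores for a few of the held-out test recordings
test_files = glob('./annotated_data/Recordings/Recording_4/*.mp3')[:3]
scores = model.predict(test_files, batch_size=64, num_workers=4)
scores.head()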

Clean up: Run the following cell to delete the files created in this tutorial. Note that these files are also used by other tutorials, so you may want to keep them for now.

[19]:
import shutil
shutil.rmtree('./annotated_data')
shutil.rmtree('./wandb')
shutil.rmtree('./model_training_checkpoints')
Path('annotation_Files.zip').unlink()
Path('mp3_Files.zip').unlink()