
Figure 4. Convolutional and recurrent neural networks. a: Convolutional neural network for recognizing endangered North Atlantic right whale (Eubalaena glacialis) upcalls. Spectrograms were presented to the network, which learned convolutional filters representing examples of both upcall-present and upcall-absent spectrogram patches. Each filter is convolved with the input to create an output that has high values in areas that are similar to the filter. Max pooling takes the maximal value of the convolutional output over a small area, effectively downsampling the convolutional output and decreasing the importance of exact position within the spectrogram. In this network, two convolutional and two pooling layers are followed by a traditional multilayer feedforward network that classifies the representation of the input spectrogram extracted by the convolutional layers. Adapted from Shiu et al., 2020, with permission. b: Example of using recurrent networks to exploit context. The spectrogram shows the 20-Hz song of a fin whale (Balaenoptera physalus) in the presence of heavy shipping noise. The annotations by a human analyst are shown beneath, followed by those of a convolutional neural network (CNN) and a hybrid CNN with a recurrence layer. The CNN misses most song notes under these challenging conditions. The hybrid network has learned the song pattern and can better pick up weak notes of the song.
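To make the architecture described in panel a concrete, the following PyTorch sketch stacks two convolution-plus-max-pooling stages over a spectrogram patch and feeds the flattened result to a small feedforward classifier. This is an illustration only, not the network of Shiu et al. (2020); the input size, channel counts, and layer widths are placeholder values.

```python
import torch
import torch.nn as nn

class UpcallCNN(nn.Module):
    """Two conv + max-pool stages followed by a feedforward classifier.
    All sizes are illustrative placeholders."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),  # learned spectrogram filters
            nn.ReLU(),
            nn.MaxPool2d(2),  # downsample; reduces sensitivity to exact position
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64),  # assumes 128x128 input spectrograms
            nn.ReLU(),
            nn.Linear(64, n_classes),     # upcall present vs. absent
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One batch of 8 single-channel 128x128 spectrogram patches.
logits = UpcallCNN()(torch.randn(8, 1, 128, 128))
print(logits.shape)  # torch.Size([8, 2])
```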
The first component of error is bias: learners that are too simple to represent more complex boundaries are inherently unable to model the separating boundary correctly.
The second component of error is variance. Variance reflects error that is due to the specific training set: learners that are very sensitive to small changes in the training data have high variance. More complex learners tend to have higher variance, and regularization strategies are one way to mitigate this.
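As a minimal sketch of regularization (not a method from the article; the data and the penalty strength alpha are made up for illustration), scikit-learn's Ridge regression penalizes large coefficients, trading a little bias for lower variance:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 20))            # few noisy samples, many features
y = X[:, 0] + 0.1 * rng.normal(size=30)  # only the first feature matters

# The unregularized fit chases noise in the irrelevant features;
# the ridge penalty (alpha) shrinks those coefficients toward zero.
plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)
print(np.abs(plain.coef_[1:]).mean(), np.abs(ridge.coef_[1:]).mean())
```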
A common method of reducing variance error is to train multiple classifiers and make a decision based on some function of their individual decisions, such as a vote. This is an effective way to reduce variance and is the basis of ensemble learning methods such as random forest (Breiman, 2001). A random forest makes decisions based on a set of decision trees, classifiers that make a series of branching decisions that depend on the values of features, much like the popular children's game of 20 questions:
“Are you thinking of an animal?”/“Yes”/“Is it large?”
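As a hedged sketch of this idea (the dataset and parameters are placeholders, not drawn from the article), scikit-learn's RandomForestClassifier aggregates the votes of many decision trees, each fit on a bootstrap sample of the training data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for acoustic feature vectors and class labels.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 100 trees, each trained on a bootstrap sample with random feature
# subsets; the forest's prediction is the majority vote of the trees.
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(f"test accuracy: {forest.score(X_test, y_test):.2f}")
```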
Evaluating Learning
One of the goals of researchers using machine learning algorithms to solve applied problems is to ensure that the algorithms are actually useful in novel environments. As such, one needs to take care when evaluating how well an algorithm performs.
Unsupervised Learning Metrics
In unsupervised learning, there are intrinsic and extrinsic measurements of performance. Intrinsic measurements on clustered data examine the quality of data partitions and usually use some variant of measuring the similarity within a cluster versus the similarity between clusters. The silhouette algorithm is a popular measure that has been demonstrated to correlate well with human intuition on two-dimensional clustering tasks (Lewis et al., 2012). In contrast, extrinsic measurements examine the similarity between partitionings. This can be done to examine
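As a minimal sketch of an intrinsic measure (using scikit-learn, which is not necessarily the tooling of the cited work), the silhouette score compares each point's average distance to its own cluster with its distance to the nearest other cluster, yielding a value between -1 and 1:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

# Synthetic two-dimensional data with three well-separated clusters.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
# Values near 1 indicate tight, well-separated clusters;
# values near 0 indicate overlapping clusters.
print(f"silhouette: {silhouette_score(X, labels):.2f}")
```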