Avian Conservation and Ecology
The following is the established format for referencing this article:
Turgeon, P. J., S. L. Van Wilgenburg, and K. L. Drake. 2017. Microphone variability and degradation: implications for monitoring programs employing autonomous recording units. Avian Conservation and Ecology 12(1):9.
https://doi.org/10.5751/ACE-00958-120109
Methodology, part of a special feature on Advancing bird population monitoring with acoustic recording technologies

Microphone variability and degradation: implications for monitoring programs employing autonomous recording units
 
Variabilité et dégradation des microphones: implications pour les programmes de surveillance utilisant des unités d'enregistrement autonomes

1Bird Studies Canada, 2Environment and Climate Change Canada, Canadian Wildlife Service

ABSTRACT

Autonomous recording units (ARUs) are emerging as an effective tool for avian population monitoring and research. Although ARU technology is being rapidly adopted, there is a need to establish whether variation in ARU components and their degradation with use might introduce detection biases that would affect long-term monitoring and research projects. We assessed whether microphone sensitivity affected the probability of detecting bird vocalizations by broadcasting a sequence of 12 calls toward an array of commercially available ARUs equipped with microphones of varying sensitivities under three levels (32 dBA, 42 dBA, and 50 dBA) of experimentally controlled noise conditions selected to reflect the range of noise levels commonly encountered during avian surveys. We used binomial regression to examine factors influencing the probability of detection for each species and used the resulting models to examine the impact of microphone sensitivity on the effective detection area (ha) for each species. Microphone sensitivity loss reduced detection probability for all species examined, but the magnitude of the effect varied between species and often interacted with distance. Microphone sensitivity loss reduced the effective detection area by an average of 25% for microphones just beyond manufacturer specifications (-5 dBV) and by an average of 66% for severely compromised microphones (-20 dBV). Microphone sensitivity loss appeared to be most problematic for low frequency calls, for which reduction in the effective detection area occurred most rapidly. Microphone degradation poses a source of variation in avian surveys made with ARUs that will require regular measurement of microphone sensitivity and criteria for microphone replacement to ensure scientifically reproducible results. We recommend that research and monitoring projects employing ARUs test their microphones regularly, replace microphones with declining sensitivity, and record sensitivity as a potential covariate in statistical analyses of acoustic data.

RÉSUMÉ

Les unités d'enregistrement autonome (UEA) apparaissent comme un outil efficace pour la recherche et le suivi des populations aviaires. Bien que la technologie UEA ait été rapidement adoptée, il est nécessaire d'établir si la variation des composants des UEA et leur dégradation avec l'utilisation pourraient introduire des biais de détection qui affecteraient les projets de recherche et le suivi à long terme. Nous avons évalué si la sensibilité du microphone a une incidence sur la probabilité de détecter les vocalisations d'oiseaux en diffusant une séquence de 12 chants d'oiseaux vers une série d'UEA disponibles dans le commerce, équipées de microphones de sensibilités variables, testés sous trois niveaux (32 dBA, 42 dBA et 50 dBA) de bruit induit expérimentalement. Ces conditions ont été sélectionnées pour reproduire la gamme de niveaux de bruit généralement rencontrés lors des études aviaires. Nous avons utilisé une régression binomiale pour examiner les facteurs influençant la probabilité de détection pour chaque espèce et les avons utilisés pour examiner l'impact de la sensibilité du microphone sur la zone de détection (ha) pour chaque espèce. La perte de sensibilité du microphone a réduit la probabilité de détection de toutes les espèces testées, mais l'ampleur de l'effet variait selon les espèces et interagissait souvent avec la distance. La perte de sensibilité du microphone a réduit la zone de détection d'en moyenne 25% pour les microphones juste en dessous des spécifications du fabricant (-5 dBV) et en moyenne de 66% pour les microphones fortement compromis (-20 dBV). La perte de sensibilité du microphone semble être plus problématique pour les chants d'oiseaux de faible fréquence où la réduction de la zone de détection s'est produite le plus rapidement. La dégradation du microphone induit une source de variation dans les recherches aviaires réalisées avec des UEA qui nécessitera une mesure régulière de la sensibilité du microphone et des critères de remplacement du microphone afin d'obtenir des résultats scientifiquement reproductibles. Nous recommandons que les projets de recherche et de suivi utilisant des UEA testent régulièrement leurs microphones, remplacent les microphones dont la sensibilité est décroissante, et tiennent compte de la sensibilité comme covariable potentielle dans les analyses statistiques des données acoustiques.
Key words: acoustic surveys; autonomous recording units; bird monitoring; detection probability; effective detection radius; environmental noise; microphone sensitivity

INTRODUCTION

Ornithologists use a number of methods to track changes in avian populations, with point count–based methods being one of the most commonly employed approaches (Ralph et al. 1995, Matsuoka et al. 2014). Point counts generally make use of trained observers who conduct surveys over a set duration during which all birds seen or heard within a specified distance of the observer are counted (Ralph et al. 1995, Matsuoka et al. 2014). Point count protocols are widely used in monitoring programs such as the North American Breeding Bird Survey (BBS) to infer population status, trends, and habitat relationships, thus providing an important tool for species conservation (Peterjohn and Sauer 1999, Sauer et al. 2003, Sauer et al. 2013). For many groups of birds, detections during point count surveys are primarily based upon acoustic cues (Dejong and Emlen 1985, Brewster and Simons 2009); therefore, acoustic recording technologies offer an alternative sampling method to supplement traditional point counts (Hobson et al. 2002, Klingbeil and Willig 2015). Although point counts by either human observers or using acoustic recordings are widely used, there is increasing recognition that these methods are susceptible to imperfect detection that can lead to biased estimation of species density or abundance (Thompson 2002, Royle et al. 2005). Imperfect detection results from variable availability of cues (e.g., songs) given by a species or individual, and variation in the ability of observers to perceive these signals once they are available (Alldredge et al. 2007a, b, Diefenbach et al. 2007). Although several studies have focused on factors influencing detection probability by human observers (e.g., Alldredge et al. 2007a, b, Simons et al. 2007, Stanislav et al. 2010), less is known about what factors drive variation in detection probability where acoustic recording technologies are used as the primary survey tool. The recent emergence and growing popularity of autonomous recording units (ARUs) in avian monitoring has created a need to investigate how variation in the electronic components of these recording technologies could introduce biases into this survey method.

Interest in the application of ARUs in avian research stems in part from their potential to address several of the known biases associated with traditional point count surveys. Programmability allows recording schedules that can directly reduce or possibly eliminate biases associated with variation in call availability with respect to time of day or season (Brandes 2008, Venier et al. 2012). Furthermore, the ease of obtaining repeat “visits” to the same location can also facilitate the application of statistical methods such as N-mixture modeling (Royle and Nichols 2003) to estimate and account for biases in detection probability. ARUs are well suited to monitor species that are logistically difficult to monitor (e.g., nocturnal species) using human observers (Goyette et al. 2011, Rognan et al. 2012) and can reduce impacts on wildlife (Carey 2009) as well as potential biases caused by the presence of an observer (Gutzwiller and Marcum 1997, Riffell and Riffell 2002). In addition, recordings have the added benefit of creating a permanent record of the acoustic environment that can be viewed on a spectrogram (Digby et al. 2013), listened to multiple times (Haselmayer and Quinn 2000), analyzed by multiple analysts to verify species identifications (Hobson et al. 2002), or even slowed down to enumerate certain species based on temporal separation between calls (Drake et al. 2016).

Despite the many advantages of ARUs, they also present several potential challenges to application in research and monitoring. Unlike ARUs, human observers can use visual cues, estimate distance to observations, directly associate observations with specific microhabitats, and generally detect birds over a larger area than most sound recorders (Hutto and Stutzman 2009, Sidie-Slettedahl et al. 2015). Although most of the above limitations can be addressed via appropriate survey design, ARUs have the additional challenge that the ability to detect a signal can vary with the choice of recording unit (Venier et al. 2012, Rempel et al. 2013) and microphone specifications (Fristrup and Mennitt 2012). Microphone sensitivity (essentially how efficiently a microphone converts a signal from sound pressure to electrical energy) is not only a concern when initially purchasing recording equipment for a project, but also through time as the equipment is used and potentially degraded.

Similar to long-held concerns regarding the impact of age-related hearing loss in humans on point count surveys (Ramsey and Scott 1981, Emlen and DeJong 1992, Farmer et al. 2014), degradation of microphone sensitivity through time could affect the detection of sounds by the recorder and therefore the conclusions of long-term studies. We are unaware of any studies discussing microphone sensitivity loss relating to bioacoustic monitoring, but other fields have identified sensitivity loss in microphones as an issue. For example, in cochlear implants, more than 25% of microphones examined experienced a gradual loss of sensitivity over time, and a one-decibel reduction in sensitivity reduced speech recognition by one word per minute (Razza and Burdo 2011), illustrating the impact that equipment degradation can have on performance. As more monitoring programs incorporate ARUs, it becomes increasingly important to understand how equipment wear might affect detection to ensure that data quality is maintained through time.

We conducted an experiment by broadcasting species calls toward an array of ARUs with microphones of varying quality to assess the effect of microphone sensitivity loss and environmental noise on distance-related probability of detecting sounds recorded by ARUs. Specifically, our objective was to quantify the differences in detection caused by microphone sensitivity loss over a realistic range of survey conditions. We recognize that the types (brand/model) of ARUs and microphones may vary between studies; however, our experiment provides a clear example of how degradation of equipment may affect data collection. We discuss the implications of microphone degradation for long-term projects and provide general recommendations for quality control.

METHODS

Microphone sensitivity

We measured the sensitivity of SMX-II microphones for the Song Meter (Models SM2 and SM2+, Wildlife Acoustics, Maynard, Massachusetts, USA) ARUs, which are commonly used in bird monitoring programs. Specifically, we installed the latest available firmware (version 3.3.7) and set the gain jumpers on the ARUs to 0 dB while leaving all other jumpers at the factory settings (i.e., 2.5 V bias enabled and 3 Hz high-pass filter cut-off). We then set the Song Meters to calibration mode and let each ARU stabilize for a minimum of two minutes prior to microphone calibration. After removing windscreens from the microphones, we attached each microphone to the left microphone jack of the ARU and fit a sound level calibrator (Model 407744, Extech Instruments, Nashua, New Hampshire, USA) over the end of the microphone. The sound level calibrator emits a 1 kHz pure tone at 94 dB, from which a sensitivity reading can be obtained from the ARU.
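For readers unfamiliar with the dBV unit, sensitivity expresses the voltage a microphone produces at a reference sound pressure; 94 dB SPL corresponds to 1 Pa, so the calibration tone yields sensitivity in dB relative to 1 V/Pa. A minimal sketch in R (the helper and example voltage are ours for illustration; the Song Meter reports this value directly in calibration mode):

```r
# Convert the RMS output voltage measured under the calibrator's
# 1 kHz, 94 dB SPL (= 1 Pa) tone into sensitivity in dBV (dB re 1 V/Pa).
sensitivity_dbv <- function(v_rms) {
  20 * log10(v_rms / 1)  # reference voltage of 1 V
}

sensitivity_dbv(0.0089)  # approx. -41 dBV, the median of our unused cohort
```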

To determine if microphone sensitivity varied in relation to microphone age, we measured the sensitivity of a population of 369 microphones and divided them into three groups based on the number of field seasons (each > 1 mo of deployment) the microphones had been used: microphones used during 2 to 4 field seasons (n = 75), microphones used during a single field season (n = 151), and microphones purchased in 2014 but never deployed (n = 143). We performed a Kruskal-Wallis test followed by post hoc Mann-Whitney U tests to determine whether the groups differed from one another.
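Both tests are available in base R; a minimal sketch, assuming a hypothetical data frame `mics` with one row per microphone (our actual object names differ):

```r
# `sensitivity` is the measured sensitivity (dBV); `seasons` is a factor
# with levels "new", "one", and "two_plus" for the three groups above.
kruskal.test(sensitivity ~ seasons, data = mics)

# Post hoc pairwise Mann-Whitney U (Wilcoxon rank-sum) tests;
# pairwise.wilcox.test() applies a Holm correction by default.
pairwise.wilcox.test(mics$sensitivity, mics$seasons)
```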

Field experiment

We conducted our field experiment in the rural municipality of Foam Lake, Saskatchewan, Canada, in an open field with flat terrain covered by graminoid vegetation ~0.5 m tall. We performed three replicate trials after the breeding season was largely over (26 July-8 August) on nights (22:30-01:45 hours) with little to no wind (average wind speed 0 to 2.2 km/h, measured with a Kestrel 4000 NV Wind Meter anemometer; Kestrel Meters, Birmingham, Michigan, USA). Average ambient noise ranged from 32.1 to 33.0 dBA during broadcast trials, measured using a data-logging sound pressure level meter (Model C-322, Reed Instruments, Wilmington, North Carolina, USA).

We deployed 12 triads of ARUs in a linear array spanning 220 m with triads spaced at 20-m intervals (Fig. 1). We used a combination of SM2 and SM2+ units in the array, after first ensuring that all ARUs operated within 0.5 dBV of one another using the sound level calibrator, and repositioned the gain jumpers to factory settings (i.e., 48 dB) while leaving all other jumpers in place. Within each triad, we spaced recorders 15 cm apart based on the position of the microphone jack and aligned the left channel toward the broadcast location because our objectives only required one microphone per recorder.

Using the results of our microphone sensitivity test (described above), we quantified the deviation of each individual microphone's sensitivity (n = 331) from the mean sensitivity of our unused cohort of microphones, rounded to the nearest integer (i.e., -41 dBV). Microphones were then assigned to one of three sensitivity classes: sensitivity loss (0 to 10th percentile; deviation of -20.50 to -2.30 dBV), no sensitivity loss (45th to 55th percentile; deviation of -0.05 to 0.30 dBV), and above mean sensitivity (90th to 100th percentile; deviation of +1.40 to +3.30 dBV). For every broadcast trial, we randomly drew 12 microphones from each of the three sensitivity classes and equipped the three recorders within each triad with one microphone from each sensitivity class, permitting us to compare the effect of microphone sensitivity loss on detection under identical conditions.
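The class assignment amounts to cutting the distribution of sensitivity deviations at fixed percentiles; a sketch under assumed object names:

```r
# `dev` holds each microphone's deviation (dBV) from the -41 dBV reference.
q <- quantile(dev, probs = c(0.10, 0.45, 0.55, 0.90))

class <- rep(NA_character_, length(dev))
class[dev <= q["10%"]]                   <- "sensitivity_loss"     # 0-10th
class[dev >= q["45%"] & dev <= q["55%"]] <- "no_sensitivity_loss"  # 45th-55th
class[dev >= q["90%"]]                   <- "above_mean"           # 90th-100th
```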

During trials, we broadcast white noise to simulate environmental noise experienced under wind conditions representative of those experienced during most bird surveys. We used an experimental approach because it allowed us to assess the effect of ambient noise while minimizing potential complications brought about by variable (uncontrolled) wind speed and direction. Prior to our experiment, we empirically determined the ambient noise experienced under wind conditions falling within standardized protocols for avian surveys such as the BBS and the North American Marsh Bird Monitoring Protocol (Conway 2011). Specifically, we simultaneously measured wind speeds (km/h) using a Kestrel 2000 anemometer (Kestrel Meters, Birmingham, Michigan, USA) and sound pressure level (dBA) using a model C-322 sound pressure level meter (Reed Instruments, Wilmington, North Carolina, USA). Based on graphical inspection of the relationship between wind speed and sound pressure level, we simulated Beaufort 3 winds (13-19 km/h) using 50 dBA of white noise and Beaufort 2 winds (6-12 km/h) using 42 dBA of white noise, and did not use any white noise to represent Beaufort 0 (< 2 km/h) and Beaufort 1 (2-5 km/h) winds. To simulate wind noise, we placed speakers 1 m in front of each triad of recorders and broadcast white noise to expose microphones to 42 dBA and 50 dBA of noise for each of the respective “wind trials” (Fig. 2).

We broadcast a sequence of bird calls toward the array of recorders using a FoxPro Firestorm digital game caller (FOXPRO Inc., Lewistown, Pennsylvania, USA). We repeated the broadcast from 20 and 30 m away from the first triad of recorders; combined with the positioning of our 12 triads (above), this produced recording samples at 24 10-m intervals for distances ranging from 20-250 m. The majority of our broadcast sequence consisted of wetland-associated species because ARUs are likely the most effective way to monitor this group of birds (Sidie-Slettedahl et al. 2015). We included the primary vocalizations of American Bittern Botaurus lentiginosus (AMBI), Le Conte’s Sparrow Ammodramus leconteii (LCSP), Nelson’s Sparrow Ammodramus nelsoni (NESP), Pied-billed Grebe Podilymbus podiceps (PBGR), Sedge Wren Cistothorus platensis (SEWR), and Yellow Rail Coturnicops noveboracensis (YERA). The sequence also contained Sora Porzana carolina (SORA) “per-weep” and “whinny” calls and Virginia Rail Rallus limicola (VIRA) “tick-it” and “grunt” calls. We also included songs of two forest-dwelling species (Black-and-white Warbler Mniotilta varia, BAWW; Ovenbird Seiurus aurocapilla, OVEN) to allow comparison of our findings to those of Alldredge et al. (2007a), who looked at similar effects on surveys by human observers.

Because call loudness influences the distance over which calls can be detected, we attempted to have the broadcast sequence reflect the volume of actual bird calls to improve the applicability of our findings. To achieve this, we broadcast species calls toward four experienced birders standing 50 m away (following Hobson et al. 2002), repeating the broadcasts in 5 dBA increments and having the birders identify which volumes they considered accurate for each species. To encompass the range of observer estimates and represent a potential range of variation in a species' call loudness, we broadcast each call type at two decibel levels: quieter species (BAWW, LCSP, NESP, SEWR, YERA) were broadcast with maxima of 95 and 105 dBA at the source, whereas louder species (OVEN, PBGR, SORA, VIRA) were broadcast with maxima of 105 and 120 dBA at the source (i.e., measured at the speaker rather than at 1 m). We broadcast AMBI calls with maxima of 105 and 111 dBA at the source because we could not create a recording of this species that reached 120 dBA without noticeable distortion. Using two data-logging decibel meters, we determined that calls broadcast at 95, 105, and 120 dBA at the source corresponded to approximately 70 (SD 2.9), 75 (SD 4.4), and 86 dBA (SD 4.4) at 1 m away, respectively. These sound levels are consistent with previous studies that have attempted to measure the volume of bird vocalizations (Brackenbury 1979, Drake et al. 2016).

All recordings were processed in a laboratory setting by a single analyst using Raven Pro software (version 1.5 beta) and high-quality, noise-cancelling headphones (Bose® QC15; Bose Corporation, Framingham, Massachusetts, USA). The analyst could use either visual evidence on spectrograms or sound to identify vocalizations. We ensured the analyst was blind to all treatment information (i.e., distance from sound source, microphone sensitivity, and noise level) by saving broadcast sequences as randomly numbered files.

Analysis

Data were entered as binomial outcomes in which a call type at a given volume was either detected (1) or not detected (0). For each species, we used generalized linear models with a binomial error family and a complementary log-log link to examine factors influencing the probability that calls were detected. We chose the complementary log-log link because it tends to provide better fit to skewed data (Hosmer et al. 2013), and comparisons against initial fits using a logit link suggested improved fit, especially near the intercept, where the probability of detection should equal 1 in our experiment. We treated distance and microphone sensitivity loss as independent continuous variables, whereas white noise treatment was included as a categorical variable. We did not treat broadcast volume as a variable, but rather pooled the two volumes together to account for uncertainty and variability in the actual loudness of each species. All models treated distance as a second-order polynomial to account for the expected decay in sound pressure level following the inverse square law (Marten and Marler 1977).
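A minimal sketch of one species' detection model in R, assuming a data frame `calls` with one row per broadcast call; all variable names are illustrative rather than the ones we used:

```r
# detected: 0/1 outcome; distance: m; mic_loss: sensitivity deviation (dBV);
# noise: factor with levels "ambient", "42dBA", and "50dBA".
fit <- glm(detected ~ poly(distance, 2) + mic_loss + noise,
           family = binomial(link = "cloglog"),
           data = calls)
summary(fit)
```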

We developed a set of a priori candidate models (Table 1) that considered main effects and relevant interactions. For most models, we computed 85% confidence intervals based on the normal approximation interval. However, certain models exhibited complete separation; when this occurred, we calculated 85% confidence intervals based on profile likelihoods (Heinze and Schemper 2002) using the MASS package in R (Venables and Ripley 2002).
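Both interval types are readily obtained from a fitted `glm` in R; `confint()` uses the MASS profile-likelihood method for `glm` objects:

```r
confint.default(fit, level = 0.85)  # Wald (normal approximation) intervals

library(MASS)
confint(fit, level = 0.85)          # profile-likelihood intervals
```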

We used Akaike's information criterion corrected for small sample size (AICc; Burnham and Anderson 2002) to determine the most parsimonious model for each species. We discarded models in which the 85% confidence interval of an added parameter contained zero (Arnold 2010) and defaulted to the next best model that did not contain “pretending parameters” (sensu Anderson 2008). Using the best approximating model, we determined the effective detection radius, and subsequently the effective detection area, of the ARU for each call type.
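A hedged sketch of both steps, continuing from the model fit above: AICc computed directly from its definition, and the effective detection radius obtained by integrating the fitted detection function. The paper does not give an explicit formula for the radius, so this follows the standard distance-sampling definition, in which a circle of the effective radius contains the same expected number of detections as the full detection function:

```r
# AICc = AIC + 2k(k + 1) / (n - k - 1), with k parameters and n observations.
aicc <- function(fit) {
  k <- length(coef(fit))
  n <- nobs(fit)
  AIC(fit) + 2 * k * (k + 1) / (n - k - 1)
}

# Effective detection radius rho solves pi * rho^2 = integral of
# 2 * pi * r * p(r) dr over the experimental range; p(r) comes from the
# fitted model, with the other (illustrative) covariates held fixed.
edr <- function(fit, loss = 0, noise = "ambient", upper = 250) {
  integrand <- function(r) {
    nd <- data.frame(distance = r, mic_loss = loss, noise = noise)
    2 * r * predict(fit, newdata = nd, type = "response")
  }
  sqrt(integrate(integrand, lower = 0, upper = upper)$value)
}

# Effective detection area in hectares:
# pi * edr(fit)^2 / 10000
```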

RESULTS

A cohort of 143 new (unused) microphones had a median microphone sensitivity of -40.9 dBV (range -43.5 to -37.7 dBV; SD 1.2; Fig. 3). We note that our measurements differ by 5 dBV from the manufacturer's sensitivity specification of -36 ± 4 dBV for the internal microphone element, but hereafter we use our measurements (i.e., -41 ± 4 dBV) when discussing microphone specifications. Field use had a statistically significant (Kruskal-Wallis test χ2 = 130.44, p < 0.0001) effect on microphone sensitivity (Fig. 4), and post hoc Mann-Whitney U tests showed that all groups differed significantly from one another. The median sensitivity of microphones deployed for one season was 1.0 dBV lower (U = 4224, p < 0.0001) than the median sensitivity of new microphones (-40.9 dBV; Fig. 4). Similarly, microphones deployed for two or more seasons were a median of 1.9 dBV less sensitive (U = 1126, p < 0.0001) than new microphones, corresponding to a median difference of 0.9 dBV (U = 3580, p < 0.0001) between microphones deployed for a single season and those deployed for two or more seasons (Fig. 4).

Our initial design should have yielded 648 6-minute sound files; however, mechanical failure and human error caused a subset of recordings to be lost or discarded. In total, we processed 576 6-minute sound files resulting in 13,824 calls across the 12 call types potentially available for detection.

Our model selection process resulted in different models being selected to explain variation in call detection among species; however, all of the selected models included the main effects of microphone sensitivity, distance, and noise treatment (Table 2). The most frequently selected model included the main effects plus an interaction between distance and treatment, and provided the best fit for five call types (BAWW, LCSP, SEWR, SORA whinny calls, VIRA grunt calls). For two call types (AMBI and VIRA tick-it calls), the best model included the main effects plus the distance by microphone interaction. A model including the main effects, the distance by treatment interaction, and the distance by microphone interaction was most appropriate for NESP and YERA calls. Finally, three call types (OVEN, PBGR, SORA per-weep calls) had top models that included only the main effects. LCSP was the only species for which we selected the second-ranking model as our best model, because the 85% confidence interval around a parameter estimate in the first-ranking model included zero. A summary of parameter estimates and model rankings is available in Appendices 1-3.

For all species, increasing distance, increasing noise, and loss of microphone sensitivity decreased detection probability (see Appendix 4). The effect of microphone sensitivity loss on detection decreased with increasing distance for the four call types (NESP, Fig. A4.2a; YERA, Fig. A4.3a; VIRA tick-it calls, Fig. A4.5b; AMBI, Fig. A4.6a) that had a distance by microphone interaction in their top model. For distance by treatment interaction terms whose confidence intervals did not include zero, noise decreased the probability of detection more strongly with increasing distance for all call types (LCSP, Fig. A4.1b; NESP, Fig. A4.2a; SEWR, Fig. A4.2b; YERA, Fig. A4.3a; SORA whinny calls, Fig. A4.4b; and VIRA grunt calls, Fig. A4.5a), except BAWW, which had a positive interaction between the 50 dBA noise treatment and the distance squared term (Fig. A4.1a).

We identified the effective detection radius of each call type under various noise treatments and with increasing levels of microphone sensitivity loss (Table 3) to determine the effective detection area. We did not calculate the effective detection area of loud species under ambient conditions because their effective detection radii were beyond our experimental range of 250 m. Additionally, NESP calls exceeded our experimental range under ambient conditions, so we excluded this species from the quiet group when summarizing our results.

The effective detection area (ha) of all species decreased with increasing microphone sensitivity loss (Fig. 5, Fig. 6, and Table 4). For quiet species, LCSP calls were least affected by microphone sensitivity loss, whereas SEWR calls were most affected under all noise treatments except ambient conditions, where the effective detection area of YERA initially dropped faster. For loud species, VIRA tick-it calls were least affected by microphone sensitivity loss, whereas AMBI calls were most severely affected under both noise treatments analyzed. A complete summary of the effective detection area (ha) and the percent loss of each call type with increasing levels of microphone sensitivity loss is available in Appendix 5.

Wind noise reduced the effective detection area across the range of microphone sensitivity; for brevity, we present the effect of noise based on microphones showing no sensitivity loss. The increase in noise from calm conditions to Beaufort 2 conditions caused the effective detection area of quiet species to decrease by an average of 76% (range 70–81%; SD 5%), with a further reduction of 69% (range 64–72%; SD 4%) between Beaufort 2 and Beaufort 3 conditions. Thus, the effective detection area was reduced by 93% (range 92–93%; SD 1%) between calm and Beaufort 3 conditions for quiet species. For loud species, noise caused a reduction in effective detection area approximately 25 percentage points smaller than that observed for quiet species; the increase in noise from Beaufort 2 to Beaufort 3 conditions caused the effective detection area of these species to decrease by 44% (range 19–58%; SD 13%). AMBI calls were least affected by the increase in noise, losing only 19% of their effective detection area.

DISCUSSION

We provide strong evidence that microphone sensitivity decreases with field use. Furthermore, we show that lower microphone sensitivity reduces the effective area sampled by a microphone and thus induces distance-related biases in detection probability for all species. The observed reduction in effective detection area parallels findings from cochlear implants (Razza and Burdo 2011) in that any sensitivity loss in a microphone results in reduced performance. Compared with microphones showing no sensitivity loss, microphones with sensitivity readings 5 dBV below manufacturer specifications (i.e., -46 dBV) reduced species' effective detection areas by an average of 25%, and the least sensitive microphones tested (i.e., -61 dBV) reduced them by an average of 66%. Large reductions in effective detection area should not be surprising given that microphones serve the same function as ears during point counts, and variation in hearing ability among observers has the potential to greatly reduce the sampling area (Ramsey and Scott 1981). Although humans tend to experience the greatest age-related hearing losses at high frequencies (Ramsey and Scott 1981, Emlen and DeJong 1992, Farmer et al. 2014), our results suggest that microphones may be more vulnerable to low frequency losses; AMBI and PBGR calls, the two calls in our experiment that are primarily below 1.5 kHz, experienced the fastest decline in effective detection area (58% and 39% decline, respectively, with a 5 dBV loss in sensitivity), and both experienced reductions of > 85% with a 20 dBV reduction in sensitivity.

An initial inspection of spectrograms from recordings made with worn microphones suggests possible mechanisms for the lower detection rates of low frequency calls. Worn microphones appear to exhibit greater static at low frequencies and may suffer sensitivity losses confined to low frequencies. Although more detailed experiments and quantification would be required to define the mechanisms and frequency-specific wear that may be occurring in microphones, our results suggest that there may be no single, uniform criterion for deciding when a change in microphone sensitivity warrants replacement, because multiple methods exist to process sound recordings and each varies in the degree to which damage affects it. For example, although the lower signal-to-noise ratio caused by static will lower detection by sound analysts (Fig. 7; compare sound files in Appendices 6 and 7), auditory inspection of recordings may yield higher detection rates than visual scans because masked visual signatures may still be audible (Fig. 7a; see sound file in Appendix 6). Furthermore, automated detection or recognition software will likely be more affected by worn microphones than human analysts because noise impedes the ability of automated recognizers to detect sounds (Bardeli et al. 2010).

For most species, background noise greatly reduced the analyst's ability to detect calls, suggesting that noise affects recording-based surveys in much the same way as point count surveys (e.g., Simons et al. 2007, Pacifici et al. 2008). Not surprisingly, the magnitude of the noise effect was greater for quiet species than for loud species. For example, for microphones at -41 dBV, the addition of 8 dBA of white noise (from Beaufort 2 to Beaufort 3 conditions) reduced the effective detection area by 19–58% for loud calls and by 64–72% for quiet species. Past research suggests that wind affects detection of calls in ARU surveys more severely than in-person surveys (Digby et al. 2013), and it is possible we did not capture the full effect of wind on recordings because our experimental design did not include wind gusts, which tend to cause recordings to “clip” (Zakis 2011) and further reduce detection. Additionally, white noise may not be ideal for representing the sound produced by wind in open habitats, where greater environmental noise can occur at lower frequencies (Zakis 2011), thereby masking only portions of a call and not affecting detection as severely (Koper et al. 2015). However, our use of white noise was likely appropriate to simulate the interference caused by leaf rustle in deciduous forests and likely in dense cattail vegetation, which tend to follow a similar sound profile (Turnbull, personal communication). Our approach also allowed us to reduce variation associated with changes in wind speed and direction and thus to draw more refined inference regarding the interaction between microphone sensitivity and realistic levels of environmental noise.

The predicted effective detection radii of all species appear large compared with other studies (e.g., Alldredge et al. 2007a), but this may be because we broadcast calls at night in conditions with little to no wind (conditions ideal for temperature inversions that can increase the distance at which sounds are heard), or because we performed the experiment in an area lacking trees and other tall vegetation that can attenuate sound (Schieck 1997, Pacifici et al. 2008). We were surprised to find that NESP calls traveled further than YERA calls, but this is likely because our broadcast volume for NESP was unrealistically loud. Despite large effective detection radii, groups of calls attenuated as we expected within the boundaries of our experiment when ARUs were equipped with microphones near manufacturer specifications. Specifically, quiet species had smaller effective detection radii than loud species, and high frequency sounds attenuated faster than low frequency sounds (Schieck 1997, Alldredge et al. 2007a). These volume- and frequency-dependent attenuations did not necessarily hold true when ARUs were equipped with degraded microphones. For loud species, certain low-frequency calls became less detectable than mid-frequency calls once microphone sensitivity dropped to -44 dBV, and for quiet species, certain mid-frequency calls became less detectable than high-frequency calls once microphone sensitivity dropped to -48 dBV. Additionally, certain loud species became less detectable than quiet species once microphone sensitivity dropped to -52 dBV.

Loss of microphone sensitivity with repeated field use could confound temporal comparisons in monitoring and research programs if quality control guidelines are not established. For example, failure to replace damaged microphones over multiyear studies could create the appearance of a decline in an otherwise stable population, whereas periodic replacement of microphones could theoretically induce cyclic patterns in detection tied to the frequency of microphone renewal in long-term studies. If a 10% difference in the number of birds detected can cause undesirable bias in trend estimates from index-based surveys (as stated in Rempel et al. 2013), then ensuring that monitoring programs use microphones within a defined range of sensitivity will be necessary to maintain data quality. Some units may be able to compensate for varying microphone sensitivity by adjusting a gain setting on the unit. However, further experiments will be needed to determine what effect this may have on detection because adjusting the gain based on the 1 kHz test tone may result in excessive gain at higher frequencies and may not change the signal-to-noise ratio of microphones that exhibit static at lower frequencies.

We recommend that all microphones be uniquely identified (labeled), that their sensitivity be measured immediately upon purchase, and that they be retested after each field season to track changes in microphone sensitivity. Furthermore, recording time-specific estimates of microphone sensitivity and tracking which microphones were deployed on a given ARU and recording location would allow the inclusion of microphone sensitivity as a covariate in statistical models to potentially adjust for differences in detection between sites or years. Performing a single point check using a commercially available 1 kHz/94 dB sound level calibrator should be sufficient to identify microphones with poor sensitivity because sensitivity loss appears to be greatest at low frequencies. However, we did not sweep the whole frequency spectrum of our microphones to detect frequency-specific damage; future work over a broader frequency range would be useful, although the associated difficulties and costs might outweigh the benefits. Furthermore, although static appeared to occur more frequently in microphones with lower sensitivity readings, an increase in the noise floor of a microphone can occur independently of sensitivity loss, meaning a single point check will not reliably detect this kind of damage. Thus, periodic inspection of spectrograms would also be useful to determine whether other problems exist. This may be especially important for programs heavily reliant upon recognition software.

The cost of a sound level calibrator and replacement microphones, as well as the work involved in testing microphones, presents additional costs that should be accounted for in cost-benefit scenarios (sensu Hutto and Stutzman 2009), but such scenarios should also consider the benefit of the repeat visits obtainable via ARU use. It would be useful to conduct longitudinal studies to understand rates of microphone decay, determine whether damage is associated with specific environmental conditions, and examine whether the effect of microphone sensitivity loss on detection is consistent across habitat types. Lastly, although our experiment used a single microphone per recorder, ARUs function with two microphones and record in stereo; thus, further research should investigate whether matching microphones with similar sensitivities for sound localization is preferable to pairing microphones of differing sensitivities to sample similar areas.

CONCLUSION

Microphone variation and degradation present a potential source of bias that monitoring and research programs will have to guard against to maintain data quality. Although our results are specific to the particular products we tested, the general patterns observed in this study should apply across ARUs. While we have highlighted distance-related heterogeneity in detection probability caused by variation in microphone sensitivity, it is important to note that the range of variation in microphone sensitivity we observed is still roughly half of that observed between human observers (Ramsey and Scott 1981). We have outlined approaches that can easily be used to document and maintain quality control on microphone sensitivity by testing and replacing microphones as necessary. Determining when to replace microphones will depend on a project's objectives, target species, and the methods used to process recordings.


AUTHOR CONTRIBUTIONS

Patrick J. Turgeon conducted the experiment, analyzed the data, wrote the paper, and assisted with the experimental design. Steve L. Van Wilgenburg designed the experiment, analyzed the data, and contributed to writing the paper. Kiel L. Drake designed the experiment and contributed to writing the paper.

ACKNOWLEDGMENTS

We thank the Natural Resources Canada Science and Technology Internship Program, the Saskatchewan Water Security Agency, and the Canadian Wildlife Service for funding that supported this research. This work was also supported by operating funds from the Terrestrial Unit (Prairie Region) of the Canadian Wildlife Service to SVW. We thank Brennan Obermayer, Christopher Chutter, and LeeAnn Latremouille for their help setting up the project and Laura Boettcher for her help collecting data throughout the experiment. We are grateful to K. Hobson, B. Klingbeil, I. Agranat, and two anonymous reviewers for constructive comments that improved the manuscript. The authors do not endorse any products mentioned in the manuscript nor imply the use of any particular brand.

LITERATURE CITED

Alldredge, M. W., T. R. Simons, and K. H. Pollock. 2007a. Factors affecting aural detections of songbirds. Ecological Applications 17(3):948-955. http://dx.doi.org/10.1890/06-0685

Alldredge, M. W., T. R. Simons, K. H. Pollock, and K. Pacifici. 2007b. A field evaluation of the time-of-detection method to estimate population size and density for aural avian point counts. Avian Conservation and Ecology 2(2):13. [online] URL: http://www.ace-eco.org/vol2/iss2/art13/

Anderson, D. R. 2008. Model based inference in the life sciences: a primer on evidence. Springer, New York, New York, USA. http://dx.doi.org/10.1007/978-0-387-74075-1

Arnold, T. W. 2010. Uninformative parameters and model selection using Akaike’s information criterion. Journal of Wildlife Management 74(6):1175-1178. http://dx.doi.org/10.1111/j.1937-2817.2010.tb01236.x

Bardeli, R., D. Wolff, F. Kurth, M. Koch, K. H. Tauchert, and K. H. Frommolt. 2010. Detecting bird sounds in a complex acoustic environment and application to bioacoustic monitoring. Pattern Recognition Letters 31(12):1524-1534. http://dx.doi.org/10.1016/j.patrec.2009.09.014

Brackenbury, J. H. 1979. Power capabilities of the avian sound-producing system. Journal of Experimental Biology 78(1):163-166.

Brandes, T. S. 2008. Automated sound recording and analysis techniques for bird surveys and conservation. Bird Conservation International 18(S1):S163-S173. http://dx.doi.org/10.1017/S0959270908000415

Brewster, J. P., and T. R. Simons. 2009. Testing the importance of auditory detections in avian point counts. Journal of Field Ornithology 80(2):178-182. http://dx.doi.org/10.1111/j.1557-9263.2009.00220.x

Burnham, K. P., and D. R. Anderson. 2002. Model selection and multimodel inference: a practical information-theoretic approach. Springer, New York, New York, USA. http://dx.doi.org/10.1007/b97636

Carey, M. J. 2009. The effects of investigator disturbance on procellariiform seabirds: a review. New Zealand Journal of Zoology 36(3):367-377. http://dx.doi.org/10.1080/03014220909510161

Conway, C. J. 2011. Standardized North American marsh bird monitoring protocol. Waterbirds 34(3):319-346. http://dx.doi.org/10.1675/063.034.0307

Dejong, M. J., and J. T. Emlen. 1985. The shape of the auditory detection function and its implications for songbird censusing. Journal of Field Ornithology 56(3):213-223. [online] URL: http://www.jstor.org/stable/4513018

Diefenbach, D. R., M. R. Marshall, J. A. Mattice, D. W. Brauning, and D. Johnson. 2007. Incorporating availability for detection in estimates of bird abundance. Auk 124(1):96-106. http://dx.doi.org/10.1642/0004-8038(2007)124[96:iafdie]2.0.co;2

Digby, A., M. Towsey, B. D. Bell, and P. D. Teal. 2013. A practical comparison of manual and autonomous methods for acoustic monitoring. Methods in Ecology and Evolution 4(7):675-683. http://dx.doi.org/10.1111/2041-210X.12060

Drake, K. L., M. Frey, D. Hogan, and R. Hedley. 2016. Using digital recordings and sonogram analysis to obtain counts of yellow rails. Wildlife Society Bulletin 40(2):346-354. http://dx.doi.org/10.1002/wsb.658

Emlen, J. T., and M. J. DeJong. 1992. Counting birds: the problem of variable hearing abilities. Journal of Field Ornithology 63(1):26-31. [online] URL: http://www.jstor.org/stable/4513657

Farmer R. G., M. L. Leonard, J. E. Mills Flemming, and S. C. Anderson. 2014. Observer aging and long-term avian survey data quality. Ecology and Evolution 4(12):2563-2576. http://dx.doi.org/10.1002/ece3.1101

Fristrup, K. M., and D. Mennitt. 2012. Bioacoustical monitoring in terrestrial environments. Acoustics Today 8(3):16-24. http://dx.doi.org/10.1121/1.4753913

Goyette, J. L., R. W. Howe, A. T. Wolf, and W. D. Robinson. 2011. Detecting tropical nocturnal birds using automated audio recordings. Journal of Field Ornithology 82(3):279-287. http://dx.doi.org/10.1111/j.1557-9263.2011.00331.x

Gutzwiller, K. J., and H. A. Marcum. 1997. Bird reactions to observer clothing color: implications for distance-sampling techniques. Journal of Wildlife Management 61(3):935-947. http://dx.doi.org/10.2307/3802203

Haselmayer, J., and J. S. Quinn. 2000. A comparison of point counts and sound recording as bird survey methods in Amazonian southeast Peru. Condor 102(4):887-893.

Heinze, G., and M. Schemper. 2002. A solution to the problem of separation in logistic regression. Statistics in Medicine 21(16):2409-2419. http://dx.doi.org/10.1002/sim.1047

Hobson, K. A., R. S. Rempel, H. Greenwood, S. Turnbull, and S. L. Van Wilgenburg. 2002. Acoustic surveys of birds using electronic recordings: new potential from an omnidirectional microphone system. Wildlife Society Bulletin 30(3):709-720. [online] URL: http://www.jstor.org/stable/3784223

Hosmer, D. W., Jr., S. Lemeshow, and R. X. Sturdivant. 2013. Applied logistic regression. Third edition. John Wiley & Sons, Hoboken, New Jersey, USA. http://dx.doi.org/10.1002/0471722146

Hutto, R. L., and R. J. Stutzman. 2009. Humans versus autonomous recording units: a comparison of point-count results. Journal of Field Ornithology 80(4):387-398. http://dx.doi.org/10.1111/j.1557-9263.2009.00245.x

Klingbeil, B. T., and M. R. Willig. 2015. Bird biodiversity assessments in temperate forest: the value of point count versus acoustic monitoring protocols. PeerJ 3:e973. http://doi.org/10.7717/peerj.973

Koper, N., L. Leston, T. M. Baker, C. Curry, and P. Rosa. 2015. Effects of ambient noise on detectability and localization of avian songs and tones by observers in grasslands. Ecology and Evolution 6(1):245-255. http://dx.doi.org/10.1002/ece3.1847

Marten, K., and P. Marler. 1977. Sound transmission and its significance for animal vocalization. Behavioral Ecology and Sociobiology 2(3):271-290.

Matsuoka, S. M., C. L. Mahon, C. M. Handel, P. Sólymos, E. M. Bayne, P. Fontaine, and C. J. Ralph. 2014. Reviving common standards in point-count surveys for broad inference across studies. Condor 116(4):599-608. http://dx.doi.org/10.1650/CONDOR-14-108.1

Pacifici, K., T. R. Simons, and K. H. Pollock. 2008. Effects of vegetation and background noise on the detection process in auditory avian point-count surveys. Auk 125(3):600-607. http://dx.doi.org/10.1525/auk.2008.07078

Peterjohn, B. G., and J. R. Sauer. 1999. Population status of North American grassland birds from the North American Breeding Bird Survey, 1966-1996. Pages 27-44 in P. D. Vickery and J. R. Herkert, editors. Ecology and conservation of grassland birds of the western hemisphere. Studies in avian biology Number 19. Allen Press, Lawrence, Kansas, USA.

Ralph, C. J., J. R. Sauer, and S. Droege, editors. 1995. Monitoring bird populations by point counts. General Technical Report PSW-GTR-149 edition. Pacific Southwest Research Station, Forest Service, United States Department of Agriculture, Albany, California, USA. http://dx.doi.org/10.2737/psw-gtr-149

Ramsey, F. L., and J. M. Scott. 1981. Tests of hearing ability. Pages 353-359 in C. J. Ralph and J. M. Scott, editors. Estimating numbers of terrestrial birds. Studies in avian biology Number 6. Allen Press, Lawrence, Kansas, USA.

Razza, S., and S. Burdo. 2011. An underestimated issue: unsuspected decrease of sound processor microphone sensitivity, technical, and clinical evaluation. Cochlear Implants International 12(2):114-123. http://dx.doi.org/10.1179/146701010X486499

Rempel, R. S., C. M. Francis, J. N. Robinson, and M. Campbell. 2013. Comparison of audio recording system performance for detecting and monitoring songbirds. Journal of Field Ornithology 84(1):86-97. http://dx.doi.org/10.1111/jofo.12008

Riffell, S. K., and B. D. Riffell. 2002. Can observer clothing color affect estimates of richness and abundance? An experiment with point counts. Journal of Field Ornithology 73(4):351-359. http://dx.doi.org/10.1648/0273-8570-73.4.351

Rognan, C. B., J. M. Szewczak, and M. L. Morrison. 2012. Autonomous recording of Great Gray Owls in the Sierra Nevada. Northwestern Naturalist 93(2):138-144. http://dx.doi.org/10.1898/nwn11-02.1

Royle, J. A., and J. D. Nichols. 2003. Estimating abundance from repeated presence-absence data or point counts. Ecology 84(3):777-790. http://dx.doi.org/10.1890/0012-9658(2003)084[0777:eafrpa]2.0.co;2

Royle, J. A., J. D. Nichols, and M. Kéry. 2005. Modelling occurrence and abundance of species when detection is imperfect. Oikos 110(2):353-359. http://dx.doi.org/10.1111/j.0030-1299.2005.13534.x

Sauer, J. R., J. E. Fallon, and R. Johnson. 2003. Use of North American Breeding Bird Survey data to estimate population change for bird conservation regions. Journal of Wildlife Management 67(2):372-389. http://dx.doi.org/10.2307/3802778

Sauer, J. R., W. A. Link, J. E. Fallon, K. L. Pardieck, and D. J. Ziolkowski, Jr. 2013. The North American Breeding Bird Survey 1966-2011: summary analysis and species accounts. North American Fauna 79:1-32. http://dx.doi.org/10.3996/nafa.79.0001

Schieck, J. 1997. Biased detection of bird vocalizations affects comparisons of bird abundance among forested habitats. Condor 99(1):179-190. http://dx.doi.org/10.2307/1370236

Sidie-Slettedahl, A. M., K. C. Jensen, R. R. Johnson, T. W. Arnold, J. E. Austin, and J. D. Stafford. 2015. Evaluation of autonomous recording units for detecting 3 species of secretive marsh birds. Wildlife Society Bulletin 39(3):626-634. http://dx.doi.org/10.1002/wsb.569

Simons, T. R., M. W. Alldredge, K. H. Pollock, J. M. Wettroth, and A. M. Dufty, Jr. 2007. Experimental analysis of the auditory detection process on avian point counts. Auk 124(3):986-999. http://dx.doi.org/10.1642/0004-8038(2007)124[986:eaotad]2.0.co;2

Stanislav, S. J., K. H. Pollock, T. R. Simons, and M. W. Alldredge. 2010. Separation of availability and perception processes for aural detection in avian point counts: a combined multiple-observer and time-of-detection approach. Avian Conservation and Ecology 5(1):3. http://dx.doi.org/10.5751/ace-00372-050103

Thompson, W. L. 2002. Towards reliable bird surveys: accounting for individuals present but not detected. Auk 119(1):18-25. https://doi.org/10.1642/0004-8038(2002)119[0018:TRBSAF]2.0.CO;2

Venables, W. N., and B. D. Ripley. 2002. Modern applied statistics with S. Fourth edition. Springer, New York, New York, USA. http://dx.doi.org/10.1007/978-0-387-21706-2

Venier, L. A., S. B. Holmes, G. W. Holborn, K. A. Mcilwrick, and G. Brown. 2012. Evaluation of an automated recording device for monitoring forest birds. Wildlife Society Bulletin 36(1):30-39. http://dx.doi.org/10.1002/wsb.88

Zakis, J. A. 2011. Wind noise at microphones within and across hearing aids at wind speeds below and above microphone saturation. Journal of the Acoustical Society of America 129(6):3897-3907. http://dx.doi.org/10.1121/1.3578453

Address of Correspondent:
Patrick J. Turgeon
115 Perimeter Road
Saskatoon, SK
S7N 0X4 Canada
patrick.j.turgeon@gmail.com