Dataset for polyphonic sound event detection tasks in urban soundscapes: The synthetic polyphonic ambient sound source (SPASS) dataset.
Viveros-Muñoz, Rhoddy; Huijse, Pablo; Vargas, Victor; Espejo, Diego; Poblete, Victor; Arenas, Jorge P; Vernier, Matthieu; Vergara, Diego; Suárez, Enrique.
Affiliation
  • Viveros-Muñoz R; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Huijse P; Instituto de Informática, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Vargas V; Millennium Institute of Astrophysics, Nuncio Monseñor Sotero Sanz 100, Providencia, Santiago, Chile.
  • Espejo D; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Poblete V; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Arenas JP; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Vernier M; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Vergara D; Instituto de Informática, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
  • Suárez E; Instituto de Acústica, Universidad Austral de Chile, General Lagos 2086, Valdivia, Chile.
Data Brief; 50: 109552, 2023 Oct.
Article in En | MEDLINE | ID: mdl-37743885
This paper presents the Synthetic Polyphonic Ambient Sound Source (SPASS) dataset, a publicly available synthetic polyphonic audio dataset. SPASS was designed to train deep neural networks effectively for polyphonic sound event detection (PSED) in urban soundscapes. SPASS contains synthetic recordings from five virtual environments: park, square, street, market, and waterfront. The data collection process consisted of the curation of different monophonic sound sources following a hierarchical class taxonomy, the configuration of the virtual environments with the RAVEN software library, the generation of all stimuli, and the processing of this data to create synthetic recordings of polyphonic sound events with their associated metadata. The dataset contains 5000 audio clips per environment, i.e., 25,000 stimuli of 10 s each, virtually recorded at a sampling rate of 44.1 kHz. This effort is part of the project "Integrated System for the Analysis of Environmental Sound Sources: FuSA System" in the city of Valdivia, Chile, which aims to develop a system for detecting and classifying environmental sound sources through deep Artificial Neural Network (ANN) models.
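The abstract specifies the clip format precisely: 10-second stimuli sampled at 44.1 kHz, 5000 per environment. A minimal sketch of how one might validate clips against that specification is shown below, using only the Python standard library's `wave` module. The file name `park_0001.wav`, the mono 16-bit PCM encoding, and the helper names are illustrative assumptions, not the dataset's actual layout or encoding.

```python
import wave

# Format described in the paper; encoding details below are assumptions.
SAMPLE_RATE = 44_100   # Hz, per the SPASS description
CLIP_SECONDS = 10      # seconds per stimulus

def write_dummy_clip(path):
    """Write a silent mono 16-bit WAV matching the stated SPASS clip format."""
    n_frames = SAMPLE_RATE * CLIP_SECONDS
    with wave.open(path, "wb") as w:
        w.setnchannels(1)              # mono (assumption)
        w.setsampwidth(2)              # 16-bit PCM (assumption)
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"\x00\x00" * n_frames)

def clip_duration_seconds(path):
    """Read a clip's duration from its WAV header."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

if __name__ == "__main__":
    write_dummy_clip("park_0001.wav")  # hypothetical file name
    print(clip_duration_seconds("park_0001.wav"))  # 10.0
```

A check like `clip_duration_seconds` could be applied across all 25,000 clips to confirm that each conforms to the 10 s / 44.1 kHz specification before training.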
Full text: 1 Collection: 01-internacional Database: MEDLINE Type of study: Diagnostic_studies Language: En Journal: Data Brief Year: 2023 Document type: Article Affiliation country: Chile Country of publication: Netherlands