VocADomA4H -- Acoustic recordings
This repository contains the acoustic signals of the VocADom@A4H dataset. Access to this part of the data is restricted but can be granted by signing a form.
Read me file
*The VocADom@A4H corpus*

This dataset contains the complementary files (acoustic files) of the VocADom@A4H corpus whose main website is:

Further information is also available on the VOCADOM project website:

This dataset contains a corpus of about 12 hours of data from 11 different recording sessions in the Amiqual4Home smart home. The experiment was conducted between May and June 2017 as part of the VocADom project supported by the Agence Nationale de la Recherche under grant ANR-16-CE33-0006.

If you use the corpus or need more details, please refer to the following paper:

  author    = "Portet, Fran{\c c}ois and Caffiau, Sybille and Ringeval, Fabien and Vacher, Michel and Bonnefond, Nicolas and Rossato, Solange and Lecouteux, Benjamin and Desot, Thierry",
  title     = "Context-Aware Voice-based Interaction in Smart Home - VocADom@A4H Corpus Collection and Empirical Assessment of its Usefulness",
  booktitle = "17th IEEE International Conference on Pervasive Intelligence and Computing (PICom 2019)",
  year      = "2019",
  location  = "Fukuoka, Japan",
  url       = ""

*Aims and protocol of the recording*
The experiment was performed to study voice commands in a multi-room smart home and in a multi-dweller setting. Usual home automation sensor data (movement detectors, door contact detectors, temperature, etc.) as well as microphone array signals were captured.
Eleven participants uttered voice commands while performing scripted activities of daily living, for about one hour of recording per participant. At the beginning of each session, no voice command grammar was imposed, the aim being to elicit spontaneous speech; the participants then had to follow an increasingly constrained grammar. Using a Wizard-of-Oz approach, out-of-sight experimenters enacted user commands, acting as a 'perfect' voice command system.
For each participant, the whole experiment session was recorded continuously without interruption. Within a session, 3 phases were identified:

Phase 1 - Graphical based instruction to elicit spontaneous voice commands (interaction with the home)
Phase 2 - Inhabitant scenario enacting a visit by a friend (interaction with the home and the visitor)
Phase 3 - Voice commands in noisy domestic environment (reading of voice commands in the home - no interaction)

This data set is intended to be useful for the following (non-exclusive) tasks:

- multi-human localization
- multi-human activity recognition
- smart home context modeling
- multi-channel voice activity detection
- multi-channel automatic speech recognition
- multi-channel spoken language understanding
- multi-channel speaker recognition
- multi-channel speech enhancement
- multi-channel blind source separation
- multi-channel automatic decision making

*What is in this dataset*

This dataset contains the recorded acoustic signals, whose usage is restricted to research only. Note that all recorded speech utterances are in French.

All the data is stored under the record/ directory, which contains 11 sub-directories named S[00-10]/. Each of these follows the structure below:

mic_array/ (microphone array recordings) available after having signed the End-User License Agreement (EULA)

mic_headset/ (headset microphone recordings) available after having signed the End-User License Agreement (EULA)
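Assuming the layout above (a `record/` root with `S00/` through `S10/`, each optionally holding `mic_array/` and `mic_headset/`), a minimal Python sketch to enumerate the sessions and the recording directories actually present could look like this; the `ROOT` path is a placeholder for wherever the archive was extracted:

```python
from pathlib import Path

# Hypothetical dataset root; adjust to where record/ was extracted.
ROOT = Path("record")

def list_sessions(root: Path) -> dict:
    """Map each session name (S00..S10) to its available recording dirs."""
    sessions = {}
    for i in range(11):
        sdir = root / f"S{i:02d}"
        sessions[sdir.name] = [
            sub for sub in ("mic_array", "mic_headset")
            if (sdir / sub).is_dir()
        ]
    return sessions

if __name__ == "__main__":
    for name, dirs in list_sessions(ROOT).items():
        print(name, dirs)
```

A session with an empty list indicates that neither directory is present, which can happen before the EULA-restricted parts have been obtained.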


The 16-channel recording of the experiment was performed by 4 arrays of 4 microphones each, arranged in a square of 10 cm side.
Each microphone was a t.bone LC 97 TWS.
Recording was performed using Kristal Audio Engine, Version 1.0.1 (Jun 1 2004), on Windows 8.1 64-bit.
Each channel is a mono 16-bit signed integer PCM signal sampled at 16 kHz.
Array I (resp. II, III, IV) is composed of channel_[1-4].wav (resp. [5-8], [9-12], [13-16]).
The floor plan giving the precise location of these arrays is available in the main repository of the dataset.
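The channel-to-array mapping and the stated audio format can be sketched in Python as follows; `array_of_channel` and `check_format` are illustrative helpers, not part of the dataset:

```python
import wave

# Array I = channel_[1-4].wav, II = [5-8], III = [9-12], IV = [13-16].
ARRAYS = {f"channel_{ch}.wav": (ch - 1) // 4 + 1 for ch in range(1, 17)}
ROMAN = {1: "I", 2: "II", 3: "III", 4: "IV"}

def array_of_channel(filename: str) -> str:
    """Return the Roman numeral of the array a channel file belongs to."""
    return ROMAN[ARRAYS[filename]]

def check_format(path: str) -> None:
    """Verify a file matches the stated format: mono, 16-bit PCM, 16 kHz."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1
        assert w.getsampwidth() == 2      # 2 bytes = 16-bit samples
        assert w.getframerate() == 16000  # 16 kHz sampling rate
```

Running `check_format` over each session's files is one way to catch the damaged S05/S06 array recordings mentioned below before they enter a processing pipeline.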

Known Issues:
- For S05 and S06, the microphone array recordings were damaged. These files could not be recovered and hence cannot be used with confidence.


Wireless microphone worn by the participant:
It was a SENNHEISER HSP4-ew-3 static cardioid with a 3.5 mm jack.
Recording was performed using Audacity 2.0.5 on Ubuntu 14.04 LTS 64-bit.
Mono 16-bit signed integer PCM acquired at 16 kHz.

Known Issues:
- The first 15 minutes of the worn microphone recording of S04 are missing. They have been padded with silence using the sox pad option.
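The padding above corresponds to sox's pad effect (e.g. `sox in.wav out.wav pad 900` prepends 900 s, i.e. 15 minutes, of silence). As a sketch, the same operation can be reproduced in pure Python for the mono 16-bit PCM files of this dataset; `pad_with_silence` is an illustrative helper, not a dataset tool:

```python
import wave

def pad_with_silence(src: str, dst: str, seconds: float) -> None:
    """Prepend `seconds` of silence to a mono 16-bit PCM wav file."""
    with wave.open(src, "rb") as r:
        params = r.getparams()
        frames = r.readframes(r.getnframes())
    n_silence = int(seconds * params.framerate)
    silence = b"\x00\x00" * n_silence  # one zero 16-bit sample per frame
    with wave.open(dst, "wb") as w:
        w.setparams(params)
        w.writeframes(silence + frames)
```

The silence bytes assume mono 16-bit samples, matching the format stated above; multi-channel or other sample widths would need a different zero pattern.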

*What is NOT in this dataset*

All other data are freely available at :

Among other data, the End User will find the speech transcripts and the home automation logs of the recording sessions as well as documents about the smart home, participants and material used during the experiment.
2020-01-09
The size of this dataset is more than 4000 MB.
Other metadata
  • Subjects:

    Computer Science
  • Keywords:

    Speech processing, smart home, voice command
  • Corresponding tasks:

    spoken language translation, classification, pattern extraction, prediction, rule extraction, person detection, activity recognition and tracking
  • Encoding data format:

    wav files

François Portet, Sybille Caffiau, Fabien Ringeval, Michel Vacher, Nicolas Bonnefond (2020). VocADomA4H -- Acoustic recordings [Data set]. Published 2020 via Perscido-Grenoble-Alpes.