.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/plot_find_audio_events.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        Click :ref:`here ` to download the full example code

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_plot_find_audio_events.py:

=============================
Use Audio to Align Video Data
=============================

In this example, we use ``pd-parser`` to find audio events with the same
algorithm for matching time-stamps and rejecting misaligned events, but
applied to the onset of an audio deflection instead of detecting photodiode
events based on their square-wave shape.

.. GENERATED FROM PYTHON SOURCE LINES 10-15

.. code-block:: default

    # Authors: Alex Rockhill
    #
    # License: BSD (3-clause)

.. GENERATED FROM PYTHON SOURCE LINES 16-28

Load in a video with audio

In this example, we'll use audio, and instead of aligning electrophysiology
data, we'll align a video. This example data is from a task where movements
are played on a monitor for the participant to mirror, and the video
recording is synchronized by playing a pre-recorded clap. This clap sound,
or a similar sound, is recommended for audio synchronization because its
onset is clear and allows events to be synchronized precisely. Note that
the commands that require ffmpeg are pre-computed and commented out,
because ffmpeg must be installed to run them and it is not a dependency of
``pd-parser``.

.. GENERATED FROM PYTHON SOURCE LINES 28-84
.. code-block:: default

    import os
    import os.path as op
    import numpy as np

    from scipy.io import wavfile
    from subprocess import call
    # from subprocess import run, PIPE, STDOUT
    # import datetime

    import mne
    from mne.utils import _TempDir

    import pd_parser

    # get the data
    out_dir = _TempDir()
    call(['curl -L https://raw.githubusercontent.com/alexrockhill/pd-parser/'
          'master/pd_parser/tests/data/test_video.mp4 '
          '-o ' + op.join(out_dir, 'test_video.mp4')],
         shell=True, env=os.environ)
    call(['curl -L https://raw.githubusercontent.com/alexrockhill/pd-parser/'
          'master/pd_parser/tests/data/test_video.wav '
          '-o ' + op.join(out_dir, 'test_video.wav')],
         shell=True, env=os.environ)
    call(['curl -L https://raw.githubusercontent.com/alexrockhill/pd-parser/'
          'master/pd_parser/tests/data/test_video_beh.tsv '
          '-o ' + op.join(out_dir, 'test_video_beh.tsv')],
         shell=True, env=os.environ)

    # navigate to the example video
    video_fname = op.join(out_dir, 'test_video.mp4')
    audio_fname = video_fname.replace('mp4', 'wav')  # pre-computed
    # extract audio (requires ffmpeg)
    # run(['ffmpeg', '-i', video_fname, audio_fname])

    fs, data = wavfile.read(audio_fname)
    data = data.mean(axis=1)  # stereo audio but only need one source
    info = mne.create_info(['audio'], fs, ['stim'])
    raw = mne.io.RawArray(data[np.newaxis], info)

    # find audio-visual time offset
    offset = 0  # pre-computed value for this video
    '''
    result = run(['ffprobe', '-show_entries', 'stream=codec_type,start_time',
                  '-v', '0', '-of', 'compact=p=1:nk=0', video_fname],
                 stdout=PIPE, stderr=STDOUT)
    output = result.stdout.decode('utf-8').split('\n')
    offset = float(output[0].strip('stream|codec_type=video|start_time')) - \
        float(output[1].strip('stream|codec_type=audio|start_time'))
    '''

    # save to disk as required by ``pd-parser``; raw needs a filename
    fname = op.join(out_dir, 'sub-1_task-mytask_raw.fif')
    raw.save(fname)

    # navigate to corresponding behavior
    behf = op.join(out_dir, 'test_video_beh.tsv')

.. rst-class:: sphx-glr-script-out

 Out:
.. code-block:: none

    Creating RawArray with float64 data, n_channels=1, n_times=16464896
        Range : 0 ... 16464895 =      0.000 ...   343.019 secs
    Ready.
    Writing /var/folders/s4/y1vlkn8d70jfw7s8s03m9p540000gn/T/tmp_mne_tempdir_ran50pv8/sub-1_task-mytask_raw.fif
    Closing /var/folders/s4/y1vlkn8d70jfw7s8s03m9p540000gn/T/tmp_mne_tempdir_ran50pv8/sub-1_task-mytask_raw.fif
    [done]

.. GENERATED FROM PYTHON SOURCE LINES 85-88

Run the parser

Now we'll call the main function to automatically parse the audio events.

.. GENERATED FROM PYTHON SOURCE LINES 88-92

.. code-block:: default

    annot, samples = pd_parser.parse_audio(fname, beh=behf,
                                           beh_key='tone_onset_time',
                                           audio_ch_names=['audio'],
                                           zscore=10)

.. image:: /auto_examples/images/sphx_glr_plot_find_audio_events_001.png
    :alt: Synchronization Events Compared to Behavior Events
    :class: sphx-glr-single-img

.. rst-class:: sphx-glr-script-out

 Out:

.. code-block:: none

    Reading in /var/folders/s4/y1vlkn8d70jfw7s8s03m9p540000gn/T/tmp_mne_tempdir_ran50pv8/sub-1_task-mytask_raw.fif
    Opening raw data file /var/folders/s4/y1vlkn8d70jfw7s8s03m9p540000gn/T/tmp_mne_tempdir_ran50pv8/sub-1_task-mytask_raw.fif...
    Isotrak not found
        Range : 0 ... 16464895 =      0.000 ...   343.019 secs
    Ready.
    Reading 0 ... 16464895  =      0.000 ...   343.019 secs...
    Finding points where the audio is above `zscore` threshold...
    17 audio candidate events found
    Checking best alignments
      0%|          | 0/15 [00:00
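The log above shows the two stages of the algorithm: first find points where
the z-scored audio trace exceeds the ``zscore`` threshold, then check the
candidate events against the behavior time-stamps. The thresholding stage can
be sketched in a few lines of NumPy. This is only an illustration of the idea,
not ``pd-parser``'s actual implementation; the sampling rate, clap times, and
0.5-second refractory window below are made-up values:

```python
import numpy as np

fs = 48000  # hypothetical audio sampling rate (Hz)
rng = np.random.default_rng(0)

# synthesize 10 s of quiet noise with two loud "clap"-like bursts
audio = rng.normal(0, 0.01, fs * 10)
for onset_sec in (2.0, 6.5):
    start = int(onset_sec * fs)
    audio[start:start + fs // 100] += rng.normal(0, 1.0, fs // 100)

# z-score the trace and find samples above the threshold
zscore = 10
z = (audio - audio.mean()) / audio.std()
above = np.where(np.abs(z) > zscore)[0]

# keep only the first supra-threshold sample of each burst
# (samples closer together than 0.5 s belong to the same event)
onsets = above[np.insert(np.diff(above) > fs // 2, 0, True)]
print(onsets / fs)  # onset times in seconds, close to 2.0 and 6.5
```

Because a clap's onset is nearly instantaneous, the first supra-threshold
sample lands within a fraction of a millisecond of the true onset, which is
why this kind of sound is recommended for synchronization.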
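Once event onsets are found, placing them on the video's timeline is simple
arithmetic: divide the sample indices by the audio sampling rate and correct
by the stream start-time offset computed with ``ffprobe`` above (defined
there as video start time minus audio start time, so it is subtracted). The
numbers below are hypothetical stand-ins, not this example's actual values:

```python
import numpy as np

fs = 48000          # hypothetical audio sampling rate (Hz)
offset = 0.023      # hypothetical video-minus-audio start-time offset (s)
samples = np.array([96000, 312000])  # hypothetical detected event samples

# convert detected audio samples to seconds on the video's timeline
event_times = samples / fs - offset
print(event_times)  # roughly [1.977, 6.477] s

# map those times to the nearest video frame at a given frame rate
frame_rate = 30.0   # hypothetical video frame rate (frames/s)
frames = np.round(event_times * frame_rate).astype(int)
```

For this example's video the pre-computed offset was 0, so the division by
the sampling rate alone suffices.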