With David Yaden and Shawtaroh Granzier-Nakajima
How can simultaneous processing of multiple levels (meaning, syntax, lexical, phonological) be demonstrated through Eye Movement Miscue Analysis (EMMA)? EMMA involves simultaneous eye tracking and audio recording of oral readings to understand readers' patterns of visual attention to text, with comprehension assessed through think-alouds, retellings, miscues, or other methods that reveal higher-level processes beyond sequential word recognition. The data collected include scan-path trajectories and metrics for saccades and fixations. While commercial software is available, much of it is not applicable to specific experimental situations; we therefore created tailored software to match authentic types of reading.
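Fixation and saccade metrics can be derived from raw gaze samples in several ways. As a minimal sketch (not the specific pipeline described here), a dispersion-threshold approach groups consecutive samples whose spatial spread stays below a threshold; the thresholds and the `detect_fixations` helper below are illustrative assumptions, not the authors' software.

```python
# Minimal dispersion-threshold (I-DT style) fixation detection sketch.
# Thresholds, units, and function names are illustrative assumptions,
# not the analysis software described in this abstract.

def dispersion(window):
    """Spatial spread of a list of (x, y) gaze points."""
    xs = [p[0] for p in window]
    ys = [p[1] for p in window]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_samples=6):
    """samples: list of (x, y) gaze points at a fixed sample rate.
    Returns (start_index, end_index) pairs for detected fixations;
    the gaps between fixations correspond to saccades."""
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i + min_samples
        if j > n:
            break
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the spread stays under threshold.
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations
```

With a 60 Hz sampler, `min_samples=6` corresponds to a minimum fixation duration of roughly 100 ms.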
The experimental setup consists of the Applied Science Laboratories (ASL) D6 remote eye-tracking system, a webcam, and a secondary keyboard. Oral readings and eye movements are synced by modulating parallel-port bit (XDAT) values, sampled in real time (60 Hz for the ASL D6), while simultaneously issuing an audible beep. In this manner, we can separate the data in post-processing at the beep/XDAT markers. These processes are automated through AutoHotkey and command-line (batch) scripts that call DLL functions and the ffmpeg and VLC libraries, together with custom Audacity plug-ins and ResultsPlus (ASL's analysis software). We show that a relatively low-cost, in-house solution can rival ill-suited commercial software for implementing EMMA with synced eye movements and audio.
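The post-processing split described above can be sketched as follows: scan the 60 Hz sample stream for the first sample whose XDAT field carries the trigger value (written at the moment of the beep) and cut the record there. The tuple layout and trigger value are illustrative assumptions; the actual ASL D6 export format differs in detail.

```python
# Sketch of post-hoc segmentation at an XDAT marker change.
# Field layout and the XDAT trigger value are illustrative assumptions,
# not the ASL D6 export format itself.

def split_at_xdat(samples, marker_value):
    """samples: list of (timestamp_s, xdat, x, y) tuples sampled at 60 Hz.
    Returns (before, after) split at the first sample whose XDAT field
    equals marker_value (the beep-synchronized trigger)."""
    for i, (_, xdat, _, _) in enumerate(samples):
        if xdat == marker_value:
            return samples[:i], samples[i:]
    return samples, []  # marker never seen: everything is "before"
```

The timestamp of the first "after" sample then gives the cut point for trimming the corresponding audio/video with ffmpeg, so eye-movement and oral-reading records share a common zero.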