
Open Source Toolbox for Analyzing Frequency Following Responses (FFRs)

Project leader:

Nike Gnanateja Gurindapalli, (gurindapalli@wisc.edu)

Project description:

The aim of this project is to develop an open-source toolbox for analyzing frequency following responses (FFRs) to speech using a standardized pipeline and AI-based approaches. FFRs are non-invasive, phase-locked neural responses to speech sounds, emerging from the auditory brainstem and cortex, that can be recorded with electroencephalography. FFRs are used extensively across different populations, such as individuals with autism, phonological disorders, dyslexia, and aging, to track the integrity of auditory processing in the brain. While FFRs provide a promising non-invasive snapshot of auditory processing, the methods used to analyze them are not consistent across studies. Further, researchers in this field still rely on older techniques to analyze FFRs and to draw interpretations about the computational mechanisms underlying speech processing. Advances in FFR data analysis over the past three to four years have been monumental, but access to these tools remains limited to a few labs and demands extensive computational knowledge from the user.

This project aims to develop an open-source toolbox for FFR analysis that is accessible to researchers with a wide range of expertise. The toolbox will be made available both as scripts and as a GUI, so that beginners and advanced users alike can use it. It will be a one-of-a-kind toolbox incorporating a range of feature-extraction methods and AI-based models to decode the stimulus features encoded in the FFRs. These models are designed to be neurophysiologically interpretable while remaining robust across a variety of applications. Software development is underway: a rough GUI is ready, and we have developed 10 different AI-based models for analyzing FFRs. Future work will fine-tune these models and integrate them into the GUI. The toolbox will be able to run both individual-specific and generic models to derive interpretable features from the FFRs.
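As a rough illustration of the kind of analysis the toolbox is meant to standardize, the sketch below computes one classic FFR measure: the spectral magnitude of an averaged response at the stimulus fundamental frequency (F0). This is a minimal sketch, not the toolbox's actual API; the sampling rate, F0, filter band, and synthetic test signal are all placeholder assumptions.

```python
# Minimal sketch (assumed values, not the toolbox's API): estimate the
# spectral magnitude of an averaged FFR at the stimulus F0.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 16000   # sampling rate in Hz (placeholder)
F0 = 100.0   # stimulus fundamental frequency in Hz (placeholder)

def f0_magnitude(ffr, fs=FS, f0=F0, bandwidth=5.0):
    """Return the mean spectral magnitude within +/- bandwidth Hz of f0."""
    # Band-pass the response to a typical FFR range (70-1500 Hz, assumed).
    b, a = butter(4, [70, 1500], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ffr)

    # Magnitude spectrum of the filtered response.
    spectrum = np.abs(np.fft.rfft(filtered)) / len(filtered)
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fs)

    # Average the magnitude in a narrow band around the F0.
    band = (freqs >= f0 - bandwidth) & (freqs <= f0 + bandwidth)
    return spectrum[band].mean()

# Example with synthetic data: a 100 Hz component buried in noise.
t = np.arange(0, 0.2, 1.0 / FS)
fake_ffr = 0.5 * np.sin(2 * np.pi * F0 * t) + 0.1 * np.random.randn(len(t))
print(f"F0 magnitude: {f0_magnitude(fake_ffr):.4f}")
```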

Interns will assist with developing Python code using deep learning models and with building user interfaces using PyQt.
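For a sense of the PyQt side of the work, the sketch below shows a bare PyQt5 window with a single button that opens a file dialog for selecting an FFR recording. This is a hypothetical starting-point skeleton, not the project's actual GUI code; the window title, widget names, and behavior are illustrative assumptions.

```python
# Hypothetical PyQt5 skeleton (not the project's actual GUI code).
import sys
from PyQt5.QtWidgets import (QApplication, QFileDialog, QLabel,
                             QMainWindow, QPushButton, QVBoxLayout, QWidget)

class FFRWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("FFR Toolbox (sketch)")

        self.label = QLabel("No file loaded")
        load_button = QPushButton("Load FFR recording...")
        load_button.clicked.connect(self.load_file)

        layout = QVBoxLayout()
        layout.addWidget(load_button)
        layout.addWidget(self.label)

        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

    def load_file(self):
        # Open a file dialog; analysis of the selected file would go here.
        path, _ = QFileDialog.getOpenFileName(self, "Select FFR file")
        if path:
            self.label.setText(path)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = FFRWindow()
    window.show()
    sys.exit(app.exec_())
```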

Intern needs:

Python, PyTorch, TensorFlow, signal processing, deep learning, PyQt, problem solving

Application Requirements:

  1. Review the available projects by visiting the various project pages.
  2. Interns should apply through the UW Student Jobs portal. Applicants who are not currently admitted or enrolled as a UW-Madison student, or who do not have a UW NetID, can still log in or create an account. Please note that you must apply individually to each project you want to be considered for.
  3. Application materials submitted through the UW Student Jobs portal should include a resume, cover letter, and three references.
  4. Interviews will be arranged for selected candidates on a rolling basis after applications close.