Yune S. Lee, PhD
Assistant Professor; Department of Speech, Language, and Hearing; School of Behavioral and Brain Sciences; Director of Speech, Language, and Music (SLAM) Laboratory
Dr. Yune Lee started his career at Samsung in Seoul, South Korea, where he worked on company commercials, broadcasting, and marketing projects. After that brief stint, he joined a music studio, where he produced a number of songs and jingles for television commercials, films, and event ceremonies. During this time, he began to realize the power of music over the human mind and body. This growing curiosity led him to pursue a PhD in cognitive neuroscience at Dartmouth College. He then delved into more clinically oriented neuroscience research during his postdoctoral training at Penn Neurology. He established the SLAM (Speech, Language, and Music) Laboratory within the Department of Speech and Hearing Science and the Chronic Brain Injury program at The Ohio State University. In 2020, he joined the School of Behavioral and Brain Sciences at the University of Texas at Dallas. His research has been funded by numerous sources, including the NIH, the NSF, the Parkinson’s Foundation, AWARE in Dallas, and Friends of BrainHealth. Dr. Lee was chosen as one of the first recipients of the NIH’s new Music & Health initiative. His research has received extensive media attention, including coverage in U.S. News & World Report.
Dr. Lee’s lab has been developing a non-invasive brain enhancement method that uses sound, specifically musical rhythm, to address developmental language disorders. This work builds on emerging evidence of connections between music and language. For example, children’s musical rhythm skills predict their language abilities, especially grammatical proficiency. Conversely, children with dyslexia tend to show poor rhythm skills. Importantly, a recent pilot experiment found that, in healthy young adults, rhythmic auditory stimulation yielded better performance on a subsequent spoken-sentence comprehension task than control auditory stimulation did. These findings point to the therapeutic potential of sound-based intervention for individuals with language disorders, and they warrant neuroimaging studies to better understand the neurobiological mechanisms underlying the therapeutic impact of rhythmic auditory stimulation.
Recent advances in brain research have revealed behavioral connections between musical rhythm and linguistic syntax.
Children with good rhythm skills understood grammatically complex sentences better than children with poor rhythm skills did.
This study suggests that even slight variations in hearing acuity impact the brain systems required to accurately understand speech.
fMRI scans of a patient with chronic aphasia reveal how neural activity can predict how well the patient can identify an image.
Because many individuals with aphasia can sing even when they cannot speak, melodic intonation therapy has been accepted as a viable aphasia treatment.
We are the auditory neuroscience lab in the Department of Speech, Language, and Hearing within the School of Behavioral and Brain Sciences. The lab conducts interdisciplinary projects investigating the relationships among speech, language, and music (SLAM) in the context of neurological disorders.
Hearing Loss and Cognition
Might early hearing impairment lead to cognitive challenges later in life? In episode 30, Dr. Lee talks with us about his research into how even minor hearing loss can increase the cognitive load required to distinguish spoken language. His open-access article, “Differences in hearing acuity among ‘normal-hearing’ young adults modulate the neural basis for speech comprehension,” written with multiple co-authors, was published in the May 2018 issue of eNeuro.