EBookClubs

Read Books & Download eBooks Full Online

Book Neural Correlates of Speech Perception

Download or read book Neural Correlates of Speech Perception written by Nicole Scherm and published by . This book was released on 2015 with total page 57 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Neural Correlates of Quality Perception for Complex Speech Signals

Download or read book Neural Correlates of Quality Perception for Complex Speech Signals written by Jan-Niklas Antons and published by Springer. This book was released on 2015-02-11 with total page 108 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book interconnects two essential disciplines to study the perception of speech: Neuroscience and Quality of Experience, which to date have rarely been used together for the purposes of research on speech quality perception. In five key experiments, the book demonstrates the application of standard clinical methods in neurophysiology on the one hand and of methods used in fields of research concerned with speech quality perception on the other. Using this combination, the book shows that speech stimuli with different lengths and different quality impairments are accompanied by physiological reactions related to quality variations, e.g., a positive peak in an event-related potential. Furthermore, it demonstrates that – in most cases – quality impairment intensity has an impact on the intensity of physiological reactions.

Book Neural Correlates of Auditory-Visual Speech Perception in Noise

Download or read book Neural Correlates of Auditory-Visual Speech Perception in Noise written by Jaimie Gilbert and published by ProQuest. This book was released on 2009 with total page 173 pages. Available in PDF, EPUB and Kindle. Book excerpt: Speech perception in noise may be facilitated by presenting the concurrent optic stimulus of observable speech gestures. Objective measures such as event-related potentials (ERPs) are crucial to understanding the processes underlying facilitation of auditory-visual speech perception. Previous research has demonstrated that in quiet acoustic conditions auditory-visual speech perception occurs faster (decreased latency) and with less neural activity (decreased amplitude) than auditory-only speech perception. These empirical observations support the construct of auditory-visual neural facilitation. Auditory-visual facilitation was quantified with response time and accuracy measures and the N1/P2 ERP waveform as a function of changes in audibility (manipulation of the acoustic environment by testing a range of signal-to-noise ratios) and content of the optic cue (manipulation of the types of cues available, e.g., speech, non-speech static, or non-speech dynamic cues). Experiment 1 (Response Time Measures) evaluated participant responses in a speeded-response task investigating effects of both audibility and type of optic cue. Results revealed better accuracy and faster response times with visible speech gestures than with any non-speech cue. Experiment 2 (Audibility) investigated the influence of audibility on auditory-visual facilitation in response time measures and the N1/P2 response. ERP measures showed effects of reduced audibility (slower latency, decreased amplitude) for both types of facial motion, i.e., speech and non-speech dynamic facial optic cues, compared to measures in quiet conditions. Experiment 3 (Optic Cues) evaluated the influence of the type of optic cue on auditory-visual facilitation with response time measures and the N1/P2 response. N1 latency was faster with both types of facial motion tested in this experiment, but N1 amplitude was decreased only with concurrent presentation of auditory and visual speech. The N1 results of these experiments reveal that audibility alone does not explain auditory-visual facilitation in noise. The decreased N1 amplitude associated with the visible speech gesture and concurrent auditory speech suggests that processing of the visible speech gesture either stimulates N1 generators or interacts with processing in N1 generators. A likely generator of the N1 response is the auditory cortex, which matures differently without auditory stimulation during a critical period. The impact of auditory-visual integration deprivation on neural development and on the ability to make use of optic cues must also be investigated. Further scientific understanding of maturational differences, or of differences in processing due to auditory-visual integration deprivation, is needed to promote utilization of auditory-visual facilitation of speech perception for individuals with auditory impairment. Research and (re)habilitation therapies for speech perception in noise must continue to emphasize the benefit of associating and integrating auditory and visual speech cues.
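The N1/P2 measures described in this excerpt are obtained by averaging stimulus-locked EEG epochs and reading off peak amplitude and latency within fixed post-stimulus windows. The fragment below is a minimal, illustrative sketch of that generic step (it is not the author's analysis code); it assumes epoched single-channel EEG in a NumPy array, and the window boundaries are hypothetical defaults.

```python
import numpy as np

def erp_peaks(epochs, fs, n1_window=(0.08, 0.15), p2_window=(0.15, 0.25)):
    """Average stimulus-locked epochs and measure N1/P2 peak amplitude and latency.

    epochs : array of shape (n_trials, n_samples), one EEG channel, time-locked to onset
    fs     : sampling rate in Hz
    The search windows (in seconds) are illustrative defaults, not fixed conventions.
    """
    erp = epochs.mean(axis=0)          # grand average across trials = the ERP
    t = np.arange(erp.size) / fs       # time axis in seconds from stimulus onset

    def peak(window, polarity):
        idx = np.flatnonzero((t >= window[0]) & (t <= window[1]))
        i = idx[np.argmin(erp[idx])] if polarity < 0 else idx[np.argmax(erp[idx])]
        return erp[i], t[i]            # (amplitude, latency)

    n1 = peak(n1_window, polarity=-1)  # N1: negative-going deflection
    p2 = peak(p2_window, polarity=+1)  # P2: positive-going deflection
    return {"N1": n1, "P2": p2}
```

Comparing the returned amplitudes and latencies across conditions (e.g., quiet versus noise, speech versus non-speech optic cues) is the kind of contrast the experiments above report.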

Book Neural Correlates of Auditory Cognition

Download or read book Neural Correlates of Auditory Cognition written by Yale E. Cohen and published by Springer Science & Business Media. This book was released on 2012-10-19 with total page 336 pages. Available in PDF, EPUB and Kindle. Book excerpt: Hearing and communication present a variety of challenges to the nervous system. To be heard and understood, a communication signal must be transformed from a time-varying acoustic waveform to a perceptual representation, and then to an even more abstract representation that integrates memory stores with semantic/referential information. Finally, this complex, abstract representation must be interpreted to form categorical decisions that guide behavior. Did I hear the stimulus? From where and whom did it come? What does it tell me? How can I use this information to plan an action? All of these issues and questions underlie auditory cognition. Since the early 1990s, there has been a rebirth of studies that test the neural correlates of auditory cognition, with a unique emphasis on the use of awake, behaving animals as models. Continuing today, how and where in the brain neural correlates of auditory cognition are formed is an intensive and active area of research. Importantly, our understanding of the role that the cortex plays in hearing has the potential to impact the next generation of cochlear and auditory brainstem implants and consequently help those with hearing impairments. Thus, it is timely to produce a volume that brings together this exciting literature on the neural correlates of auditory cognition. This volume complements and extends many recent SHAR volumes such as Sound Source Localization (2005), Auditory Perception of Sound Sources (2007), and Human Auditory Cortex (2010). For example, many of these volumes discuss similar issues, such as auditory-object identification and perception, with different emphases: in Auditory Perception of Sound Sources, authors discuss the underlying psychophysics and behavior, whereas in Human Auditory Cortex, fMRI data are presented. The unique contribution of the present volume is that the authors integrate both of these factors to highlight the neural correlates of cognition and behavior. Moreover, unlike these other volumes, the neurophysiological data emphasize the exquisite spatial and temporal resolution of single-neuron responses (as opposed to coarser fMRI or MEG data) in order to reveal the elegant representations and computations used by the nervous system.

Book Neural Correlates of Native-Language Speech Perception and Non-Native Speech Sound Learning

Download or read book Neural Correlates of Native-Language Speech Perception and Non-Native Speech Sound Learning written by Pamela Fuhrmeister and published by . This book was released on 2020 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Many studies of non-native speech sound learning report a great deal of individual variability; some learners master the sounds of a second language with ease, while others struggle to perceive and produce the sounds even after years of learning the language. Although phonological, auditory, and cognitive skills have been found to predict non-native speech sound learning ability as measured by laboratory tasks, the field lacks a comprehensive understanding of where these differences originate. Recent findings suggest that individual differences in sleep duration may predict learning after a period of offline consolidation, though these findings are mixed. Another issue is that the large amount of individual variability seen in studies of non-native learning makes it difficult to obtain precise estimates of effect sizes. The first aim of this dissertation was therefore to replicate and extend recent behavioral and neuroimaging findings in non-native speech sound learning with a larger sample size than is typical. The second goal was to test a new question: whether how consistently and categorically listeners perceive native-language sounds predicts success on non-native speech sound learning tasks. Finally, we sought to establish whether measures of brain structure can predict how categorically listeners perceive sounds in the native language and how consistently they respond to those sounds. We did not replicate recent findings showing behavioral improvement after sleep on non-native speech sound learning tasks, nor did we replicate the finding that sleep duration predicts overnight improvement. However, gyrification of the bilateral transverse temporal gyri and hippocampal volume predicted an individual's overnight improvement, suggesting a role for memory consolidation even though we did not see overnight improvement at the group level. We additionally did not find that individual differences in categorical perception predicted non-native speech sound learning, a challenge that some predominant theories of non-native speech sound learning will have to address in future research. Overall, learners with reduced surface area and volume in frontal regions showed more graded and more consistent perception of native-language speech sounds, supporting the notion that these regions underlie categorical perception.

Book Speaker Perception and Recognition: An Integrative Framework for Computational Speech Processing

Download or read book Speaker Perception and Recognition: An Integrative Framework for Computational Speech Processing written by Oxana Lapteva and published by kassel university press GmbH. This book was released on 2011 with total page 192 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Dynamic Neural Correlates of Perceiving and Imagining Speech

Download or read book Dynamic Neural Correlates of Perceiving and Imagining Speech written by Siyi Deng and published by . This book was released on 2011 with total page 97 pages. Available in PDF, EPUB and Kindle. Book excerpt: Recent studies have suggested that speech perception is processed by distinct brain networks, and correlated neural signals generated by these networks can be identified from brain electromagnetic recordings such as EEG. We studied the dynamic properties of 40-Hz gamma-band steady-state auditory responses evoked by speech and non-speech stimuli, and found that these two categories of signals are processed differentially over the left and right auditory cortex. We applied an envelope-based Hilbert-Huang decomposition to the data and extracted signals from a functional network that is correlated only with speech stimuli. We take this as evidence that speech signals are preferentially processed at a longer (syllabic) time scale than non-speech (phonetic) signals. In a separate study, the envelope-correlated neural signals were used to successfully classify perceived and imagined syllabic rhythms from EEG. We then developed a new method, a geometrically accurate spline surface Laplacian (SSL), to improve the spatial resolution of EEG and to estimate the radial current source density on the inner skull surface. Numerical simulations and real EEG data have shown that, by incorporating surface curvature information, the new method is more accurate than the traditional spherical SSL (manuscript submitted). In a third experiment we recorded EEG while subjects were asked to listen to or imagine a small set of selected sentences. We applied a fast wavelet transform technique to counter the lag and compression uncertainty of imagined speech, and applied the new SSL to the extracted signal to estimate the anatomical origin of the neural correlates.
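The 40-Hz steady-state and envelope analyses mentioned in this excerpt reduce, at their core, to narrow-band filtering followed by Hilbert-transform envelope extraction. The sketch below illustrates that generic step only (it is not the author's Hilbert-Huang pipeline); the single-channel input and the 35-45 Hz pass band are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def gamma_envelope(eeg, fs, band=(35.0, 45.0)):
    """Band-pass one EEG channel around 40 Hz and return its Hilbert envelope.

    eeg  : 1-D array, a single EEG channel
    fs   : sampling rate in Hz
    band : pass band in Hz bracketing the 40-Hz steady-state response (assumed values)
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    narrowband = filtfilt(b, a, eeg)        # zero-phase band-pass filter
    return np.abs(hilbert(narrowband))      # instantaneous amplitude envelope

# e.g., compare mean envelope amplitude for speech vs. non-speech trials:
# speech_amp = gamma_envelope(speech_trial, fs=500.0).mean()
```

Comparing such envelopes across stimulus categories or hemispheres is the kind of contrast the excerpt describes for speech versus non-speech processing.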

Book Neural Correlates of Unimodal and Multimodal Speech Perception in Cochlear Implant Users and Normal-Hearing Listeners

Download or read book Neural Correlates of Unimodal and Multimodal Speech Perception in Cochlear Implant Users and Normal-Hearing Listeners written by Hannah E. Shatzer and published by . This book was released on 2020. Available in PDF, EPUB and Kindle. Book excerpt: Spoken word recognition often involves the integration of both auditory and visual speech cues. The addition of visual cues is particularly useful for individuals with hearing loss and cochlear implants (CIs), as the auditory signal they perceive is degraded compared to individuals with normal hearing (NH). CI users generally benefit more from visual cues than NH perceivers; however, the underlying neural mechanisms affording them this benefit are not well understood. The current study sought to identify the neural mechanisms active during auditory-only and audiovisual speech processing in CI users and to determine how they differ from those of NH perceivers. Postlingually deaf, experienced CI users and age-matched NH adults completed syllable and word recognition tasks during EEG recording, and the neural data were analyzed for differences in event-related potentials and neural oscillations. The results showed that during phonemic processing in the syllable task, CI users have stronger AV integration, shifting processing away from primary auditory cortex and weighting the visual signal more strongly. During whole-word processing in the word task, early acoustic processing is preserved and similar to that of NH perceivers, but again displays robust AV integration. Lipreading ability also predicted suppression of early auditory processing across both CI and NH participants, suggesting that while some neural reorganization may have occurred in CI recipients to improve multisensory integrative processing, visual speech ability leads to reduced sensory processing in primary auditory cortex regardless of hearing status. The findings further support behavioral evidence for strong AV integration in CI users and the critical role of vision in improving speech perception.

Book Speech Perception

    Book Details:
  • Author : Lori L. Holt
  • Publisher : Springer Nature
  • Release : 2022-02-22
  • ISBN : 3030815420
  • Pages : 260

Download or read book Speech Perception written by Lori L. Holt and published by Springer Nature. This book was released on 2022-02-22 with total page 260 pages. Available in PDF, EPUB and Kindle. Book excerpt: This volume reviews contemporary developments in the auditory cognitive neuroscience of speech perception, including both behavioral and neural contributions, and serves as an important update on the current state of research in speech perception. Chapters and authors:

  • The Auditory Cognitive Neuroscience of Speech Perception in Context (Lori L. Holt and Jonathan E. Peelle)
  • Subcortical Processing of Speech Sounds (Bharath Chandrasekaran, Rachel Tessmer, and G. Nike Gnanateja)
  • Cortical Representation of Speech Sounds: Insights from Intracranial Electrophysiology (Yulia Oganian, Neal P. Fox, and Edward F. Chang)
  • A Parsimonious Look at Neural Oscillations in Speech Perception (Sarah Tune and Jonas Obleser)
  • Extracting Language Content From Speech Sounds: The Information Theoretic Approach (Laura Gwilliams and Matthew H. Davis)
  • Speech Perception under Adverse Listening Conditions (Stephen C. Van Hedger and Ingrid S. Johnsrude)
  • Adaptive Plasticity in Perceiving Speech Sounds (Shruti Ullas, Milene Bonte, Elia Formisano, and Jean Vroomen)
  • Development of Speech Perception (Judit Gervain)
  • Interactions Between Audition and Cognition in Hearing Loss and Aging (Chad S. Rogers and Jonathan E. Peelle)

Dr. Lori Holt is a Professor of Psychology at Carnegie Mellon University with affiliations at the Center for the Neural Basis of Cognition and the Center for Neuroscience at the University of Pittsburgh. Dr. Jonathan E. Peelle is a Professor in the Department of Otolaryngology at Washington University in St. Louis. Dr. Allison Coffin is an Associate Professor in the Department of Integrative Physiology and Neuroscience at Washington State University Vancouver. Dr. Arthur N. Popper is Professor Emeritus and research professor in the Department of Biology at the University of Maryland, College Park. Dr. Richard R. Fay is Distinguished Research Professor of Psychology at Loyola University Chicago.

Book Neural Correlates of Auditory Word Processing in Infants and Adults

Download or read book Neural Correlates of Auditory Word Processing in Infants and Adults written by Katherine Elizabeth Travis and published by . This book was released on 2011 with total page 122 pages. Available in PDF, EPUB and Kindle. Book excerpt: For the majority of people, words are first learned, and are communicated in high proportions, in the auditory modality. However, the neural dynamics underlying speech perception are poorly understood. Even more limited is knowledge of the neurophysiological processes and neuroanatomical structures that afford developing language abilities in infants. This dissertation investigates these issues in a series of related studies aimed at characterizing the spatial and temporal neural dynamics of auditory word processing in both developing 12-19-month-old infants and adults. The first study, performed in adults, reveals new evidence for a neural response that is selective for auditory words relative to acoustically matched control sounds. This response appears to index a stage in speech processing wherein an incoming word sound is translated from an acoustic signal into a linguistically relevant code. This information can then be passed along the speech processing stream so that the appropriate meaning of a word can eventually be selected from representations stored within associative left fronto-temporal networks. The second study, performed in both adults and 12-18-month-old infants, demonstrates that the neural mechanism responsible for encoding lexico-semantic word information has similar spatial and temporal characteristics in infants and adults. Prior work had not been able to establish whether infants and adults share similar neural substrates for language, and these findings suggest that the neurophysiological processes important for word understanding reside within similar neural networks throughout the lifespan. Finally, to gain a better understanding of the regional neuroanatomical changes that take place in the developing cortex of 12-19-month-old infants, the third study examines age-related changes in tissue signal properties assessed with magnetic resonance imaging. This is a period in development that is pivotal for emerging linguistic, cognitive, and sensorimotor behaviors; however, the maturational changes that occur in brain structures are poorly understood at these ages. This study reveals large changes in structural measures within precisely those areas demonstrated to generate lexico-semantic activity in the second study. Together, these studies help advance current understanding of the neurophysiological processing stages and neural structures involved in auditory word processing in both the developing and the mature brain. The findings invite a host of new studies that will continue to further knowledge of how speech processing is instantiated within the brain. Finally, with the use of multimodal imaging techniques such as those described here, there is increasing potential for new research aimed at understanding the neurobiological underpinnings of language and other cognitive behaviors.

Book The Cognitive and Neural Organisation of Speech Processing

Download or read book The Cognitive and Neural Organisation of Speech Processing written by Patti Adank and published by Frontiers Media SA. This book was released on 2016-03-18 with total page 148 pages. Available in PDF, EPUB and Kindle. Book excerpt: Speech production and perception are two of the most complex actions humans perform. The processing of speech is studied across various fields and with a wide variety of research approaches. These fields include, but are not limited to, (socio)linguistics, phonetics, cognitive psychology, neurophysiology, and cognitive neuroscience. Research approaches range from behavioural studies to neuroimaging techniques such as magnetoencephalography and electroencephalography (MEG/EEG) and functional magnetic resonance imaging (fMRI), as well as neurophysiological approaches such as the recording of motor evoked potentials (MEPs) and transcranial magnetic stimulation (TMS). Each of these approaches provides valuable information about specific aspects of speech processing. Behavioural testing can inform us about the nature of the cognitive processes involved in speech processing; neuroimaging methods show where in the brain these processes take place (fMRI and MEG) and/or elucidate the time course of activation of these brain areas (EEG and MEG); and neurophysiological methods (MEPs and TMS) can assess the critical involvement of brain regions in a cognitive process. Yet it is currently unclear how speech researchers can combine methods such that a convergent approach adds to theory and model formulation above and beyond the contributions of the individual component methods. We expect that such combinations of approaches will significantly advance theoretical development in the field. The present Research Topic comprises a collection of manuscripts discussing the cognitive and neural organisation of speech processing, including speech production and perception at the level of individual speech sounds, syllables, words, and sentences. Our goal was to use findings from a variety of disciplines, perspectives, and approaches to gain a more complete picture of the organisation of speech processing. The contributions are grouped around the following main themes: 1) spoken language comprehension under difficult listening conditions; 2) sub-lexical processing; 3) sensorimotor processing of speech; and 4) speech production. The contributions used a variety of research approaches, including behavioural experiments, fMRI, EEG, MEG, and TMS. Twelve of the 14 contributions examined speech perception, and the remaining two examined speech production. This Research Topic thus displays a wide variety of topics and research methods, and this comprehensive approach allows an integrative understanding of the currently available evidence as well as the identification of concrete avenues for future research.

Book Dissecting the function of networks underpinning language repetition

Download or read book Dissecting the function of networks underpinning language repetition written by Matthew A. Lambon Ralph and published by Frontiers E-books. This book was released on 2014-12-17 with total page 135 pages. Available in PDF, EPUB and Kindle. Book excerpt: In the 19th century, ground-breaking observations on aphasia by Broca and Wernicke suggested that language function depends on the activity of the cerebral cortex. At the same time, Wernicke and Lichtheim also elaborated the first large-scale network model of language, which incorporated long-range and short-range (transcortical) white matter pathways in language processing. The arcuate fasciculus (dorsal stream) was traditionally viewed as the major language pathway for repetition, but scientists also envisioned that white matter tracts travelling through the insular cortex (ventral stream) and transcortical connections may take part in language processing. Modern cognitive neuroscience has provided tools, including neuroimaging, which allow the in vivo examination of the short- and long-distance white matter pathways binding cortical areas essential for verbal repetition. However, this work on the neural correlates of language repetition has revealed contradictory findings, with some researchers defending the role of the dorsal and ventral streams, whereas others argue that only cortical hubs (the Sylvian parieto-temporal cortex [Spt]) are crucially relevant. An integrative approach holds that the interaction between these structures is essential for verbal repetition. For instance, different sectors of the cerebral cortex (e.g., Spt, inferior frontal gyrus/anterior insula) act as hubs dedicated to short-term storage of verbal information or articulatory planning, and these areas in turn interact through forward and backward white matter projections. Importantly, white matter pathways should not be considered mere cable-like connections, as changes in their microstructural properties correlate with focal cortical activity during language processing tasks. Despite considerable progress, many outstanding questions await answers. The articles in this Research Topic tackle many different and critical new questions, including: (1) how white matter pathways instantiate dialogues between different cortical language areas; (2) what the specific roles of different white matter pathways are in language functions in normal and pathological conditions; (3) what the language consequences of discrete damage to branches of the dorsal and ventral streams are; (4) what the consequences (e.g., release from inhibition) of damage to the left white matter pathways are for contralateral ones and vice versa; (5) how these pathways are reorganised after brain injury; (6) whether the involvement or sparing of white matter pathways can be used in outcome prediction and treatment response; and (7) whether the microstructure of white matter pathways can be remodelled with intensive rehabilitation training or biological approaches. This Research Topic includes original studies as well as opinion and review articles that describe new data and offer provocative and insightful interpretations of the recent literature on the role of white matter pathways in verbal repetition in normal and pathological conditions. A brief highlight summary of each is provided below.

Book Language Talent and Brain Activity

Download or read book Language Talent and Brain Activity written by Grzegorz Dogil and published by Walter de Gruyter. This book was released on 2009 with total page 375 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book describes and assesses pronunciation talent in its various dimensions, such as production and perception or the segmental and suprasegmental levels of speech. Special focus is put on the psychological and neural correlates of phonetic perf

Book The Effects of Age of Acquisition and Proficiency on the Neural Correlates of Categorical Perception of Non-Native Speech

Download or read book The Effects of Age of Acquisition and Proficiency on the Neural Correlates of Categorical Perception of Non-Native Speech written by Victoria E. Wagner and published by . This book was released on 2017. Available in PDF, EPUB and Kindle. Book excerpt: Age of Acquisition (AoA) and second language (L2) proficiency have been shown to influence bilingual neural recruitment and neuroanatomy, but the previous literature shows inconsistencies. The current studies used multiple regression analyses to understand the influence of AoA and L2 proficiency on neural processing for categorical perception in Spanish-English bilinguals during a speech identification task. Functional data showed that AoA and L2 proficiency were associated with differential recruitment of areas previously linked to speech processing. Increased L2 proficiency was associated with increased activity in the bilateral inferior frontal gyrus and middle frontal gyrus (MFG) as well as the right superior temporal gyrus, middle temporal gyrus, and angular gyrus. AoA was associated with a separate region of the MFG. The data suggest that increased proficiency is associated with higher-level strategies, such as attentional mechanisms and semantic processing, to aid in a perceptual task. Study 2 focused on the influence of AoA and L2 proficiency on neuroanatomy. Structure-based morphometry and multiple regression analyses were used to determine the relationship between AoA, L2 proficiency, and L2 use and brain structure in speech processing areas. Significant relationships were found in the left middle temporal gyrus, left supramarginal gyrus, and right angular gyrus. The results suggest that L2 proficiency and AoA are associated with structural measures in speech processing areas, particularly those associated with higher-level processing. Together, the studies provide a better understanding of the variability associated with AoA and L2 proficiency in bilinguals and of how this variability impacts speech processing through the recruitment of different neural regions that may underlie different strategies for completing a speech perception task.
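The multiple regression analyses described in this excerpt relate per-participant predictors (AoA and L2 proficiency) to a brain measure (activation or a structural metric) and interpret the fitted coefficients. The sketch below is illustrative only and is not the author's analysis; the variable names and the single-region setup are hypothetical.

```python
import numpy as np

def fit_aoa_proficiency_model(aoa, proficiency, brain_measure):
    """Ordinary least squares: brain_measure ~ intercept + AoA + L2 proficiency.

    aoa, proficiency, brain_measure : 1-D arrays with one value per participant
    Returns the fitted coefficients (intercept, beta_AoA, beta_proficiency).
    """
    # Design matrix with an intercept column and the two predictors
    X = np.column_stack([np.ones_like(aoa, dtype=float), aoa, proficiency])
    betas, _, _, _ = np.linalg.lstsq(X, brain_measure, rcond=None)
    return betas
```

In practice such a model would be fitted separately for each region of interest (or voxel), with the sign and reliability of each coefficient indicating how AoA and proficiency relate to that region's measure.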

Book Neural Control of Speech

Download or read book Neural Control of Speech written by Frank H. Guenther and published by MIT Press. This book was released on 2016-07-15 with total page 426 pages. Available in PDF, EPUB and Kindle. Book excerpt: A comprehensive and unified account of the neural computations underlying speech production, offering a theoretical framework bridging the behavioral and the neurological literatures. In this book, Frank Guenther offers a comprehensive, unified account of the neural computations underlying speech production, with an emphasis on speech motor control rather than linguistic content. Guenther focuses on the brain mechanisms responsible for commanding the musculature of the vocal tract to produce articulations that result in an acoustic signal conveying a desired string of syllables. Guenther provides neuroanatomical and neurophysiological descriptions of the primary brain structures involved in speech production, looking particularly at the cerebral cortex and its interactions with the cerebellum and basal ganglia, using basic concepts of control theory (accompanied by nontechnical explanations) to explore the computations performed by these brain regions. Guenther offers a detailed theoretical framework to account for a broad range of both behavioral and neurological data on the production of speech. He discusses such topics as the goals of the neural controller of speech; neural mechanisms involved in producing both short and long utterances; and disorders of the speech system, including apraxia of speech and stuttering. Offering a bridge between the neurological and behavioral literatures on speech production, the book will be a valuable resource for researchers in both fields.

Book Behavioral and Neural Correlates of Human Time Perception, Imagination and Production

Download or read book Behavioral and Neural Correlates of Human Time Perception, Imagination and Production written by Tzu-Han Zoe Cheng and published by . This book was released on 2023 with total page 0 pages. Available in PDF, EPUB and Kindle. Book excerpt: Rhythmicity is a key component that allows humans to attend to, predict, and respond to the environment. In particular, temporal processing is fundamental to the perception and production of complex sounds such as speech and music. The neural entrainment hypothesis proposes that internal oscillators synchronize with external stimuli, providing a unified mechanism for supramodal temporal processing. Extensive research demonstrates the entrainment effect on human time perception for non-speech musical sounds; fewer studies have shown entrainment effects for duration perception in spoken language. To date, it remains unclear how humans encode temporal properties and generate rhythm according to them, and whether and how entrainment mechanisms represent timing information in the brain. My aim is to address these important open questions. In Chapter 1 of this thesis, I review the existing literature and the gaps therein. Chapter 2 compared whether entrainment or interval models more accurately predict human time perception: entrainment models predicted duration discrimination more accurately, but the effect diminished after 2-4 cycles, with interval models predicting more accurately thereafter. Chapter 3 tested entrainment effects in a more ecologically valid context, speech sounds, and found that entrainment can transfer from tones to speech sounds, suggesting a domain-general entrainment effect constrained by acoustical similarity. Chapter 4 examined neural evidence of entrainment in hierarchically organized drumming rhythms. The study found that both auditory and motor regions represent the rhythms imagined by the subjects. A motor-to-auditory information flow was found in all listening conditions without overt movements, suggesting that the motor system actively maintains hierarchical information and exerts a top-down influence on auditory processing and metrical imagery of rhythms. Chapter 5 further investigated rhythm production using self-paced tapping and synchronization, finding that synchronization relies on auditory-motor interaction in the beta band, observed only in individuals who tap relatively stably in the self-paced tapping task without external cues. In summary, this thesis contributes to the theoretical understanding of how humans perceive, imagine, and produce temporal events, particularly in a rhythmic context, at the behavioral and neural levels. My hope is that this work can improve real-life applications and inform work with clinical populations who have timing-related deficits.