Neurological basics of language processing

Language processing and speech generation are complex processes that humans are still trying to understand. A common approach in neuroscience has been to name macro-anatomical brain structures after their presumed functional purpose, defined in terms of classical behavioral paradigms. For example, an area in the left occipito-temporal sulcus is called ‘the visual word form area’ because it is known to be involved in reading. Yet this very same brain area has also been labeled the ‘lateral occipital tactile-visual area’. Why is that? The labeling approach invites the false conclusion that a brain area serves only the single function described by its label. Moreover, because brain areas typically serve several functions, this approach allows many names for the same structure, depending on the research focus of the respective scientist.

This problem also arises in descriptions of language processing. In 2016, scientists argued that the ‘classic model’ (the ‘Wernicke-Lichtheim-Geschwind model’), which originates in the late 19th century, is outdated. The model assumes that language abilities are mainly localized in two areas of the dominant (i.e., usually the left) hemisphere, situated around the Sylvian fissure. The more anterior Broca’s area is associated with speech production and is connected via the arcuate fasciculus to Wernicke’s area, which is associated with speech comprehension. Although the model still proves useful for describing aphasias, it has received increasing criticism. Besides its limited spatial precision (e.g., in the drawings of the relevant brain regions in the original paper, and in studies that could not find a consistent localization), it implies a high degree of functional modularity that has not been empirically supported. Furthermore, the model focuses only on broader cortical structures, neglecting subcortical structures and their connections.

Since processing information takes time, it seems almost trivial to assume that the brain also needs a certain amount of time to process continuous acoustic stimuli in order to comprehend continuous speech. Results of a 2018 study indicate that the processes involved in comprehending continuous speech can be differentiated both anatomically and temporally. The acoustic information engages several subcortical structures and their interconnections; it passes, for example, through the auditory cortex, followed by responses over the central sulcus and the inferior frontal gyrus. Moreover, semantic composition was related to specific bilateral temporal and frontal brain activity, indicating that speech processing, as a complex multidimensional ability, is a neurological task involving multiple brain regions.