The Brain and Reading

Dr. Catherine Stoodley

Dept of Physiology, Anatomy and Genetics

University of Oxford, UK

cjs@physiol.ox.ac.uk

Abstract

What happens in the brain during reading and while learning to read? How can our understanding of the brain inform how we teach children to read, and help children who are struggling to learn to read? Reading is one of the most complex tasks that humans learn; it involves the coordinated interplay of the visual, auditory, motor and language systems of the brain. While language develops innately with the appropriate environmental influences, reading is a cultural construct which must be explicitly taught. That said, implicit learning - starting with exposure to the printed word well before reading instruction commences - also significantly influences reading development. Genetics contributes about 40% to one's ability to read; this suggests a considerable influence of the environment in the development of literacy skills. The regions of the brain involved in reading and reading development are predictably diverse given the complexity of the task: through the combination of lesion, behavioural, neurophysiological and imaging studies the identities of the brain regions comprising the 'reading network' are beginning to emerge. These include occipital visual regions, temporo-parietal 'word form' areas, temporal and frontal language regions, and subcortical areas such as the cerebellum. We are also starting to understand how developmental changes in the brain contribute to literacy acquisition - and how learning to read might in turn change and shape the brain. With time, instruction, and exposure to text, the decoding aspect of reading becomes automatic and fluent in most children. At this stage, they are able to progress to the challenge of understanding and interpreting the information given to them via text.

A brain primer: An overview of the brain, neurophysiology and techniques

Maryanne Wolf places behaviour at the top of a 5-layered pyramid built on genetic foundations. The top layer, layer 1, represents behaviour; layer 2 contains the multiple cognitive processes underlying complex behaviour; layer 3 represents the larger structures of the brain which work to perform different cognitive processes; layer 4 includes the neurons that comprise the structures; and layer 5 represents the genetic code which determines and guides the development of our brains, ensuring that the neurons find their correct places, and that long-term changes in synaptic strength can take place through coding for special proteins and enzymes in the brain. Thus our behaviour is based on a foundation of genetic influence, which in turn dictates the structure of the brain. The structure of the brain and its circuitry influences our cognitive functions and ultimately complex behaviours such as reading.  An understanding of the basic structure and function of the brain will enhance our ability to develop teaching methods that are targeted at the way the brain works.

This 'brain primer' is for anyone not too familiar with neurophysiology and brain structure. It will hopefully make the 'reading and the brain' aspect of this text more comprehensible by explaining some of the basic brain terms that will be used!

Microstructure of the brain: Neurons and synapses

Our behaviour is based on the building blocks of the brain, the nerve cells or neurons, and how they are connected with one another. Neurons and their connections build up neural circuits, and groups of neurons working together are often called 'processing centres' within the brain. The ability of these circuits to change over time depending on experience is what underlies learning and plasticity.

There are approximately 100 billion neurons in the brain, and their structural characteristics are crucial to how the brain functions (Fig. 1). Neurons come in different shapes and sizes, but they all have certain basic parts: the cell body or soma, where the nucleus is; the dendrites, which are like the branches of a tree and receive input to the neuron from other nerve cells; and the axon, which is like the trunk of the tree and is the route that information leaving the neuron takes. The size and shape of neurons can dictate their function. Neurons with large dendritic trees are able to integrate large amounts of information from varying sources, and weigh the relative inputs to yield a measured output. Neurons with very large axons can conduct action potentials - the electrical signals that enable neurons to communicate - more rapidly. Such rapidly-conducting axons are involved in the detection and processing of information that needs to be dealt with quickly, such as the movement of an object. Another structural aspect of these cells is the level of myelin, a fatty covering that insulates axons and speeds up how fast they can transmit action potentials. Neurons that are covered with myelin comprise the 'white matter' of the brain (as the myelin makes the cells appear white). During early childhood, the degree and patterning of myelination of neurons is crucial to the way the brain processes information. Auditory fibres are myelinated before birth, meaning that babies can recognize their mother's voice and hear language even prenatally. Other neurons are not myelinated for years after birth.

Figure 1. A neuron.

Neurons communicate with each other via synapses, where the electrical signal is converted into a chemical signal that can traverse the gap between neurons, and be converted back into an electrical signal in the next neuron. The chemicals that act at these synapses are the neurotransmitters, such as serotonin, adrenaline, and dopamine. The dendrites of neurons contain specific receptors for different neurotransmitters, and it is these receptors - and their influence on the neuron - that determine the effects of any given neurotransmitter.

Gross structure of the brain

The brain is a complex structure that has evolved significantly in humans as compared to our closest primate relatives. The largest expansion has been in the cortex of the brain: the bumps and grooves that most people think of when they picture a 'brain'. The deep folding of the human brain enables a large number of nerve cells to fit into a relatively small space - our skulls.

The brain consists of two halves or hemispheres, each divided into four lobes of cortex (frontal, temporal, parietal and occipital lobes; see Fig. 2). The two sides of the brain are connected via the corpus callosum. The cortex sits over sub-cortical structures, such as the basal ganglia, thalamus, and fibre tracts running from the cortex to the spinal cord for the control of movement. The cortex in humans is a series of deep folds and bulges - sulci and gyri, respectively.

There are three main sulcal landmarks of the human brain: the longitudinal fissure, the large groove which divides the brain into two hemispheres; the central sulcus, which separates the frontal and parietal lobes; and the lateral sulcus (also known as the Sylvian fissure), which separates the temporal and parietal lobes. Around the lateral sulcus on the left-hand side of the brain are the regions crucial for language function. The four lobes of the brain - frontal, temporal, occipital and parietal - contain specific areas that are specialized for certain types of processing. Korbinian Brodmann, at the turn of the last century, mapped out the different areas of the cortex into 47 'Brodmann areas' (BAs) that are still used today, particularly in structural and functional neuroimaging studies to define regions of activity and interest.

Figure 2. The cortex of the left hemisphere. Brodmann's areas 37, 39, 40, 44 and 45 are circumscribed by dashed lines. The primary sensorimotor areas (vision, audition, somatosensory (touch) and motor areas) are labelled.

Moving up from the spinal cord, the brain 'sits' on the brainstem structures, which, aside from containing large tracts of fibres connecting the brain with the spinal cord, support the most basic functions relevant to life: the brainstem contains the respiratory centres and helps to control heart rate. Underneath the cortex lie midbrain structures such as the four bumps that are the inferior and superior colliculi. The superior colliculi are involved in eye movements, which are important in reading. In addition, on both sides of the brain there are mirrored subcortical structures such as the hypothalamus (involved in feeding, fighting, fleeing and mating), the egg-shaped thalamus (which acts as a relay station for sensory and motor inputs projecting to the cortex), and the limbic regions of the brain, which are involved in memory and emotions. Sitting off the back of the brainstem is the 'mini brain', the cerebellum, which like the 'big brain' contains two cortex-covered hemispheres and is crucial to the control of movement and procedural learning; it may also play a role in more complex cognitive functions, given its rich interconnections with the frontal lobe. The cerebellum is very densely packed, containing more than half the neurons of the entire brain. Cerebellar dysfunction has been implicated in several developmental disorders, including dyslexia, autism and attention deficit disorder.

It is important to remember that while some functions are localized to some extent within different areas of the brain, it is the connections, the circuits, that really determine what any area of the brain is doing. What are its inputs? Where does it send axons to? This is what ultimately determines the function of any set of neurons in the brain.  That said, the four lobes of the brain can be given general functions. The frontal lobe is involved in 'executive' functions - planning movement, expressing emotion and appropriate behaviour, and complex problem-solving. The more posterior region of the frontal lobe is the primary motor cortex, which consists of a 'map' of the body, with the areas requiring the finest motor control (such as the articulatory apparatus and fingers) over-represented. The parietal lobe contains the primary somatosensory processing area and is also involved in integrating visual and somatosensory input to yield representations of our bodies in space. The temporal lobe is involved in complex object recognition and contains the primary auditory processing regions. The occipital lobe is involved in the early stages of visual processing.

The main language areas of the brain are, in most people, localized to the region around the Sylvian fissure in the left hemisphere (see Fig. 2). Many of these areas were identified by studying patients with damage to them: doctors would examine the brains of patients with well-characterized language deficits post mortem, looking to see where the site of damage was. Two such regions are Broca's area, which is involved in the motor output of speech and conveniently located alongside the primary motor areas devoted to the tongue, lips and larynx - structures important for speech - and Wernicke's area, which lies posterior to the primary auditory processing area and is involved in language comprehension.

Brain function and plasticity

Changes in the relationships between neurons and neural systems enable us to learn and develop, modifying our actions and behaviour based on experience. The relative levels of activity between two neurons can strengthen or depress the connection between them. In certain areas of the brain, these changes are long-term, and form the basis of learning and memory. During development in particular, the brain works on a 'use it or lose it' principle: synapses that are consistently active are strengthened while others are pruned away. Early work on the developing visual system of kittens showed that the area of the brain devoted to processing information from an eye deprived of visual input shrinks in size. The timing of our learning is also important, as there seem to be certain 'critical periods' during development when the brain is much more receptive to learning and development (e.g., the critical period for language development). Hence all of our brains are shaped by our experience and environments - not only particular educational and home environments, but also factors such as nutrition.

Techniques used to study the brain

There are several main techniques used to study reading and the brain, ranging from the genetic to the behavioural level. Caution must be taken in interpreting data and in claiming what we 'know' about the brain: our knowledge is theoretical, based on experiments that are influenced by many different factors, including subject selection, experimental design, and the analyses applied. The brain is hugely interactive, and there is considerable individual variation among brains that are functioning normally at the behavioural level. It is also important to remember that behaviour is the top level of processing that we can measure, and thus many different underlying factors can lead to a similar behavioural result.

At the genetic level, studies are being conducted that investigate families with a high incidence of developmental reading disorders. A behavioural 'phenotype' (such as speed of reading) is measured in large numbers of families, and linkage analysis is performed to try and match up the behavioural performance with particular genetic patterns.  Other genetic-based studies investigate the similarity (and difference) in performance between monozygotic and dizygotic twins.  This work gives us an idea of what aspects of reading performance are heritable, and what aspects are due to environmental influences.

Another way to study the brain is to measure the activity of neurons. This can be done using several methods, including event-related or evoked potentials, in which electrodes on the scalp measure neural activity, and various neuroimaging methods. Imaging technologies tend to have either good spatial resolution (i.e., give you a very good idea of where things happen) or good temporal resolution (when things happen, at the millisecond (msec) level). The basic goal is to find out which areas of the brain are active, and to what degree (underactive, hyperactive), during specific tasks. However, knowing which areas are more active does not tell us how the network achieves the function, just that one area is more active during a process than another. Positron emission tomography (PET) measures blood flow in the brain by radio-labelling oxygen and using this to visualize where blood flow is increasing, an indication of an increase in neural activity. Functional magnetic resonance imaging (fMRI) is also thought to index the flow of oxygenated blood to different areas of the brain. Both PET and fMRI have very good spatial resolution. Magnetoencephalography (MEG) is a non-invasive technique that measures, at the scalp, the magnetic fields generated by the brain's electrical activity, with excellent temporal (real-time) resolution, giving us information about brain function on the msec level.

We can also learn about the brain by studying the structure of the brain in both normal and clinical populations. For instance, much of what we know about the brain has come from lesion studies, investigating the behaviour of patients following damage to the brain and post mortem studies of the brains of patients with particular conditions. Magnetic resonance imaging (MRI) gives very good structural detail about the shape and size of the different structures of the brain and the cortex. Diffusion tensor imaging (DTI) measures the diffusion of water around white matter (myelinated fibres), enabling the visualization of the myelination and direction of fibre tracts; this can be used to study developmental changes in myelination over time, and can provide a quantitative measure of the integrity of the white matter pathways between subjects.

Finally, we can measure the behaviour of human subjects completing tasks designed to tap certain brain processes. This represents the majority of reading research, and includes studies of the sensitivity of the sensory systems (auditory and visual psychophysics), performance of motor tasks, and asking participants to perform different variations on reading measures.  Many studies compare children and adults who are reading well with those who have diagnosed reading difficulties, in order to determine what is going wrong in reading disorders.  This also helps us to understand the reading network in normally-reading brains.

How the brain learns to read

Children are expected to learn the basics of reading in about 2000 days - which is amazing given our species took tens of thousands of years to develop the appropriate cognitive tools to read! The beginning of this process starts with 'priming' the brain for reading, the first time a baby is held and a story is read to them. Being read to by family and caregivers is extremely important; children implicitly start to understand that the scribbles on the page have words that go with them. 

In children from birth to age 5 there are many physiological changes that take place in the brain, accompanied by cognitive and linguistic changes. There are also important social and affective developments that will influence a child's reading 'readiness'. Post-natally, the structural and functional changes taking place in the brain include changes in cortical and subcortical structures, and the myelination of sensory and motor regions.  In the visual system, the child develops visual representations of his or her world, builds up an understanding of perceptual invariance and improves their ability to detect visual features.  Similarly in the auditory system, the child builds up their representations of sounds from their environment, their ability to discriminate changes in frequency and amplitude of sounds (dependent on good temporal processing ability), and sound identification.  Children will begin to recognize the sounds of their own language.  For speech, the articulatory motor system must be honed and developed, allowing their speech to become intelligible to others.  The inter-relationships between the different sensory modalities (e.g. touch, sight, sound) and the association areas of the brain develop during this time period, accompanied by increased myelination of neurons. Cognitively, children develop increased attention span and memory capacity. They begin to develop analogous reasoning and inferential abilities, along with understanding that a visual symbol can be labelled by name (symbolic representation). Children begin to understand that books have words, and words have letters, and that these words correspond to speech units.  Finally, the linguistic capabilities of children increase from imitating and babbling to being able to name objects and people, followed by an explosion in vocabulary. 
Phonological development includes the awareness of different sounds in words, the ability to discriminate these sounds from each other, segment words into sounds, and play with the sounds in words (phoneme manipulation).  Children start to increase the length of their utterances, and show increased understanding of grammatical constructions and devices. 

Cognitive neuroscientists often categorize learning into two different categories: explicit/declarative and implicit/procedural learning [see McClelland 1998]. Explicit learning involves the active acquisition of specific knowledge: the alphabet, historical dates, or vocabulary in another language.  Explicit learning is known to depend on a subcortical brain structure, the hippocampus, as well as related structures in the medial temporal lobes. The hippocampus, either directly or indirectly, receives extensive inputs from nearly all of the cortical association areas as well as forebrain areas [Squire, Shimamura and Amaral 1989]. After a period of time, memories gradually become consolidated to different areas of the brain, and after this point hippocampal damage does not affect a given memory; in humans this consolidation process can go on for many years. 

On the other hand, implicit learning is a more passive acquisition of knowledge due to experience. Implicit learning tasks do not require previous explicit memory of a prior event, yet performance still reflects one's experience.  In contrast to explicit learning, implicit learning is thought to be unaffected by differences in IQ [Reber et al. 1991; McGeorge et al. 1997].  Amnesiac patients, who have greatly impaired ability to explicitly learn new things, can perform at the same level as control participants on implicit learning tasks, even though there are significant differences between the groups on tests of explicit knowledge [Reber and Squire 1994]. The neural network underlying implicit/procedural learning is not yet fully understood, though many studies implicate subcortical structures such as the cerebellum and basal ganglia along with cortical areas such as the supplementary motor area and prefrontal cortex in these types of tasks. Karmiloff-Smith [1992, 1994] suggested that cognitive development relies on procedural learning to begin the initial phase of setting up a new stage of representation.  Therefore, an implicit/procedural learning deficit in children could inhibit new skill acquisition; indeed, there is evidence that children and adults with reading disorders show poor implicit learning [Vicari et al. 2003, 2005; Sperling et al. 2004; Stoodley et al. 2005]. 

The relationship between explicit and implicit acquisition of knowledge is still under debate, but it appears to be condition-dependent. In a recent study investigating the learning of an artificial script, Bitan and Karni [2004] found that in some individuals knowledge about letter decoding could evolve implicitly from training on whole-word recognition. However, there was a dissociation between the subjects' declarative letter knowledge and their ability to effectively apply their implicit letter knowledge. They also found that long-term retention was better in the adults who were taught the script explicitly. It is important to note that this task involved adult participants who were already good readers and understood the breakdown of language into script. In children, the relationship between implicit and explicit acquisition during the development of literacy skills may follow different patterns than in adults.

Demands of reading on the brain

Reading is likely the most difficult skill we have to learn, requiring the integration of the visual, auditory, motor and language systems of the brain. For reading to develop successfully, all of these systems must function well both on their own and in conjunction with one another. In 1917, Bronner described the complicated task of reading as involving "the perception and discrimination of forms and sounds; associations of sounds with the visual appearance of letters; linkage of names with clusters of letters, and meaning with groups of words; memory, motor, visual and auditory factors; and motor processes as subsumed under processes of inner speech and reading aloud". Unlike spoken language, which does not need to be explicitly taught given normal development and environment, reading is a cultural construct and requires explicit instruction to master. Learning to read and write requires very different and more complex skills than speaking [see Lundberg and Hoien 2001], as one must develop an understanding of both the spoken properties and written form of the language. A child must learn the alphabetic principle - that a particular symbol represents a given sound, such that 'cat' is composed of /kuh/, /aah/ and /tuh/. This decomposition of language is necessary for literacy but not for oral communication. One reason the segmentation of words into distinct phonemes is so difficult is that phonemes blend together in natural speech.

What actually happens in the brain when one reads? Visual codes (orthography) must be translated into word sounds (phonology); meaning (semantics) emerges when the sounds correspond to a recognized, known word. The act of reading engages multiple brain systems which need to work together on a millisecond scale in order for reading to become fluent and automatic. In order to read this article, you must orient your attention to the task at hand, based on the appropriate signals from the motivational limbic system of the brain. Your eyes must be moved to the appropriate place on the page, and the visual system must simultaneously interpret the visual symbols on the page while maintaining proper fixation and eye movements as you move along the line of text. The visual system provides information about the shapes of letters and indeed whole words, forwarding this onward to the auditory and linguistic systems of the brain for further processing. This is where the visual pattern of letters and words is linked to the sound of the word, leading to word identification. For many words - those stored in our individual word-storage lexicons - this process is extremely rapid: rapid enough that your brain can process the word during the eye movement to the next word. The activation of the language system engages the comprehension processes that enable you to glean information from the words, in addition to the grammatical structures which enhance the understanding of your specific language. When reading continuous text, these grammatical and semantic systems must work in close conjunction with working memory, enabling you to hold the idea presented at the beginning of a sentence and integrate it with the end. What you read is then interpreted within the context of your own background knowledge and understanding of any idiomatic phrases or jargon.

The Dual Route model of reading, first introduced by Morton [1969], incorporates both phonology and vision into a model of reading.  There are two routes that can lead to the identification of a word, which in adults can be distinguished from one another (notably by damage to different areas of the brain) but are not likely to be entirely separate as was once thought [Seidenberg et al. 1994]. Familiar words that are already known in one's lexicon can be identified through the visual recognition of the form of the word (lexical / orthographic processing). Unfamiliar words must be translated into their sounds via the phonological (sublexical) route.  Support for two different routes to word identification comes from studies of acquired dyslexias (reading disorders) following damage to the brain. Phonological dyslexics can read irregular exception words but are unable to read nonwords (pseudowords). Surface dyslexics can sound out words and nonwords but cannot decode exception words. Acquired phonological dyslexia is usually caused by an infarct to the left middle cerebral artery that affects temporo-parietal and frontal regions; surface dyslexia is more often associated with atrophy in the anterolateral temporal lobe. Pure alexia (reading disorder) is characterized by poor reading of both irregular and nonwords and can be caused by left occipito-temporal damage.

The first 'route' to reading is the phonological route. 'Phonology' is strictly defined as 'the system of speech sounds in a language' [Steinmetz 1993].  The phonological route involves going back to the alphabet and interposing extra stages of translating the letters into the sounds they represent, then blending them together to give the auditory representation of the word and, from there, its meaning. To do this, we need to understand how words can be broken down into their constituent phonemes, to match what the letter/sound decoding system gives us. As so few words are familiar to beginning readers, learning to read involves training in 'phonics' and letter/sound translation. Early ability to do this is a very strong predictor of successful, rapid literacy acquisition.  Furthermore, skilled readers activate phonological information when reading earlier and more automatically than less skilled readers [Booth et al. 2000; Plaut and Booth, 2000; Booth, Perfetti and MacWhinney 1999].

The second route is the lexical or orthographic route. The lexical, visual recognition of known words is the fastest route to word decoding. There is no need for slow and laborious phonological mediation to decode the words as they are represented in visual form and known in one's lexicon. This process of visual recognition is essential for decoding irregular words (such as 'yacht') whose meanings cannot be derived from normal letter-sound correspondences. One needs to recognize the visual form of the word in order to retrieve its meaning. This is dependent on the rapid visual analysis of letters and their order. The visual form can then be linked directly to the lexicon to retrieve the word's meaning. By definition, the visual lexicon contains only representations of words that the reader has previously encountered [Castles and Coltheart 1993].  Thus, one's ability to read through the lexical/orthographic route is likely to change as a result of reading experience. The more word representations that are stored in the lexicon, the more rapid and fluent reading is.

Normal readers use both of these routes successfully and fluently. If either route is slightly impaired, it may lead to reading difficulties. There are many theories as to how phonological and orthographic information interact during reading development, in fluent readers, and in those with reading difficulties [e.g., Morton 1969; Seidenberg and McClelland 1989; Jacobs and Grainger 1994; Van Orden and Goldinger 1996; Manis et al. 1996]. Most current models acknowledge that lexical/orthographic and phonological processes, however distinct, rarely operate entirely independently. In fact, Van Orden and Goldinger [1996] argue that the dynamic interaction between the visual and phonological functions is the earliest source of constraints on perception.
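The division of labour between the two routes can be caricatured in a few lines of code. This is a toy sketch only, not a model taken from the text: the lexicon entries, the single-letter spelling-sound rules, and the function name are all invented for illustration, and real English grapheme-phoneme correspondence is far richer (digraphs, context-sensitivity, stress).

```python
# Toy sketch of the Dual Route model of reading.
# All entries and rules below are invented for illustration.

# Lexical/orthographic route: a whole-word store of familiar
# spellings, including irregular words such as 'yacht'.
LEXICON = {
    "cat": "/k/ /a/ /t/",
    "yacht": "/y/ /o/ /t/",  # the rules below could not decode this
}

# Phonological/sublexical route: letter-to-sound rules
# (grossly simplified to single letters).
GP_RULES = {
    "c": "/k/", "a": "/a/", "t": "/t/",
    "d": "/d/", "o": "/o/", "g": "/g/",
}

def read_word(word):
    """Return (route_used, phonology) for a printed word."""
    # Fast route: direct visual recognition of a known word.
    if word in LEXICON:
        return ("lexical", LEXICON[word])
    # Slow route: decode letter by letter, then blend the sounds.
    try:
        phonemes = [GP_RULES[letter] for letter in word]
    except KeyError:
        return ("failed", None)  # no rule for some letter
    return ("sublexical", " ".join(phonemes))
```

In this sketch the familiar word 'cat' and the irregular word 'yacht' are read lexically, while a novel string such as 'tad' falls back on the slower rule-based route - loosely mirroring how damage to one store or the other produces the surface and phonological patterns of acquired dyslexia described above.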

In the very early stages of literacy acquisition children often identify words visually rather than as being composed of letters that correspond to sounds [Frith 1985; Ehri & Wilce 1985]. The interaction and necessity of both the lexical and sub-lexical routes is well-supported: letter knowledge and phonemic segmentation skill measured in kindergarten have been shown to be the two best predictors of first grade reading achievement [Bradley & Bryant 1978; Share et al. 1984].  In addition, Talcott et al. [2000] found that an orthographic choice test [after Olson et al. 1994] was the best predictor of reading ability in a sample of normal 10-year-olds.  Bradley and Bryant [1983] found that visual training in letter forms, as well as phonic training in the sounds that the letters represent, are necessary to achieve the fastest reading improvement in 4-8 year olds [see Rayner et al. 2001 for review]. Thus, both visual- and auditory-based processes are integral to normal literacy acquisition.

Soon after first learning to decode text, children will not only be retrieving the meaning of single words, but will also need to understand groups of words in short phrases and sentences. Early in this process the child will begin to move their eyes to the next word even before the last is fully understood; reading thus quickly becomes a massively complicated cognitive operation - it is not surprising that many have difficulty mastering it!

Reading and the brain: What do we know?

During word reading, neural activity occurs in numerous areas: left-lateralized regions in occipital and occipito-temporal cortex, the left frontal gyrus, bilateral regions in the cerebellum, primary motor cortex, and the superior and middle temporal cortex, and medial regions in the supplementary motor area and anterior cingulate.  Aside from the basic visual and auditory cortical areas, the parts of the brain that make associations between visual and auditory information and their links with language-related regions, are crucial to the ability to read. 

Lesion studies

Much of the information about reading and the brain comes from studies of patients who have sustained brain damage that has rendered them unable to read. One of the earliest descriptions of such lesions was by Dejerine [1891, 1892], who found that 'word blindness' was associated with lesions to the left angular gyrus and the temporo-parietal junction. The patient could still understand speech, speak and write, but could no longer read words on the page. Damasio and Damasio [1983, 1986] suggested that Dejerine's patient suffered from a perceptual disorder, due to the inability of the visual information to bypass the area of damage and reach the extrastriate cortex of the left hemisphere for processing. Interesting data comes from studies of brain lesions in Japanese patients: damage to the posterior parietal lobe selectively upsets kanji reading (which is based on pictographs representing whole words), whereas damage to the posterior temporal lobe affects the ability to read kana, which is a syllabic script [Iwata 1986]. Lesion studies can be difficult to interpret because damage caused by poor brain development, tumors, trauma or stroke never destroys one region in isolation (or removes one function) - and it is clear that reading is mediated by a network of regions.

Cognitive ability and reading

General cognitive ability (IQ) accounts for about 25% of the variance in children's reading ability.  In the grand scheme of things, this is not a very large percentage, indicating that environmental effects such as text exposure and teaching methods contribute substantially to a child's reading ability. A commonly-encountered paradox is that children with relatively low IQ scores can be taught to read successfully (though sometimes with limited comprehension) whereas other children with strong cognitive ability struggle to acquire the basic decoding skills necessary to begin to read. 

Genetics

Twin studies, family studies and molecular genetic studies all provide evidence that reading ability (and disability) is to some extent a result of our genes. Furthermore, particular aspects of reading ability, such as phonological or orthographic skill, may show different patterns of inheritance [see review by Grigorenko 2001]. Significant variance in both phonological and orthographic skill can be attributed to heritable factors, and each of these factors accounts for independent variance in word recognition skill [Olson et al. 1994]. About half of the effect of IQ on reading survives after controlling for genetic effects that are shared between IQ and reading. Genes likely affect the development of the brain by influencing the migration of neurons during early brain development. The similarity in reading between siblings living in the same family can be as high as 70%. Much of the important information regarding the influence of genetics and environment comes from studies of dizygotic and monozygotic twins [Olson et al. 1989]. Monozygotic twins share identical genetic inheritance, whereas dizygotic twins share, on average, half of their genes with one another. Heritability can explain around 70% of the variance in reading in monozygotic twins and about 40% in dizygotic twins. There has been significant effort to establish the genetic basis of reading disorder (dyslexia). Recent linkage studies have, not surprisingly for such a complicated task, confirmed that several genes are involved, including sites on chromosomes 6 and 18, although there is also evidence that chromosomes 1, 2, 3, 13 and 15 contain sites that may influence reading ability [see Francks et al. 2002 for a review]. Most interesting is that several of the identified genes seem to be involved in controlling neuronal migration during early cortical development in utero.
The most common finding for genetic linkage to reading problems is on the short arm of chromosome 6, where two genes close to each other seem to contribute to the control of neuronal migration [Francks 2004]. Stein [2001] suggests that the development of all the large (magnocellular) neuron systems in the brain could be under the control of the genes implicated in reading difficulties. These systems are crucial for the rapid processing of changing information (temporal processing) throughout the visual, auditory and motor systems. Different alleles may affect different individuals more in one system than another in an idiosyncratic way, which may explain why some children and adults with reading difficulties are impaired in only the visual or the auditory system, but not both.
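The logic behind these twin comparisons can be illustrated with Falconer's classic formula, which estimates heritability from the gap between monozygotic and dizygotic twin correlations (MZ twins share essentially all their genes, DZ twins about half). The correlation values in this sketch are illustrative assumptions, not figures from any study cited here:

```python
def falconer_heritability(r_mz, r_dz):
    """Estimate broad heritability from twin correlations.

    Falconer's formula: h^2 = 2 * (r_MZ - r_DZ), exploiting the fact
    that MZ twins share ~100% of their genes and DZ twins ~50% on average.
    """
    return 2.0 * (r_mz - r_dz)

# Illustrative (not study-specific) twin correlations for a reading measure:
r_mz = 0.70  # monozygotic twin correlation
r_dz = 0.45  # dizygotic twin correlation

h2 = falconer_heritability(r_mz, r_dz)
# Under the same simple model, shared environment c^2 = r_MZ - h^2
shared_env = r_mz - h2

print(f"Estimated heritability: {h2:.2f}")              # 0.50
print(f"Estimated shared environment: {shared_env:.2f}")  # 0.20
```

Real behavioural-genetic studies use more sophisticated structural equation models, but the same principle applies: the more the MZ correlation exceeds the DZ correlation, the larger the inferred genetic contribution.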

Different language systems

The effect of different language systems on reading development is a recent area of research receiving significant attention. Cross-linguistic studies of reading and reading disabilities can give us information about how the language being learned affects the neural circuitry involved in the reading process. Languages differ in the depth of their orthography, i.e. the consistency of their letter-sound mappings. For example, English has a notoriously deep orthography, with many irregular words, verbs that do not follow normal conjugation patterns, and multiple variations on the pronunciation of any given letter, particularly vowels. Languages such as Spanish, German, Italian, Finnish and Greek have shallow orthographies, with consistent letter-sound relationships. In these languages, children with reading difficulties read very slowly but accurately, whereas in English and other deep orthographies (French, Danish) such children read both inaccurately and slowly.

Anatomy of the Reading Brain: Visual system

It may seem obvious to the layperson that vision is crucial to reading; surprisingly, this is strongly disputed by some experts, because the currently dominant theory in the field holds that phonological skill is far more important - that learning the individual letter sounds is the most important prerequisite to reading. However, vision is important even to the learning of phonology. It is not until children understand that words can be represented as a sequence of visual symbols that they can discover that the syllables they speak can be broken down further into individual letter-sounds (the alphabetic principle). It is possible that children do not begin to learn the phonemic structure of words until after they begin to learn that words can be represented as a sequence of letters [Morais et al. 1979]. Adults who cannot read only start to grasp the idea of letter-sounds once they have been taught the alphabetic principle [Castro-Caldas et al. 1998]. In Japanese (an ideographic language), children's phonological skills are limited to the mora (syllabic) level, as these children do not need to learn to subdivide word sounds into letter-sounds; their language does not require it.

The anatomy of the 'reading brain' begins in the retina, where the photoreceptors respond to the image of the word on the page. Information about letter symbols and order is relayed via the lateral geniculate nucleus (LGN) of the thalamus to the primary visual cortex in the occipital lobe. The visual system can be subdivided into two main processing streams - the dorsal stream for processing moving stimuli and for controlling eye movements and visual attention, and the ventral stream, which processes the fine detail of objects, including faces (see Fig. 3). The dorsal stream projects from the primary visual cortex dorsally to the parietal cortex; the ventral stream projects ventrally from the primary visual cortex towards the inferior temporal lobe. The ventral stream is involved in processing stimuli with high spatial resolution (such as text), while the dorsal stream processes low spatial resolution, moving stimuli.

Figure 3. The dorsal and ventral visual streams. (Source: http://www.uwosh.edu/departments/psychology/Vreven/Lab/Images/brain.jpg)

The dorsal stream may be important in understanding the order of letters in words and the order of words in sentences, and is certainly involved in proper eye movements. It is largely comprised of magnocellular neurons, which are large and heavily myelinated, and thus rapidly conducting. The magnocellular system is involved in the control of eye movements, which is crucial to successful, fluent reading of text. The eyes must 'scan' the visual information in a way that enables the fluent integration of important information, while inhibiting the processing of unnecessary distractors. During reading, eye movements (called saccades) are made to keep the region of the retina with the highest visual acuity focused on the appropriate region of text when the eyes pause (called fixations). The magnocellular system guides each saccade to land close to the center of the next word. The average fixation during reading lasts 200-250 msec and the average saccade length is about 8 letter spaces. It is during these fixations that the details of letters can be processed; thus successfully keeping the eyes stationary is another important component of reading text. The size and speed of saccades differ depending on reading experience, difficulty of text, and the size of the effective visual window (generally about 7-8 letter spaces to the right of fixation for English readers) [see Rayner 1996, and Rayner 1998 for review]. More experienced readers make faster fixations and longer saccades. Reading rate is a combination of the average fixation time, the number of fixations, and the frequency of regressive (backwards) eye movements. An example of the importance of saccadic movements for reading comes from a study by Gilchrist et al. [1997], who report the case of a woman with congenital extraocular muscular fibrosis who, unable to move her eyes, reads by moving her head in saccade-like movements.
If the ability to make accurate saccades to words is impaired, reading could be compromised [Crawford and Higham 2001]. Indeed, there is evidence that the magnocellular system is not functioning properly in children and adults with developmental dyslexia [see Stein and Walsh 1997; Stein 2001 for summaries].
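The figures quoted above (fixations of 200-250 msec, saccades of about 8 letter spaces, plus occasional regressions) can be combined into a back-of-the-envelope estimate of reading rate. The parameter values in this sketch are illustrative assumptions, not measurements from any study cited here:

```python
def reading_rate_wpm(fixation_ms=225, fixations_per_word=1.2,
                     regression_rate=0.15, saccade_ms=30):
    """Rough reading-rate estimate (words per minute) from eye-movement parameters.

    Assumes each word requires about fixations_per_word fixations on average,
    a fraction regression_rate of fixations are regressive (re-reads), and
    each saccade takes saccade_ms, during which no letter detail is processed.
    All values are illustrative, chosen to be consistent with the ~200-250 ms
    fixations and ~8-letter saccades described in the text.
    """
    effective_fixations = fixations_per_word * (1 + regression_rate)
    ms_per_word = effective_fixations * (fixation_ms + saccade_ms)
    return 60_000 / ms_per_word  # 60,000 ms per minute

print(f"Estimated rate: {reading_rate_wpm():.0f} words per minute")
```

With these assumed values the estimate lands in the low hundreds of words per minute, broadly in line with typical reading speeds; shortening fixations or reducing the number of fixations (and regressions) per word, as more experienced readers do, raises the estimated rate accordingly.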

The magnocellular system and dorsal visual processing stream are also involved in visual attention. The dorsal stream feeds back signals to the primary visual cortex, which can shift the focus of attention onto a word so that detailed information about the letters can be sent from the primary visual cortex to the ventral visual processing stream, where the word and letters can be identified [Vidyasagar 2004]. This 'attentional spotlight' can feed through letters from each fixation for identification and can also define the spatial location of letters with respect to one another. Serial visual search relies on the magnocellular system [Cheng et al. 2004]; poor readers are slower at serial visual search than good readers [Iles et al. 2000] and are also unduly affected by distractors [Facoetti et al. 2001], suggesting that magnocellular function is important for successful text decoding. Because training this visual attentional search mechanism can take a long time, it may limit how quickly children learn to read fluently. Reading involves training the visual attention system to move in a left-to-right linear fashion, rather than how it normally works - fairly randomly, concentrating on salient features [Horowitz and Wolfe 1998].

The ventral visual stream, which projects from the primary visual cortex to the inferior temporal lobe, is crucial for the identification of individual letters that are moved into the attentional spotlight by the magnocellular system. The visual 'word form area' (VWFA) in the anterior fusiform gyrus (BA 37, occipito-temporal region) is important for the automatic recognition of complex objects - and this includes patterns of letters and words. Whether or not this region is truly a 'visual word form area' is debated; Dehaene et al. [2004] argue that this region is appropriate for word recognition because it is involved in complex object processing, while Price et al. [2003] suggest that this region acts specifically to store words for retrieval. McCandliss and colleagues [2003] argue that the VWFA is plastic at first but becomes specialised through the process of literacy acquisition. Brain imaging shows this region to be active in every writing system that has been studied, implying that it is a general, rather than language-specific, region that is important in reading.

Anatomy of the Reading Brain: Auditory system

The sounds crucial to the understanding of language are processed by the auditory system. The hair cells in the cochlea are responsible for the transduction of the pressure changes in air which comprise sound into electrical signals. Cortical auditory regions in the temporal lobe are necessary for the processing of the complex signals that compose speech. Heschl's gyrus (BA 41) is the primary auditory processing area. Information from each cochlea arrives here via several brainstem relays, bringing important information about the amplitude (loudness) and frequency (pitch) composition of sounds. In the speech signal, the changes in frequency and amplitude are crucial. For example, the difference between 'b' and 'd' is a brief frequency change: one goes up in frequency ('d') and one goes down ('b'). Thus it is important that the brain is able to process auditory information that changes over time, as well as detect information that arrives in rapid succession (temporal processing). Children who have language and literacy difficulties may need longer periods of time between two auditory stimuli in order to process them [see Tallal 1985], and may also have difficulty detecting frequency and amplitude modulations (FM and AM, respectively) at certain frequencies [see Stein 2001; Talcott et al. 1999; Talcott and Witton 2002; Witton et al. 1998, 2002]. These very low-level auditory processing abilities have been found to correlate with reading skill in both poor and good readers, adults and children.

Structurally, the auditory and language areas are situated primarily in the temporal lobe of the brain. One of the hallmarks of human brains is that the planum temporale is larger in the left hemisphere than in the right (though this is not the case in some people with reading disorders), and damage to the left hemisphere leads to language disturbances, whereas similar damage to regions in the right hemisphere does not. The two main language areas are Broca's area in the inferior frontal cortex, which is involved in speech production, and Wernicke's area in the superior temporal gyrus, which is involved in speech comprehension.

Putting it all together - the Reading Network

As summarized above, reading involves the integration of the visual form of a word (orthography) with the auditory sound of the word (phonology), retrieval of the meaning (semantics) of the word; reading aloud also involves the pronunciation of the word, and control of the many muscles of the articulatory apparatus. Given the involvement of so many sensory and motor systems in reading, it is not likely that there is one 'reading area' in the brain - reading is a relatively recent phenomenon, and evolution would not have had time to produce such a structural change in the brain. Thus reading involves a widely distributed network of areas including the occipital regions, temporal lobe areas, precentral motor regions and the inferior frontal areas.

Figure 4. The reading network.  The areas highlighted include the occipito-temporal (OT) region, the visual word form area (VWFA), the middle temporal gyrus (MTG), the angular (AG) and supramarginal (SMG) gyri and the inferior frontal gyrus (IFG).

Four main areas contribute to the reading network, centred in the left hemisphere, whether reading silently or aloud (see Fig. 4). Letters are identified in the fusiform visual word form area. Then, at the left temporo-parietal junction, Wernicke's area communicates with the supramarginal and angular gyri. Here auditory and visual representations are integrated with the meaning of words. These areas project via the arcuate fasciculus to the left inferior frontal gyrus (including Broca's area). Finally, the cerebellum coordinates the other three areas in its role as a motor and cognitive modulator.

Visual word form area

Our representations of both letters and words need to be invariant for font, size, and location in visual space. It has been proposed that a region in the occipito-temporal area of the left cortical hemisphere is where our visual representations of letters and words are located. This region is situated under the surface of the occipital lobe and extends forward into the back of the middle temporal gyrus [Cohen et al. 2004], appropriately located in the ventral visual 'what' processing stream. In the posterior fusiform gyrus, letters seem to be processed according to their visual features rather than as orthographic, word-level representations; orthographic representation occurs more anteriorly, in the visual word form area. This region is activated equally by letters appearing in upper- and lowercase fonts, is less active during viewing of pseudowords, and is not activated at all by strings of letter-like symbols (presumably this is language-dependent - e.g. Chinese characters would count as real letters for Chinese readers, whereas Hebrew script would be merely letter-like symbols to them). Price and Devlin [2003] explain that this region is also activated when participants hear, repeat or think about word sounds, and when blind subjects read words in Braille. Interestingly, the degree of activation in this area mirrors the orthographic ability of the subject. Certainly the visual word form area seems likely to be an important part of the circuit translating visual symbols to meaning. This visual representation is projected forwards to the middle temporal gyrus, where it is integrated with the phonological representation of the word.

Temporo-parietal junction - Angular and Supramarginal gyri

This is where the angular (BA 39) and supramarginal (BA 40) gyri meet Wernicke's area (speech comprehension). It is here that the visual forms of letters are likely converted into their sounds, and from there their meaning is extracted from the semantic lexicon. The more anterior region of the superior temporal gyrus, including Heschl's gyrus and the planum temporale, may be involved in speech comprehension. More posterior regions (including the supramarginal and angular gyri) are thought to be involved in phonological analysis and the conversion of graphemes to phonemes. The supramarginal gyrus (BA 40) occupies an area on the other side of the Sylvian fissure from the temporal lobe, and is activated in many semantic processes, just like Wernicke's area below the fissure. The angular gyrus (BA 39) shares with BA 37 its 'prime location' at a cortical juncture site; in this case, where the temporal, parietal, and occipital lobes meet. Dejerine [1891, 1892] first observed that a lesion to the left angular gyrus produced profound loss of reading and writing. The 20th-century neurologist Norman Geschwind considered the angular gyrus to be the "seat of symbolic activity", where information from all modalities was integrated. While there is as yet no consensus, some imaging studies suggest that phonological information about a word is 'assembled' here and integrated with visual input, while other studies show that semantic information is involved both here and in the nearby supramarginal gyrus (BA 40). Simos et al. [2002] found that in skilled readers this area (particularly the SMG) is most activated when dealing with unfamiliar rather than familiar words. Just as activity in the visual word form area correlates with orthographic skill, activity here seems to correlate with phonological ability.
The angular and supramarginal areas are particularly active in children when they are learning to read; in fact, how active these regions are predicts how well children will learn to read.

Inferior frontal gyrus

The inferior frontal gyrus (IFG) includes the regions surrounding Broca's speech production area. Even though this is considered a 'speech production' area, it is also active during silent reading. The posterior IFG seems to be associated with phonological rehearsal, important for holding words in phonological short-term memory. During fluent text reading one must hold the first words of a given sentence in short-term storage in order to extract the meaning of the whole sentence from its grammatical structure (syntax). The anterior portion of the IFG may be important in the retrieval of a word's meaning [Poldrack et al. 1999], as it is more engaged by low-frequency words and pseudowords than by common words. It is thought that the anterior IFG is important during learning to read, participating in the decoding of new words [Pugh et al. 2005]. In older children, the poorer the reader, the more active this region is - this may be because poor readers are still heavily reliant on letter-sound translation rather than immediate visual-semantic recognition of the word.

Cerebellum

A role for the cerebellum in reading has only recently been proposed [see Nicolson, Fawcett and Dean 2001 for review]. Most neurophysiologists consider this large 'mini brain' to be solely part of the motor domain. It is true that the cerebellum is crucial to the coordination of smooth movements, but it is also involved in motor learning - adapting performance to become more fluent and accurate over time, so that skills become automatic. Because the cerebellum has reciprocal connections with frontal lobe regions, a role for the cerebellum in higher-level cognitive tasks has been proposed [Schmahmann 2004]. The cerebellum receives significant input from the magnocellular visual system, including the parietal angular and supramarginal gyri [Stein 1992]. In terms of reading, the cerebellum plays a role in the control of eye movements, the articulation of spoken words, and the control of hand movements during writing. The cerebellum is highly active during reading tasks, and lesions to the cerebellum can lead to reading problems [Moretti et al. 2002; Scott et al. 2001]. Furthermore, anatomical studies have shown that the size of the right cerebellar hemisphere predicts phonological and orthographic skills [Rae et al. 2002], and differences in cerebellar anatomy have been seen in children with reading disorders [see Leonard et al. 2001]. Behavioural studies of cerebellar function, including balance, motor and motor learning tasks [Stoodley et al. 2005a; Stoodley et al. 2005b; Stoodley and Stein, in press; Nicolson, Fawcett and Dean 2001], indicate that poor readers perform worse on these tasks than good readers, suggesting that the cerebellum may be part of the network important for successful reading.

Different 'routes' and the reading network

Although all of these regions are involved in the 'reading network', they each have slightly different, though interdependent, roles. How are phonological, orthographic and semantic information processed in this network? While the effect of word type on brain activation depends on the task, it seems that real words and pseudowords modulate activation in areas associated with semantic and phonological processing, respectively [see Mechelli and Price 2005].

Phonological processing is known to engage parts of the inferior frontal gyrus, regions in the temporal lobe, and the right cerebellum. Reading yields more activation in left posterior superior temporal and premotor regions that have been associated with phonological processing in the absence of orthographic input. The regions activated depend on the task involved: phoneme detection activates the auditory association cortex (including the planum temporale) and frontal areas; word-level phonological processing seems to activate Wernicke's area along with the frontal regions; and phonological judgement activates the frontal regions along with the insula (underneath the lateral fissure). Phonological vs. semantic tasks increase activation in more dorsal and posterior left inferior frontal regions, with bilateral activation in the insulae and supramarginal gyri. The angular gyrus is activated during phonological processing but also during a myriad of other tasks (including semantic and orthographic tasks); thus the angular gyrus may act to integrate phonological, orthographic and/or semantic processes. The cerebellum is also active during rhyming tasks, and in particular the right cerebellum (which connects to the left cerebral cortex) may be involved in phonological processing. Interestingly, a recent study showed that when phonological judgements were based on non-orthographic stimuli (pictures rather than print), there was no activation in the temporo-parietal or angular gyrus regions, suggesting that these areas are involved particularly in the phonological processing of text [Katzir, Misra and Poldrack 2005].

Orthography is unique to written language; thus orthographic representations are trained by the particular language a child is taught, be it alphabetic or ideographic. Fiez et al. [1999] found that simply looking at a visual symbol activates the primary visual cortex and secondary visual areas; only when word forms that are permissible in the participants' language are presented is there activity in lateral and medial occipital regions as well as the fusiform and lingual gyri. In a cross-cultural (English, Chinese and Japanese) analysis of imaging studies, Bolger et al. [2005] found that orthographic processing consistently involves the occipital/posterior fusiform regions (BA 18, 37/19) along with regions in the left middle fusiform and the posterior inferior temporal gyrus (BA 37). The left occipito-temporal regions are activated more by reading than by auditory word processing, indicating that the visual input of word forms is important to stimulate this region maximally. When reading and object naming are compared, the left occipito-temporal regions are more active during object naming (i.e., these are unlikely to be a special set of 'reading' neurons). This area might act as an interface in the retrieval of phonology from visual input.

Semantic processing, the retrieval of the meaning of a word from its orthographic and phonological representations, tends to increase left hemisphere activation in the anterior left inferior frontal regions, the angular gyrus, the middle temporal cortex and the anterior fusiform gyrus. The regions involved in semantic processing are generally located close to or overlap with phonological processing areas (e.g., the prefrontal cortex and superior temporal lobe).  Unlike many orthographic and phonological processing areas, regions involved in meaning extraction can be distributed bilaterally, in both the left and right hemispheres of the brain. Poldrack and colleagues [1999] suggest that anterior prefrontal areas are involved in direct semantic retrieval, whereas the more posterior prefrontal areas are involved in the access and retrieval of semantic information from phonological forms.

How do all these systems work together in terms of timing? Which areas are active first? This is difficult to determine using functional MRI, which has relatively poor temporal resolution. Magnetoencephalography (MEG) is a much better measure for determining the timing of activation across the regions of the reading network. A recent study by Pammer et al. [2004] used MEG to investigate the brain activation patterns of adult readers during the first 500 msec after exposure to a word. They found that activity in the left fusiform gyrus expands in two directions (posterior-anterior and medial-lateral) during the first half-second following word exposure. Interestingly, contrary to what would be expected from a hierarchical model of word recognition, activity in the visual word form area did not occur until nearly 200 msec after the word stimulus, and activity in inferior frontal gyrus areas (BA 44/46) preceded and was co-active with the visual word form area. If one assumed a neat progression of word identification from the visual to the language systems, the word form area would be predicted to be active before the more frontal areas. There was, however, a widely distributed pattern of activity between 200-500 msec in the left hemisphere. The authors suggest that following the presentation of a word there is an initial 'fast sweep' through the visual-temporal-frontal brain systems, followed by slower, more deliberate processing amongst the areas of the reading network.

It has been suggested that 'dorsal' and 'ventral' processing streams exist for written language in a similar way to how they exist for the processing of basic visual information [see McDougall et al. 2005].  Owen, Borowsky and Sarty [2002] suggest that there is a ventral pathway for sight vocabulary which includes occipito-temporal and insular cortices, and a dorsal pathway for less familiar exception words and pseudohomophones including the superior occipital and inferior parietal lobules and Broca's area.  Furthermore, Pugh et al. [2000] argue that the dorsal (temporo-parietal) pathway is associated with rule-based analyses engaged during phonological processing and the ventral (occipito-temporal) pathway is associated with the identification of words in sight-vocabulary. Posner and Raichle [1994] suggest that the ventral pathway is more automatic while the dorsal pathway involves more controlled processing of written words. These proposals take us back to the idea that there are two ways to identify a word: the fast 'ventral' orthographic-dependent pathway and the slower 'dorsal' pathway, which relies on phonological processing.

Cross-cultural reading studies

While these regions of the brain are certainly involved in reading, can we determine which processes are specific to reading? It can be informative to examine the distribution of activity in the reading network across different languages and writing systems. To do this, Bolger, Perfetti and Schneider [2005] performed a meta-analysis of 43 brain imaging studies from three different language groups: Western alphabetic languages (Roman alphabet); Japanese Kana and Kanji; and Chinese. The visual word form area was consistently activated across all of these languages, indicating that in all 'reading brains' this area is important for the representation of text. McBride-Chang and Kail [2002] found that, when comparing kindergarteners and first-graders in Hong Kong and the US, phonological awareness was the strongest predictor of reading ability. Hence it is clear that, regardless of which type of visual symbol is employed, reading in all languages requires a good understanding of both the visual forms and the sounds of words.

Western alphabets engaged a left occipital-temporal-frontal circuit including bilateral occipital areas (BA 19) and the posterior fusiform gyrus (BA 37), the left posterior superior temporal gyrus (BA 22), left inferior parietal areas (the angular gyrus (BA 39) and supramarginal gyrus (BA 40)), the inferior frontal cortex, the insula and premotor cortex. More regular orthographies rely more on the temporal regions of the cortex, while languages such as French and English, with more irregular words, rely more on the fusiform area (BA 37). This is not surprising, given that more irregular languages place an increased demand on learning the distinct visual patterns of irregular words. Chinese and Japanese Kanji are logographic, and hence show greater involvement of the visual occipital regions and the posterior fusiform gyrus, which are involved in visual-spatial functions. In Chinese children who are poor readers, the clearest deficits are in rapid naming and orthographic skills, both of which rely on efficient visual processing [see Ho et al. 2004].

A recent cross-linguistic (English, French and Italian) study of reading-disabled populations showed that the same region of the left hemisphere - including the middle temporal gyrus, the inferior and superior temporal gyri and the middle occipital gyrus - showed reduced activation in poor readers across these languages [Paulesu et al. 2001]. A follow-up structural analysis of these subjects showed that the reduction in activity is accompanied by altered density of gray and white matter in these regions [Silani et al. 2005], suggesting that both structural and functional abnormalities of the left-hemisphere reading network exist in poor readers of both deep and shallow orthographies. These data substantiate the concept that this left-hemisphere network of regions is important for reading as a cognitive process, irrespective of the language one is reading; however, the relative weighting of the areas within the network may depend on whether the language is alphabetic or logographic [Bolger et al. 2005; see also Siok et al. 2004].

Developmental implications - what happens functionally and structurally as a result of learning to read?

What happens to the brain as children learn to read? Recent studies have demonstrated that the acquisition of reading skills is reflected in progressively greater activation of left occipital, temporal and frontal regions and progressively less activation of posterior right-hemisphere regions [Turkeltaub et al. 2003; Shaywitz et al. 2002]. The neural network for reading can be strongly left-lateralized by 6 or 7 years of age [Gaillard et al. 2003], reflecting the development of the 'reading network' described above.

Studying patterns of myelination in the white matter is important: axon diameters and myelin sheaths grow rapidly from birth to 2 years of age, and significant changes take place in frontal networks between the ages of 3 and 6 years. Maturation of white matter is important to brain development in childhood, and maturation in specific regions of white matter correlates with specific cognitive functions. Peak growth rates in the fibers connecting the sensory and reading areas occur from 8 to 11 years, specifically in the sections of the corpus callosum connecting cortical regions in and near the language areas. This growth is accompanied by significant size increases in the temporal, parietal and occipital lobes [see Benes 1989].

In terms of structural data, Deutsch et al. [2005] conducted a diffusion tensor imaging (DTI) study investigating the white-matter tracts of children with a range of reading abilities. They found that white matter structure in the left temporo-parietal region - part of the reading network - differs between normal and poor readers; the integrity of this pathway correlated with reading, spelling and rapid naming abilities. The magnitude of the differences in the children was smaller than in a similar study conducted in adults [Klingberg et al. 2000], suggesting that the structure of the left temporo-parietal pathways is important in the development of fluent reading, particularly as poor structure in these pathways is associated with reading disorders. Because these white matter differences are present so early in life, they may be a cause rather than an effect of poor reading. Nagy et al. [2004] also used DTI to investigate developmental myelination patterns in children aged 8-18 years. Again, reading ability in this sample correlated with white matter integrity in the left temporal lobe, in the same regions where Klingberg et al. [2000] found deficits in adults with reading disability.

In adults, Klingberg et al. [2000] used DTI to compare reading-impaired and normal adults. They found differences in the temporo-parietal region bilaterally, although the difference was more marked in the left hemisphere. The directionality of the fibres in this region correlated with reading performance. Based on the direction and location of the affected fibres, the data suggest that the affected region lies at the border between the arcuate fasciculus and the visual projection fibers. This is interesting, as the arcuate fasciculus connects Wernicke's and Broca's language areas; damage here often leads to language disturbances.

Booth and colleagues [2004] used functional MRI to examine how neural activation during reading tasks differs between adults and children. During cross-modal lexical tasks, adults showed greater activation than children in the area proposed to be involved in orthographic-phonological mapping, indicating that this area becomes increasingly specialized for this function during development. In both orthographic-to-phonological and phonological-to-orthographic mapping, adults showed greater activity in the angular gyrus. Again, these data suggest that adults have a better-developed posterior heteromodal system for orthographic and phonological mapping than children. This is supported by previous studies, which found that adults perform better than children on cross-modal orthographic and phonological conversion tasks, and that this improved performance is in turn associated with greater activation in the supramarginal/angular and superior temporal gyrus areas [Booth et al. 2003], both crucial parts of the proposed reading network.

How can what we know impact education now?

Non-neural influences on reading development are well known, and include educational opportunity, socio-economic status and other socio-cultural factors [Adams 1990]; even though these are not directly brain-related, they will no doubt shape the developing brain. How does what we know about the brain and how it learns to read contribute to plans for the best education of children learning to read? The 'whole word' approach that became popular in the 1960s left children without the skills needed to use their phonological systems to read unfamiliar words and to decode words not yet in their lexicon. While phonics teaching has now returned to primary schools, it should be remembered that the fastest route to successful decoding is via the visual pathways, once the visual pattern of the word is stored. Thus the most successful reading programs should incorporate both exposure to the visual patterns of words and phonological training. Ideally, teaching will educate the whole brain, the whole reading system, and be fun and engaging so that the child is motivated to read in the first place.

Genetics seems to contribute a significant, yet not overwhelming, amount to the variance in reading ability. Genetic variation shapes how the brain develops, which in turn affects the basic auditory and visual processing that must be the first step in language and reading development. Thus basic neurophysiological processing will constrain the speed at which children learn to read; indeed, factors such as chronic ear infections in childhood have been found to adversely affect language and literacy development. How can we incorporate the development of the visual, auditory and motor systems into reading education? Some programs propose that a multi-modal approach to teaching reading - including tactile manipulation of letter shapes, auditory games such as singing nursery rhymes, and visual introduction of the shapes of letters and words - is best practice in primary school teaching.

At the moment, much of the neuroscience research investigating reading and the brain focuses on developmental reading disorders: their diagnosis, identification and remediation. Although this field is still controversial, progress has been made in identifying neurobiological differences between reading-disabled and normally-reading adults and children at the genetic level [see Grigorenko 2001; Francks et al. 2002], in brain activity [see Price and Mechelli 2005 for a recent review of neuroimaging studies] and in the sensorimotor systems [see Stein 2001 for review]. Furthermore, understanding the neurobiological deficits has led to the development of remediation programs aimed at this level [see Eden and Moats 2002 for review]. As our understanding of reading disability increases, hopefully so too will our understanding of how normally-developing children read.

Summary and future directions

What we know and understand about how the brain reads involves the interplay of regions that deal with complicated visual symbols and convert these symbols into meaning, either via letter-sound correspondences or directly, if the word is already known. Reading continuous text requires that this word-recognition system operate in conjunction with the regions of the brain controlling eye movements, and working memory systems are heavily involved throughout. While we are beginning to amass a basic understanding of the neural underpinnings of reading, many areas remain unexplored. First, most research thus far has focused on the accuracy of reading rather than its speed. Many adults with a history of reading difficulties achieve a reasonable level of decoding skill but never achieve rapid, fluent reading. How is fluency achieved? Another aspect of reading yet to be thoroughly explored is reading comprehension. One would imagine that successful decoding of words is the first step in reading comprehension, but what further processes are involved in understanding what we have read? After the early years of school, much of our learning is based on gleaning information from text. When decoding text is difficult and demanding, extracting meaning and garnering enjoyment are secondary to decoding; once decoding is fluent and automatic, meaning and enjoyment can become the primary goals of reading. Thus understanding the neuroscience underlying reading comprehension is of major importance for education in the later years of schooling.
While many deficits of children with reading disabilities have been characterized, there is still relatively little research addressing the fact that reading disabilities are problems in learning to read; characterizing how implicit and explicit learning contribute to literacy development will improve how these aspects of learning are targeted in education. Also with regard to reading disorders, a future goal is to develop reliable early screening tests for children at risk of reading failure; these may include examining neural activity patterns in infants and genetic tests, in addition to early assessment of phonological ability and sensory-motor processing. Translating our basic research on the neuroscience of reading into effective, physiologically-based teaching methods is an important next step in reading research. In doing so, we need to be aware of the individual variation within any group of children and find the best ways to teach to the strengths of each child's individual brain processing.

Acknowledgements

Dr C. Stoodley is funded by the Dyslexia Research Trust. The author would like to thank John Stein, Maryanne Wolf, Joseph DiNunzio and Maki Koyama for their contribution to the writing of this article.

REFERENCES

Adams M (1990) Beginning to Read. Cambridge, MA: MIT Press.

Benes FM (1989) Myelination of cortical-hippocampal relays during late adolescence. Schizophrenia Bulletin 15: 585-593.

Bitan T, Karni A (2004) Procedural and declarative knowledge of word recognition and letter decoding in reading an artificial script. Cognitive Brain Research 19:229-243.

Booth JR, Burman DD, Meyer JR, Gitelman DR, Parrish TB, and Mesulam MM (2004) Development of brain mechanisms for processing orthographic and phonological representations. Journal of Cognitive Neuroscience 16: 1234-1249.

Bradley L, Bryant P (1978) Difficulties in auditory organisation as a possible cause of reading backwardness. Nature 271:746-747.

Bradley L, Bryant P (1983) Categorising sounds and learning to read -- a causal connection. Nature 301:419-421.

Bronner AF (1917) The Psychology of Special Abilities and Disabilities. Boston: Little, Brown & Co.

Castles A, Coltheart M (1993) Varieties of developmental dyslexia. Cognition 47:149-80.

Castro-Caldas A, Petersson KM, Reis A, Stone-Elander S, Ingvar M (1998) The illiterate brain: Learning to read and write during childhood influences the functional organization of the adult brain. Brain 121:1053-63.

Cheng A, Eysel UT, Vidyasagar TR (2004) The role of the magnocellular pathway in serial deployment of visual attention. European Journal of Neuroscience 20:2188-92.

Cohen L, Dehaene S (2004) Specialization within the ventral stream: the case for the visual word form area. Neuroimage 22:466-476.

Crawford T, Higham S (2001) Dyslexia and the centre-of-gravity effect. Experimental Brain Research 137:122-126.

Damasio AR, Damasio H (1983) The anatomic basis of pure alexia.  Neurology 33: 1573-1583.

Damasio AR, Damasio H (1986) Hemianopia, hemiachromatopsia and the mechanisms of alexia.  Cortex 22: 161-169.

Dehaene S, Jobert A, Naccache L, et al. (2004) Letter binding and invariant recognition of masked words: Behavioral and neuroimaging evidence. Psychological Science 15:307-314.

Dejerine J (1891) Sur un cas de cecite verbale avec agraphie, suivi d'autopsie. C R Societe du Biologie 43:197-201.

Dejerine J (1892) Contribution a l'etude anatomo-pathologique et clinique des differentes varietes de cecite verbale. Memoires de la Societe de Biologie 4:61-90.

Eden G, Moats L (2002) The role of neuroscience in the remediation of students with dyslexia. Nature Neuroscience 5: 1080-1084.

Ehri L, Wilce L (1985) Movement into reading: Is the first stage of printed word learning visual or phonetic? Reading Research Quarterly 20: 163-179.

Facoetti A, Molteni M (2001) The gradient of visual attention in developmental dyslexia. Neuropsychologia 39:352-357.

Fiez J, Balota D, Raichle M, Petersen S (1999) Effects of lexicality, frequency, and spelling-to-sound consistency on the functional anatomy of reading. Neuron 24:205-218.

Francks C, MacPhie I, Monaco A (2002) The genetic basis of dyslexia. Lancet Neurology 1:483-490.

Frith U (1985) Beneath the surface of developmental dyslexia. In: Patterson K, Marshall J, Coltheart M, editors. Surface Dyslexia. London: Routledge and Kegan Paul. pp 310-330.

Gaillard WD, Balsamo LM, Ibrahim Z, Sachs BC, Xu B (2003) fMRI identifies regional specialization of neural networks for reading in young children. Neurology 60:94-100.

Gilchrist I, Brown V, Findlay J (1997) Saccades without eye movements. Nature 390:130-131.

Grigorenko EL (2001) Developmental dyslexia: An update on genes, brains and environments. Journal of Child Psychology and Psychiatry 42: 91-125.

Ho CS-H, Chan DW-O, Lee S-H, Tsang S-M, Luan VH (2004) Cognitive profiling and preliminary subtyping in Chinese developmental dyslexia. Cognition 91: 43-75.

Horowitz TS, Wolfe JM (1998) Visual search has no memory. Nature 394:575-7.

Iles J, Walsh V, Richardson A (2000) Visual search performance in dyslexia. Dyslexia 6:163-77.

Jacobs A, Grainger J (1994) Models of visual word recognition -- sampling the state of the art. Journal of Experimental Psychology: Human Perceptual Performance 20:1311-1334.

Karmiloff-Smith A (1992) Beyond modularity: a developmental perspective on cognitive science. Cambridge, MA: MIT Press.

Katzir T, Misra M, Poldrack RA (2005) Imaging phonology without print: Assessing the neural correlates of phonemic awareness using fMRI. NeuroImage 27: 106-115.

Leonard CM, Eckert MA and Bishop DVM (2005) The neurobiology of developmental disorders. Cortex 41: 277-281.

Lundberg I, Hoien T (2001) Dyslexia and phonology. In: Fawcett A, Ed. Dyslexia: Theory and Good Practice. London: Whurr Publishers. pp 109-123.

Manis F, Seidenberg M, Doi L, McBride-Chang C, Petersen A (1996) On the bases of two subtypes of developmental dyslexia. Cognition 58:157-195.

McBride-Chang C and Kail RV (2002) Cross-cultural similarities in the predictors of reading acquisition. Child Development 73: 1392-1407.

McCandliss B, Cohen L, Dehaene S (2003) The visual word form area: expertise for reading in the fusiform gyrus. Trends in Cognitive Science 7:293-299.

McClelland JL (1998) Complementary learning systems in the brain: a connectionist approach to explicit and implicit cognition and memory. Annals of the New York Academy of Sciences 843: 153-169.

McGeorge P, Crawford J, Kelly S (1997) The relationships between psychometric intelligence and learning in an explicit and implicit task. Journal of Experimental Psychology: Learning, Memory and Cognition 23:239-245.

Moretti R, Bava A, Torre P, Antonello R, Cazzato G (2002) Reading errors in patients with cerebellar vermis lesions. Journal of Neurology 249:461-468.

Morton J (1969) The interaction of information in word recognition. Psychological Review 76:165-178.

Nagy Z, Westerberg H, and Klingberg T (2004) Maturation of white matter is associated with the development of cognitive functions during childhood. Journal of Cognitive Neuroscience 16: 1227-1233.

Nicolson R, Fawcett A, Dean P (2001) Developmental dyslexia: the cerebellar deficit hypothesis. Trends in Neuroscience 24:508-511.

Olson R, Forsberg H, Wise B, Rack J (1994) Measurement of word recognition, orthographic, and phonological skills. In: Lyon G, Ed. Frames of reference for the assessment of learning disabilities: New views on measurement issues. Baltimore: Paul H. Brookes Publishing Co. pp 243-277.

Olson R, Wise B, Connors F, Rack J, Fulker D (1989) Specific deficits in component reading and language skills. Journal of Learning Disability 22:339-348.

Pammer K, Hansen PC, Kringelbach ML, Holliday I, Barnes G, Hillebrand A, Singh KD, and Cornelissen PL (2004) Visual word recognition: the first half second. NeuroImage 22: 1819-1825.

Paulesu E, Demonet J-F, Fazio F, et al. (2001) Dyslexia: cultural diversity and biological unity. Science 291:2165-2167.

Plaut D, Booth J (2000) Individual and developmental differences in semantic priming: Empirical and computational support for a single-mechanism account of lexical processing. Psychological Review 107:786-823.

Poldrack RA, Wagner AD, Prull MW, Desmond JE, Glover GH, Gabrieli JDE (1999) Functional specialization for semantic and phonological processing in the left inferior frontal cortex. NeuroImage 10: 15-35.

Price C and Mechelli A (2005) Reading and reading disturbance. Current Opinion in Neurobiology 15: 231-238.

Price C, Winterburn D, Giraud A, Moore C, Noppeney U (2003) Cortical localization of the visual and auditory word form areas: A reconsideration of the evidence. Brain and Language 86:272-286.

Rae C, Harasty J, Dzendrowskyj T, et al. (2002) Cerebellar morphology in developmental dyslexia. Neuropsychologia 40:1285-1292.

Rayner K (1996) What can we learn about reading processes from eye movements? In: Chase C, Rosen G, Sherman G, Eds. Developmental Dyslexia: Neural, Cognitive and Genetic Mechanisms. Baltimore: York Press. pp 89-106.

Rayner K (1998) Eye movements in reading and information processing: 20 years of research. Psychological Bulletin 124:372-422.

Rayner K, Foorman BR, Perfetti CA, Pesetsky D, Seidenberg MS (2001) How psychological science informs the teaching of reading. Psychological Science in the Public Interest 2: 31-74.

Reber P, Squire L (1994) Parallel brain systems for learning with and without awareness. Learning and Memory 1:217-229.

Reber A, Walkenfeld F, Hernstadt R (1991) Implicit and explicit learning: individual differences in IQ. Journal of Experimental Psychology, Learning, Memory and Cognition 17:888-896.

Scott R, Stoodley C, Anslow P, et al. (2001) Lateralized cognitive deficits in children following cerebellar lesions. Developmental Medicine and Child Neurology 43:685-691.

Seidenberg M, Manis F (1994) On the bases of 'surface' and 'phonological' subtypes of developmental dyslexia. Meeting of the Psychonomic Society. St. Louis, MO.

Seidenberg MS, McClelland JL (1989) A distributed, developmental model of word recognition and naming. Psychological Review 96:523-568.

Share D, Jorm A, Maclean R, Matthews R (1984) Sources of individual differences in reading acquisition. Journal of Educational Psychology 76: 1309-1324.

Shaywitz BA et al. (2002) Disruption of posterior brain systems for reading in children with developmental dyslexia. Biological Psychiatry 52: 101-110.

Silani G, Frith U, Demonet J, et al. (2005) Brain abnormalities underlying altered activation in dyslexia: A voxel based morphometry study. Brain 128:2453-2461.

Simos P, Breier J, Fletcher J, Bergman E, Papanicolaou A (2000) Cerebral mechanisms involved in word reading in dyslexic children: a magnetic source imaging approach. Cerebral Cortex 10:809-816.

Siok WT, Perfetti CA, Jin Z, Tan LH (2004) Biological abnormality of impaired reading is constrained by culture. Nature 431: 71-76.

Sperling A, Lu Z-L, Manis F (2004) Slower implicit categorical learning in adult poor readers. Annals of Dyslexia 54:281-303.

Squire LR, Shimamura AP and Amaral DG (1989) Memory and the hippocampus. In Byrne JH and Berry WO (Eds.) Neural models of plasticity: Experimental and theoretical approaches (pp. 208-239). New York: Academic Press.

Stein J. (2001) The sensory basis of reading problems. Developmental Neuropsychology 20:509-34.

Stoodley CJ, Fawcett AJ, Nicolson RI, Stein JF (2005) Impaired balancing ability in dyslexic children. Experimental Brain Research.

Stoodley CJ, Harrison EP, Stein JF (2005) Implicit motor learning deficits in dyslexic adults. Neuropsychologia.

Stoodley CJ and Stein JF (in press) A processing speed deficit in dyslexic adults? Evidence from a peg-moving task. Neuroscience Letters.

Talcott J, Witton C (2001) A sensory linguistic approach to the development of normal and dysfunctional reading skills. In: Witruk E, Friederici A, Lachmann T, Eds. Basic functions of language, reading and reading disability. Boston: Kluwer. pp 213-240.

Talcott J, Witton C, McClean M, et al. (1999) Can sensitivity to auditory frequency modulation predict children's phonological and reading skills? Neuroreport 10:2045-2050.

Talcott J, Witton C, McLean M, et al. (2000) Dynamic sensory sensitivity and children's word decoding skills. Proceedings of the National Academy of Sciences USA 97:2952-2957.

Tallal P, Stark R, Mellits E (1985) Identification of language-impaired children on the basis of rapid perception and production skills. Brain and Language 25:314-322.

Turkeltaub PE, Gareau L, Flowers DL, Zeffiro TA, Eden GF (2003) Development of neural mechanisms for reading. Nature Neuroscience 6: 767-773.

Van Orden G, Goldinger SD (1996) Phonologic mediation in skilled and dyslexic reading. In CH Chase, GD Rosen, GF Sherman (Eds.) Developmental dyslexia: Neural, cognitive and genetic mechanisms (pp. 185-223). Baltimore: York Press.

Vicari S, Finzi A, Menghini D, Marotta L, Baldi S, Petrosini L (2005) Do children with developmental dyslexia have an implicit learning deficit? Journal of Neurology Neurosurgery and Psychiatry 76:1392-7.

Vicari S, Marotta L, Menghini D, Molinari M, Petrosini L (2003) Implicit learning deficit in children with developmental dyslexia. Neuropsychologia 41:108-114.

Vidyasagar TR (2004) Neural underpinnings of dyslexia as a disorder of visuo-spatial attention. Clinical Experimental Optometry 87:4-10.

Witton C, Stein JF, Stoodley CJ, Rosner BS, Talcott JB (2002) Separate influences of acoustic AM and FM sensitivity on the phonological decoding skills of impaired and normal readers. Journal of Cognitive Neuroscience 14:866-74.

Witton C, Talcott JB, Hansen PC, et al. (1998) Sensitivity to dynamic auditory and visual stimuli predicts nonword reading ability in both dyslexic and normal readers. Current Biology 8:791-7.

Yakovlev PI and Lecours AR (1967) The myelogenetic cycles of regional maturation of the brain. In: Minkowski A (Ed) Regional development of the brain in early life (pp. 3-65). Oxford: Blackwell Scientific Publications.
