To fill a half-full glass

Dear readers,

As the semester and this module draw to a close, I would like to share some reflective thoughts. The first is: I don’t like whining. When doing a module on how education could be improved, it is all too easy to forget just how incredibly fortunate we are. Education in the UK, and in Western Europe as a whole, is light years ahead of education in many other parts of the world. Compared to their peers in poorer countries, Western children’s education is not cut short by war, or by the need to work mind-numbing manufacturing jobs to help support the family. Literacy in the UK is close to 100%. Children have access to computers and other modern technologies. Young people in Western cultures are treated as individuals. Corporal punishment is illegal and frowned upon. Children’s opinions are accorded more respect and interest by adults than in any other culture or historical era. Rote learning has largely been done away with, except in the areas where it makes sense (e.g. the times tables). These are all things we can be proud of. Let’s count ourselves lucky.

However, justified criticism is not whining. The most important lesson I have taken from this module is the issue of information abundance, and education’s failure to adapt to it (generally speaking, although there are already some very promising approaches, e.g. Mitra). The fact that all important factual information can be found on the internet in a matter of minutes or less means our schools and universities need to be restructured. The memorisation of facts will have to take a back seat (though some level of general knowledge is obviously important). I agree with Jesse that in the education system of the future the acquisition of skills will (and should!) play the central role.

I also hope that the luxury of time afforded to us by information abundance will allow for more holistic personal development. This can take several forms. On my blog I looked at Daniel Quinn’s proposal to teach the skills that our civilisation is based on (such as navigating by the stars) and considered approaches such as Outward Bound that aim to boost young people’s self-confidence and general wellbeing through outdoor education.

Another issue that is dear to my heart is that, to put it simplistically, stuff isn’t always easy. Schools should convey this message. Acquiring some skills simply requires grit and lots of time. On my blog and in my talks I criticised the ‘rapid fire’ approach to maths teaching, which leads many children to conclude that they are not ‘maths people’, and suggested solutions (such as the approach used by KIPP Schools in the US). I also reviewed the findings of the Expert Performance Movement, succinctly summarised by the quote “there is surprisingly little hard evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it” (Ericsson, 2006). This issue also has social implications. I believe that middle-class parents will always find ways to challenge and educate their children, sometimes to an unhealthy extent (see my post on ‘concerted cultivation’). In order to reduce social inequalities, schools should challenge students of all backgrounds to reach their full potential. Students must acquire the self-efficacy to deal with setbacks and delayed gratification. Being too soft and undemanding is misguided and simply puts students from poorer and less educated families at a disadvantage. As Shakespeare’s King Lear says: “Nothing will come of nothing”.

In conclusion: I have greatly enjoyed this module and would like to thank everyone who through their talks and blogs introduced me to new and stimulating ideas. These ideas will stay with me, and I hope that some of my ideas have influenced you. We all agree that there is loads of work to be done. While, as I suggest above, the glass of education is probably half-full, it could certainly do with some topping up. To stick with the (clumsy?) metaphor, I believe this module, more than any other, has helped us become better bartenders at the counter of knowledge. Now let’s fill that glass!

Children are not colour-blind!

Morgan Freeman: I don’t want a black history month. Black history is American history.
Mike Wallace: How we gonna get rid of racism until…
Morgan Freeman: Stop talking about it. I’m going to stop calling you a white man. And I’m going to ask you to stop calling me a black man. I know you as Mike Wallace. You know me as Morgan Freeman.

The sentiments expressed by actor Morgan Freeman (2005) on the US television show 60 Minutes are quite common. Many people (especially those who consider themselves left-wing or liberal) would like to live in a ‘colour-blind’ society, and feel uneasy talking about or highlighting racial differences. These attitudes influence child-rearing practices, especially in families with very young children. For instance Brown et al. (2007) found that out of 17,000 American families with children aged five to six, 45% had never or almost never discussed racial issues with their children. White parents were particularly reluctant to talk about race, with 75% never bringing up the topic in discussions with their children. The reasoning behind this is simple and compelling. Parents worry that mentioning race or ethnicity “unavoidably teaches a child a racial construct” and that even positive comments such as “it is wonderful that a black person can be president” will encourage children to “see divisions within society” (Bronson & Merryman, 2009).

I used to subscribe 100% to this point of view, though some studies I read about recently have changed my opinion. For example Patterson and Bigler (2006) conducted a study with preschoolers, who were randomly assigned to wear either a blue or a red t-shirt at school for three weeks. During this time the teachers never mentioned the t-shirts again or referred to the groups as “Reds” and “Blues”. At the end of the three weeks there was no outright hatred between the groups, and the children still played with each other. The students did show strong in-group preferences, however. Reds tended to believe that all other Reds were nice, but that only some of the Blues were pleasant (and vice versa), and children in both groups thought that their own group was smarter and more likely to win a race. This shows that children will use salient features to categorise individuals and develop in-group favouritism, even when adults are not making a big fuss about the categories.

Indeed, Katz (2003) showed that children as young as six months are sensitive to race, spending more time staring at faces that are a different race from their parents. At age five to six, children given a stack of pictures of people and instructed to sort them into two piles in whichever way they wanted were more likely to sort the pictures by race (68%) than by any other factor (age, gender, mood…).

So we can conclude that children (even very young ones) are patently not colour-blind. They notice salient features like race, and this influences their behaviour. Since many parents are reluctant to bring up the topic, children must draw their own inferences about race. And the conclusions they reach can be bizarre. For instance Vittrup (2007) interviewed the five- to seven-year-old children of white self-proclaimed ‘multiculturalists’ in Austin, Texas. When asked “Do your parents like black people?” 14% of these children answered that their parents did not like black people, while 38% did not know.

It seems that for children to get the message that racial discrimination is bad, adults must be extraordinarily direct and specific. General statements along the lines of “everybody’s equal” will probably not be associated with race by the children, and will have no effect on attitudes and behaviour (Vittrup, 2007). Bronson and Merryman (2009) note that most parents are quite comfortable talking to children about gender, and that this could be a model for talking about race. Just as adults reinforce the fact that “mommies can be doctors just like daddies”, they could be telling children directly that doctors can be any skin colour. It is time to stop avoiding the issue and be more open about race, especially with younger children who are unlikely to understand the usual vague message of tolerance promoted by well-meaning parents and teachers.

The opposite of the 10,000 Hour Rule

Last week I wrote about expertise, and how individuals must often devote over 10,000 hours to practice in a particular area before they achieve a level of mastery that makes them true ‘experts’. This week I will highlight an approach that is almost the diametric opposite of the 10,000 Hour Rule (but still requires hard work and self-discipline).

American entrepreneur Seth Godin (2012) observes that in our age of information abundance “the only barrier to learning for most young adults in the developed world is now merely the decision to learn”. US college student Jeremy Gleick appears to have taken these words to heart. For the last two and a half years the bioengineering student has set aside one hour a day as his ‘learning hour’. During this time Jeremy tries to learn something new. His only rules are that it must not involve uni work or merely consist of reading a novel. In January, Jeremy topped 1000 ‘learning hours’, most of them spent online. In these 1000 hours Jeremy has covered a truly phenomenal number of topics. His first hour was spent watching a documentary on gamma ray bursts, and he has since read works by Steven Pinker and Stephen Hawking, learnt about ancient civilisations, alchemy, the American Civil War and the lives of ants, and taught himself sign language, card tricks, juggling, yoga, hypnosis and blacksmithing, among other things.

So why is this important? Firstly, it is fun. After a straight month of daily ‘learning hours’, Jeremy missed a day, which he claims “somehow felt off”. Since then he has kept up a perfect streak. Even if he is staying at a friend’s house he merely excuses himself for an hour. In two and a half years Jeremy claims he has never found a subject he is not “somewhat interested in”.

Secondly, it will probably pay off in the future. Seth Godin (2012) writes that in 1960 the top ten employers in the US were: GM, AT&T, Ford, General Electric, U.S. Steel, Sears, A&P, Esso, Bethlehem Steel, and IT&T. Eight of these companies, Godin claims, offered hard-working people good long-term career prospects with the promise of substantial pay. Today, the top ten American employers are: Walmart, Kelly Services (a company offering temporary staffing services), IBM, UPS, McDonald’s, Yum (Taco Bell, KFC et al.), Target (a discount retailer), Kroger (a supermarket chain), Hewlett-Packard and The Home Depot. Of these only two (software and technology giants IBM and Hewlett-Packard) offer the kind of fulfilling career path that large companies of the 1960s provided. In order not to join the growing ranks of “burger flippers”, Godin writes, one must now either be an artist (“someone who brings new thinking and generosity to his work”) or a linchpin.

But in order to do this you need certain ‘raw materials’, I believe. In science, philosophy and art there is the concept of emergence. Ramachandran (2011) writes that emergence simply means that “the whole is greater – sometimes vastly so – than the mere sum of the parts”. A good example is salt. Neither of the two substances that make up salt (chlorine, a greenish poisonous gas, and sodium, a shiny metal) has anything saltlike about it. But together they form an edible white crystal. My argument is that in order to be truly original, one must first acquire the raw material: a substantial knowledge base from which great thoughts can emerge.

Unfortunately, the school system seems to be failing at this. Blogger Quantum Progress (2012) writes that: “So much of school consists of a teacher delivering pre-digested morsels of knowledge to students that students often flounder when seeking out learning on their own.” A solution, the blogger writes, would be to make all students take a course called Introduction to Life Long Learning Now (or something along those lines :)) where they can share with each other ways to find good sources (interesting books, online tutorials etc.).

Blood, sweat and tears – The 10,000 Hour Rule

What do violin soloists, chess grandmasters, surgeons and the Beatles have in common? Not much, you’re probably saying to yourself. Let me tell you a few stories, and you will understand what unites these very different groups.

In 1993 psychologist K. Anders Ericsson conducted a questionnaire study at the renowned Berlin Academy of Music. Ericsson asked professors to sort their violin students into three groups. In the first group were the students the professors deemed ‘least able’ and who would probably become teachers in the state school system. The second group were violinists considered ‘good’, who would probably play for a philharmonic orchestra or similar one day. The third group were the students the professors deemed ‘stars’. They would probably have careers as renowned violin soloists.

Ericsson found that all students had begun playing the violin at around age 5. They differed greatly, however, in the amount of time they had practiced. By age 20 the ‘least able’ students estimated that they had practiced a total of 4,000 hours. The ‘good’ students averaged around 8,000 hours. None of the ‘stars’ had practiced for less than 10,000 hours however. Interestingly, there was not a single student who had practiced for over 10,000 hours who was not classed as a star by the professors. This leads us to a fascinating conclusion: among students talented enough to get into a good music school, what distinguishes future star soloists from the rest is merely the amount of time they are willing to practice!

This pattern of extensive practice shows up over and over again wherever there is expertise involved. Gladwell (2008), for instance, writes about the Beatles, who travelled to Hamburg five times between 1960 and 1962 to play in clubs. Before undertaking these trips, the Beatles were “no good onstage”, their biographer Philip Norman writes. In Hamburg, the Beatles had to play seven nights a week, often for up to eight hours a night. By the time they had their first commercial success (1964), the Beatles had performed live an estimated 1,200 times, which, Norman notes, is more than most bands manage in their entire careers. He concludes: “The Hamburg crucible is one of the things that set the Beatles apart.”

The aforementioned Ericsson has edited a book called The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports and Games, collecting similar data and anecdotes from various fields. One group highlighted are chess grandmasters, a title awarded to strong chess players by the world chess organisation FIDE that is held for life. When the book was published (1996) the title had been awarded around 1,000 times. All but one individual had become grandmasters after competing in official tournaments for over 10 years. This individual was Bobby Fischer, widely considered the greatest chess player of all time, and he had played for 9 years before being awarded the title. Another group highlighted are surgeons, the only group of doctors who actually perform better the longer they are out of medical school. According to Ericsson this is because they are constantly exposed to practice involving immediate feedback and specific goal-setting.

So what does all of this mean? Ericsson and other psychologists of the Expert Performance Movement believe that we are vastly underestimating the importance of practice. Studies and anecdotal evidence from many fields show that ‘experts’ often have a vast amount of practice under their belts. “There is surprisingly little evidence that anyone could attain any kind of exceptional performance without spending a lot of time perfecting it,” Ericsson (2006) writes. What this means in practice is that individuals should be encouraged to pick something they are reasonably good at and work hard at it. Applying an expertise philosophy to schools, Dubner and Levitt (2006) argue that students “should be taught to follow their interests earlier in their schooling”, something that will sound familiar to anyone who has seen Sir Ken Robinson’s TED talks. In the US, KIPP Schools too appear to be following an expertise approach, requiring that students study maths, a subject widely considered difficult, for up to 120 minutes a day.

On a final note, while hard work (to be fair) does not sound particularly enjoyable, is there not something quite refreshing about these findings? While our genes undoubtedly play a great role, endowing us with certain levels of intelligence and creativity, we may be more in charge of our own destiny than previously assumed.

Interesting links:

A Star is Made – S. J. Dubner & S. D. Levitt (2006), The New York Times

How I would improve education

In this post, I will review the opinions and research that other bloggers and I have focused on over these past few weeks, and consider how they can be applied, in practice, to reforming the education system.

It is clear that there is a need for change. American cultural critic Daniel Quinn writes that the length of schooling considered ‘adequate’ has increased dramatically over the past two centuries, from around 4-5 years before the industrial revolution to the 12-13 years now common in developed countries. While our knowledge of the world has of course increased dramatically as well, much of this extra time seems to be spent learning things that any child would acquire anyway (e.g. the names of the primary colours) or will inevitably forget since the information will rarely be used (for instance how to analyse a poem).

Many (for instance Diana Laufenberg and Sugata Mitra) have stressed that we now live in an age of information abundance, where all important factual information can be found on the internet in a matter of minutes or less. To me this means that it is today less appropriate than ever to cram children’s minds with ‘filler’ that they will soon forget. Since it would be impractical to shorten schooling again (Quinn notes this would flood the job market with surplus labour), and since most modern teenagers (compared to those 200 years ago, say) are probably not emotionally mature enough to enter the world of work, I believe we should stick with the customary 12 years of schooling.

What this gives us is the luxury of time. And the luxury to teach things that really matter in ways that are effective. Some blogs and talks for this module have noted the importance of encouraging interest in STEM subjects (science, technology, engineering and mathematics). This requires fostering positive feelings towards maths from an early age. Boe (2002) found that a country’s maths performance can be predicted by its students’ persistence at filling out a 120-item questionnaire not requiring any mathematical skill. This might explain why cultures which value persistence (Korea, Japan, China) do well in international comparisons of maths performance (Gladwell, 2008). KIPP Schools, established in some of the most economically deprived areas of the US, have taken this message on board, requiring that students do 90-120 minutes of maths a day. This extra time allows students to develop real self-efficacy and skill in maths, compared to the usual ‘rapid fire’ approach that leads all but the strongest students to believe that they are not ‘maths people’.

The extra time afforded by information abundance also means we can teach some of the skills that have made human civilisation what it is. Quinn suggests that children will enjoy and benefit from activities like building shelters or learning to navigate by the stars. This could occur by guided participation (maybe involving peer-instruction?) within the ‘zone of proximal development’, a very useful concept introduced by Lev Vygotsky. Children and teenagers should also be given more time to spend outdoors. Hartig et al. (2003) found that natural surroundings increase positive affect, and Kaplan and Kaplan (1989) found that individuals with access to natural settings are healthier and report greater life satisfaction. ‘Forest kindergartens’ have great positive effects (Gorges, 2004) and wilderness expeditions for urban youth were shown to provide an “immensely satisfying” way of “relating to one’s surroundings and responding to one’s daily opportunities and challenges” (Kaplan & Talbot, 1983). As European societies are plagued daily by stabbings and other mindless violence, a happier generation more in touch with themselves and the world around them would be very welcome, I think.

Something I have not yet focused on is how schools can help young people discover and foster individual talents and give them a good start in the world of work. This is something I will be considering in future blog posts.

Into the wild!

After last week’s ranting, I thought I’d concentrate on something more pleasant for this week’s post: the wilderness. Or, more precisely, the positive effects of outdoor and wilderness education.

Davis (2003) identifies three approaches to what he terms environmental education and wilderness work. Firstly, there are programmes focusing on environmental science, natural history and wilderness skills, which aim to instil greater environmental sensitivity and ecological awareness. This type of approach is often integrated into the school system, and can encompass field trips to national parks and participation in community service and outdoor science projects.

Secondly, there are approaches focusing mainly on personal growth and increasing self-esteem, confidence, teamwork and leadership skills. A good example is the non-profit group Outward Bound, which organises expeditions for over 200,000 participants per year around the world. On Outward Bound trips, participants typically set up a base camp and spend the first days completing confidence-building challenges. They then set off into the wilderness with an instructor. As the wilderness skills of the group increase, the instructor delegates more responsibility to the group, allowing the participants to make their own decisions. Towards the end of the expedition, anyone who wishes can participate in a “solo”, leaving the group and spending some reflective time alone in the wilderness. Expeditions normally conclude with a final challenge which shows the participants how much they have developed during the course. Walsh and Golins (1976) developed the Outward Bound Process Model to account for the positive effects of Outward Bound programmes. Sadly, personal growth approaches have sometimes shown a darker side, particularly in the United States, where aggressive ‘boot-camp’ style wilderness therapy programmes have led to a number of deaths (see here for an example). Davis notes, however, that this should not detract from the value of “careful and professional programs”.

The third approach, ‘wilderness rites of passage’, is more overtly spiritual. Davis observes that modern society seems to offer few meaningful ways to mark transitions, and that existing rituals such as graduation or confirmation are often disconnected from the patterns of our lives and the larger social context. Wilderness rites of passage (conducted by various individuals and organisations including Marin Academy, a prestigious private high school in California) offer young people a relatively safe alternative to creating their own rites of passage involving drugs or other dangerous/illegal activities, and are based on the initiation practices of tribal cultures, chiefly the Native American ‘vision quest’ tradition. Participants pass through a number of stages: (1) preparation, when the individual prepares practically and mentally for the trip; (2) severance, when participants set up a base camp, adjust to life outdoors and may burn an object symbolising their old self; (3) threshold, when participants spend 3-4 days and nights in solitude (similar to the Outward Bound “soloing” tradition), contemplating and fasting; (4) return, when participants are celebrated and reincorporated into the community; and (5) implementation, when the individual applies the insights gained during the quest to their everyday life and behaviour.

So what are the psychological benefits of these wilderness activities? Wilson (1984) popularised the term biophilia, proposing that evolution has endowed all humans with an innate attraction to the natural world, and a number of studies have explored the effects of natural environments. For instance Hartig et al. (2003) found that positive affect increased and blood pressure and anger decreased when walking in a nature reserve, compared to an urban environment, and Kaplan and Kaplan (1989) found that individuals with access to natural settings are healthier and report greater life satisfaction. Wuthnow (1978) conducted a survey, reporting that 82% of respondents had “experienced the beauty of nature in a deeply moving way”. An approach involving suburban and inner-city youth known as the Outdoor Challenge Program was evaluated by Kaplan and Talbot (1983), who found that the wilderness helped participants feel “at peace with themselves” and offered “a way of relating to one’s surroundings and responding to one’s daily opportunities and challenges, that was immensely satisfying”.

Some thoughts on higher education

In his piece ‘End the University as We Know It’, the theologian Mark C. Taylor identifies a number of flaws he perceives in the way higher education is run. Some issues he raises are overspecialisation and lack of cooperation between different institutions and the departments within them, and the fact that with current levels of funding, teaching and research would not be possible without exploiting underpaid postgraduates. I strongly agree with these points, and know many of you will as well, so I won’t discuss them further here.

Some of Taylor’s points I do take issue with, however. For instance, his claim that universities prepare students too much for careers in academia instead of equipping them for jobs in other fields. Admittedly, the vast majority of graduates will not go into academia, but I think that is beside the point. In my opinion, universities should exist primarily to conduct research and familiarise students with a scientific way of thinking, all other functions being of secondary importance. Of course, students who are not interested in staying in research should be allowed to go to university (and anyway who, at 18-19, really knows what they want to do with their lives?). But I think it would be harmful to society to water down the scientific content of courses to accommodate the fact that most students will not become researchers.

According to engineer Jacque Fresco, the language of science is the closest humans have come to developing a universal language that leaves little room for individual interpretation. He writes “If a blueprint for an automobile is given to any technologically developed society anywhere in the world, regardless of political or religious belief, the finished product will be the same. This language was deliberately designed as a more appropriate way to state a problem. It is nearly free of vague interpretations and ambiguities.”

And what benefits the scientific method and universal language of science have brought! According to science writer Timothy Ferris, scientists, judging ideas “not by their brilliance but by whether they survived experimental tests”, have produced “the greatest increases in knowledge, health, wealth, and happiness in all human history”.

It is my belief that, aside from personal relationships and individual morality/spirituality, we have science to thank for nearly everything that positively impacts our lives. Isn’t learning the language of science, a language one might claim our entire modern civilisation is built upon, more important and widely applicable than being stuffed with mundane job skills? I believe that graduates, whatever field they end up in, will learn the particular skills they need on the job anyway. Universities cannot possibly equip students with the skills for all jobs that exist in the world. They can however familiarise them with the language of science (which, incidentally, is why I think Taylor is wrong about traditional dissertations, though non-traditional products, such as computer science students submitting a piece of software instead of a full dissertation, might be a great fit in some areas).

To reiterate, I do not believe that students who do not want to become academics should be kept out of university. Absolutely not! I also do not think that everybody should go to university (the entrepreneur James Altucher suggests a number of alternatives such as starting a business, travelling or creating art). But it should be an option, regardless of what the student’s future career plans may be. My point is that preparing students for the world of work must not come at any cost to the scientific rigour of course content. I believe that learning the language of science (maybe the most beautiful language of all?) can be useful and personally rewarding for any student, regardless of their future path, and is vital to solving many of the world’s problems such as pollution, poverty and hunger.

Concerted Cultivation vs. Natural Growth (Annette Lareau)

Annette Lareau is an American sociologist with an interest in parenting, social class and racial issues. Between 1993 and 1995, Lareau and her field workers investigated the daily lives of 88 American families with children. The investigators instructed families to treat them like “the family dog”, accompanying them to sports practice and doctor’s appointments, and spending several days and nights at their homes.

While one might expect the researchers to observe many different styles of parenting, reflecting the diversity of the sample, the parenting styles observed by Lareau and colleagues can in fact be sorted neatly into just two categories. Race appeared to have little effect on parenting, but socio-economic status did. According to Lareau, working-class parents pursued an approach that she called ‘accomplishment of natural growth’ while middle-class families adopted a strategy termed ‘concerted cultivation’.

Children from concerted cultivation households spend much time in after-school classes or programmes such as taking piano lessons or being on a football team. Parents in these families are very involved in their children’s free time, shuttling them from activity to activity. Concerted cultivation parents also emphasise negotiation, encouraging their children to question authority figures, including themselves. As a result, children from concerted cultivation homes are accustomed early to structured environments, tend to be less intimidated by authority and acquire a sense of “entitlement”, believing they are “worthy of adult interest” and can “customize” their environment. Lareau gives the example of middle-class ‘Alex’, who is taken to the doctor’s by his mother. In the car, she tells her son that he should not be shy and should ask the doctor anything he wants. Alex interacts in a relaxed way with the doctor, asking him questions and even interrupting him when he gets his age wrong and uses a word Alex does not know.

In contrast, children from poorer natural growth homes tend to spend most of their time playing outside with siblings and other children from their area. Parents spend little time at home because they are working, waiting for public transportation or queuing at social service agencies. They do not “schedule” their children’s time or care much about cultivating their children’s talents and interests. Parenting tends to be authoritarian, with children following commands without negotiation. Around authority figures such as teachers, working-class children and their parents tend to be subdued and passive, looking at the ground and not asking questions.

Lareau takes great pains to stress that one style is not “morally better” than the other (indeed, the working-class children often seemed more creative, independent and better behaved). She does claim, however, that children raised by concerted cultivation enjoy a huge advantage in educational and professional settings, having learnt to be organised, confident and articulate.

Coming from a family that is probably somewhere in between these two extremes (although tending towards concerted cultivation), I agree that both approaches have their merits. While the advantages of concerted cultivation are evident, the disadvantages may be less obvious. For instance, in her book ‘The Price of Privilege’, American psychologist Madeline Levine suggests that middle-class children may have more psychological problems than expected, often suffering from depression, drug addiction or anorexia.

Fortunately, there appears to be a middle way. The ‘slow parenting’ movement advocates letting children explore the world independently and opposes ‘over-parenting’. Parents choosing slow parenting are often middle-class and educated, and in my opinion this approach offers a way of dissociating the positive effects of concerted cultivation (self-confidence, entitlement) from the negative effects (lack of creativity and independence). For a practical example, ‘forest kindergartens’, where children aged 3-6 play relatively independently in a wooded area, are often associated with the slow parenting movement. While it has been argued that forest kindergartens do not adequately prepare children for school, Gorges (2004) found that such children showed more knowledge, creativity and social skills than their peers and were better at maths, reading, sports and music.

Links:

The first half of this article reviews Annette Lareau’s book ‘Unequal Childhoods: Class, Race, and Family Life’.

http://idler.co.uk/idleparent/ is the website of Tom Hodgkinson, a slow parenting advocate from the UK.

Zone of proximal development (Lev Vygotsky)

Lev Vygotsky was a Russian psychologist who was born in 1896. During his short life (he died of tuberculosis at age 38), Vygotsky pursued very diverse interests, mainly in the fields of child development and education. Since his work was banned in the Soviet Union for almost two decades, and translation into English only began in the 1960s and 70s, Vygotsky’s ideas remain relatively unknown in the Western world (compared to contemporaries like Piaget, for instance).

One of the ideas Vygotsky is best known for is the ‘zone of proximal development’ concept. In Mind in Society, a collection of his writings translated into English, Vygotsky criticises the assumption that only the things children can do on their own should be considered indicative of their mental abilities. He gives the example of an investigator examining two children of equal chronological age, who, through standardised testing, are determined to have a mental age of 8. Vygotsky writes that it would be wrong to assume that both children will take the same developmental path. The investigator could show the children different ways of dealing with problems they cannot yet solve (for instance by initiating the solution and asking the child to finish it, or by offering leading questions) and find that the first child can, under these circumstances, solve problems up to a 12-year-old’s level, while the second cannot.

The difference between actual developmental level (what the child can already do) and the level of potential development (problems that can be solved “under adult guidance, or in collaboration with more capable peers”) is what Vygotsky terms the zone of proximal development. Vygotsky writes that “all current testing systems” consider only solutions which a child reaches without assistance, demonstrations or leading questions, while “learning and imitation” are ignored and considered purely “mechanical processes”. Vygotsky argues that this is the wrong approach, and that imitation, for instance, can tell us a lot about the level of a child’s mental development. He gives the example of a teacher showing a young child how to solve a higher mathematics problem. The child would not understand the solution, no matter how often she tried imitating it, because it is not “within her developmental level”.

Vygotsky’s theory has many practical implications. In Mind in Society, Vygotsky argues that orienting learning toward what children can already do can be harmful, and gives the example of special needs education. He writes that early special needs educators realised that mentally retarded children found abstract thinking difficult, and thus eliminated all tasks requiring abstract thought from special needs teaching. According to Vygotsky, this probably reinforced the children’s handicap while suppressing any “rudiments of abstract thought” they might have had. For a more personal example of how Vygotsky’s ideas can be applied in practice, I would like to mention oral exams. The ‘mini-viva’ that was part of the assessment for the Evolutionary Psychology module at our university, for instance, did not just test our knowledge but also how we could apply it to new scenarios. This occurred within the framework of a conversation with an adult expert (the module organiser) who offered keywords and other support if we appeared ‘stuck’ (and thus, arguably, learnt more about us than if he had just let us talk uninterrupted for 12 minutes). In future blog posts, I will consider individuals who have expanded upon Vygotsky’s ideas, and discuss how his theories can be applied to issues like peer-to-peer instruction.

If you are interested, some chapters of ‘Mind in Society’ can be found here.

Daniel Quinn

For my first ‘proper’ post on this blog, I would like to give a short introduction to the views of Daniel Quinn, a fiction and non-fiction writer and self-proclaimed ‘cultural critic’ from the United States. Quinn was born in 1935 and spent most of his early career in educational and consumer publishing, before turning to freelance writing and lecturing on educational and environmental issues. In his talk ‘Schooling: The Hidden Agenda’, given in 2000 at the Houston Unschoolers Group Family Learning Conference, Quinn presents a thought-provoking (and somewhat radical ;)) perspective on formal education in modern societies.

Quinn begins by recalling the 1960s, when he was working for a publisher of educational materials. He recounts that his world view at this time was as “rock-solid and conventional” as that of any American senator, bus driver, movie star, or medical doctor. Quinn was in his mid-twenties, and working on a new mathematics programme for primary school children, when he was struck by a remarkable realisation: children in the younger grades “spend most of their time learning things that no one growing up in our culture could possibly avoid learning”. Quinn gives the example of a child who misses school on the day the primary colours are introduced. Would that child go through life “wondering what color the sky is”? Instead of teaching children such things, Quinn wonders whether it would not make more sense to teach them things that “they will not inevitably learn and that they would actually enjoy learning at this age” (this, incidentally, can be considered the paradigm I intend to adopt for this blog :)).

Quinn speaks about the Cold War, when the possibility of a nuclear war destroying human civilisation was very real and tangible. He wonders how many of us are able to find edible plants, stalk game animals or make tools from scratch, skills that modern humans have to a large extent lost, but which are, essentially, “the skills that our civilization is actually built on”. Why not teach children how to tan a hide, build a shelter, or even how to navigate by the stars, Quinn asks.

Quinn then comes to the widespread notion that something is wrong with schools. He provocatively asks whether we might be mistaken, and whether schools may be doing superbly at the job they were designed for. Quinn describes the situation two hundred years ago, when American society was primarily agrarian. In those days, four to six years of education were considered adequate for nearly everyone, and children began working on the farm at around age 11. As society industrialised, less manual labour was required, and it became important to give all these surplus children something to do. Mandatory education was thus extended to grade eight, and politicians and educators set about creating material to fill school curricula, such as requiring children to memorise the capital of every US state.

The author then skips forward to the Great Depression, when it became vital to keep youngsters from entering the job market for as long as possible. During this time, the importance of a 12-year education was promoted, and curricula were filled with yet more material, such as requiring students to analyse poems. After WWII, higher education was expanded, leaving today’s youngsters with two options. Quinn notes that around 99.9% of high school graduates either get a (relatively low-paid) job immediately after school, or enter higher education. He contrasts the modern education system with tribal cultures, where children “graduate” at age 13, having a “survival value” of 100% as they have mastered all the skills they need to cope as adults. Quinn observes that modern society does not want graduates to have a high survival value, since this would allow them to opt out of the established economic system.

Quinn then argues that the modern educational system is bolstered by several cultural myths. The most important of these is the idea that children do not want to learn. Quinn believes this to be fallacious, and points to the natural inquisitiveness of infants and young children and to the fact that the transmission of culture across generations would be impossible without an innate desire to learn. He does however note that the biological clock for learning, present from birth, is somewhat muffled by a biological clock for mating which becomes important from the onset of puberty (incidentally the time when youngsters in tribal societies have mastered adult survival skills). The educational system, according to Quinn, ignores this fundamental fact about human biology. Moreover, Quinn notes that humans have no reason to retain information that is not important to them. He argues that students will forget most of the things they learnt at school, and points out that he himself took two years of classical Greek and would now be unable to read “a single sentence”. Finally, Quinn debunks the myth that schooling must take longer because our knowledge of the world has dramatically increased. Were students to study all new fields like aeronautics, particle physics and neurophysiology in depth, schooling would not take 12 years but several lifetimes!

When I read Daniel Quinn’s views on education a few years ago, I found them fascinating, and I can certainly say that they have changed my outlook. I think most of us, from personal experience, would agree with the observation that students do not retain most of the information they learn at school :), and I find it interesting to hear about some of the historical and economic background of how school curricula became the way they are (Quinn’s observations hold true for pretty much any western industrialised nation, I think). Since much curricular content appears to be essentially ‘filler’, most of which is soon forgotten, or consists of information that anyone growing up in our society will learn anyway, can it hurt to spend some time teaching children some of the survival skills that made human civilisation what it is? Skills that are fun to master, and that can help children gain a better understanding of who they are, and of their place in the world?

The full text of Daniel Quinn’s talk ‘Schooling: The Hidden Agenda’ can be found here.

www.ishmael.org is Daniel Quinn’s website.
