IELTS Reading Questions

The IELTS Reading test is split into 3 sections. Here we take a look at some of the questions and topics that might come up.


Take a look at some of these reading questions. Each quiz is designed like the IELTS test, with 3 sections, each containing between 2 and 4 sets of questions.

IELTS Reading Academic 1

In this challenge, the questions are set up as they would be in the IELTS exam: 3 sections, each containing between 2 and 4 sets of questions. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

Electroreception

A   Open your eyes in sea water and it is difficult to see much more than a murky, bleary green colour. Sounds, too, are garbled and difficult to comprehend. Without specialised equipment humans would be lost in these deep sea habitats, so how do fish make it seem so easy? Much of this is due to a biological phenomenon known as electroreception – the ability to perceive and act upon electrical stimuli as part of the overall senses. This ability is only found in aquatic or amphibious species because water is an efficient conductor of electricity.

B   Electroreception comes in two variants. While all animals (including humans) generate electric signals, because they are emitted by the nervous system, some animals have the ability – known as passive electroreception – to receive and decode electric signals generated by other animals in order to sense their location.

C   Other creatures can go further still, however. Animals with active electroreception possess bodily organs that generate special electric signals on cue. These can be used for mating signals and territorial displays as well as locating objects in the water. Active electroreceptors can differentiate between the various resistances that their electrical currents encounter. This can help them identify whether another creature is prey, predator or something that is best left alone. Active electroreception has a range of about one body length – usually just enough to give its host time to get out of the way or go in for the kill.

D   One fascinating use of active electroreception – known as the Jamming Avoidance Response mechanism – has been observed between members of some species known as the weakly electric fish. When two such electric fish meet in the ocean using the same frequency, each fish will then shift the frequency of its discharge so that they are transmitting on different frequencies. Doing so prevents their electroreception faculties from becoming jammed. Long before citizens’ band radio users first had to yell “Get off my frequency!” at hapless novices cluttering the air waves, at least one species had found a way to peacefully and quickly resolve this type of dispute.

E   Electroreception can also play an important role in animal defences. Rays are one such example. Young ray embryos develop inside egg cases that are attached to the sea bed. The embryos keep their tails in constant motion so as to pump water and allow them to breathe through the egg’s casing. If the embryo’s electroreceptors detect the presence of a predatory fish in the vicinity, however, the embryo stops moving (and in so doing ceases transmitting electric currents) until the fish has moved on. Because marine life of various types is often travelling past, the embryo has evolved only to react to signals that are characteristic of the respiratory movements of potential predators such as sharks.

F   Many people fear swimming in the ocean because of sharks. In some respects, this concern is well grounded – humans are poorly equipped when it comes to electroreceptive defence mechanisms.  Sharks, meanwhile, hunt with extraordinary precision. They initially lock onto their prey through a keen sense of smell (two thirds of a shark’s brain is devoted entirely to its olfactory organs). As the shark reaches proximity to its prey, it tunes into electric signals that ensure a precise strike on its target; this sense is so strong that the shark even attacks blind by letting its eyes recede for protection.

G   Normally, when humans are attacked it is purely by accident. Since sharks cannot detect from electroreception whether or not something will satisfy their tastes, they tend to “try before they buy”, taking one or two bites and then assessing the results (our sinewy muscle does not compare well with plumper, softer prey such as seals). Repeat attacks are highly likely once a human is bleeding, however; the force of the electric field is heightened by salt in the blood which creates the perfect setting for a feeding frenzy.  In areas where shark attacks on humans are likely to occur, scientists are exploring ways to create artificial electroreceptors that would disorient the sharks and repel them from swimming beaches.

H   There is much that we do not yet know concerning how electroreception functions. Although researchers have documented how electroreception alters hunting, defence and communication systems through observation, the exact neurological processes that encode and decode this information are unclear. Scientists are also exploring the role electroreception plays in navigation. Some have proposed that salt water and magnetic fields from the Earth’s core may interact to form electrical currents that sharks use for migratory purposes.

Question

Label the diagram.

Choose NO MORE THAN TWO WORDS from the passage for each answer.

Shark’s ______ alert the young ray to its presence

Embryo moves its ______ in order to breathe

Embryo stops sending ______ when predator close by

Question

Complete the summary below.

Choose NO MORE THAN THREE words from the passage for each answer.

Shark Attack

A shark is a very effective hunter. Firstly, it uses its ______ to smell its target. When the shark gets close, it uses ______ to guide it toward an accurate attack. Within the final few feet the shark rolls its eyes back into its head. Humans are not popular food sources for most sharks due to their ______. Nevertheless, once a shark has bitten a human, a repeat attack is highly possible as salt from the blood increases the intensity of the ______.

Question

Match each of the summary sentences below to one of the paragraphs in the passage.

how electroreception might help creatures find their way over long distances
a possible use for electroreception that will benefit humans
how electroreception can be used to help fish reproduce
a description of how some fish can avoid disrupting each other’s electric signals
the term for the capacity which enables an animal to pick up but not send out electrical signals
why only creatures that live in or near water have electroreceptive abilities

Fair games?

For seventeen days every four years the world is briefly arrested by the captivating, dizzying spectacle of athleticism, ambition, pride and celebration on display at the Summer Olympic Games. After the last weary spectators and competitors have returned home, however, host cities are often left awash in high debts and costly infrastructure maintenance. The staggering expenses involved in a successful Olympic bid are often assumed to be easily mitigated by tourist revenues and an increase in local employment, but more often than not host cities are short-changed and their taxpayers for generations to come are left settling the debt.

Olympic extravagances begin with the application process. Bidding alone will set most cities back about $20 million, and while officially bidding only takes two years (for cities that make the shortlist), most cities can expect to exhaust a decade working on their bid from the moment it is initiated to the announcement of voting results from International Olympic Committee members. Aside from the financial costs of the bid alone, the process ties up real estate in prized urban locations until the outcome is known. This can cost local economies millions of dollars of lost revenue from private developers who could have made use of the land, and can also mean that particular urban quarters lose their vitality due to the vacant lots. All of this can be for nothing if a bidding city does not appease the whims of IOC members – private connections and opinions on government conduct often hold sway (Chicago’s 2012 bid is thought to have been undercut by tensions over U.S. foreign policy).

Bidding costs do not compare, however, to the exorbitant bills that come with hosting the Olympic Games themselves. As is typical with large-scale, one-off projects, budgeting for the Olympics is a notoriously formidable task. Angelenos have only recently finished paying off their budget-breaking 1984 Olympics; Montreal is still in debt for its 1976 Games (to add insult to injury, Canada is the only host country to have failed to win a single gold medal during its own Olympics). The tradition of runaway expenses has persisted in recent years. London Olympics managers have admitted that their 2012 costs may increase ten times over their initial projections, leaving taxpayers 20 billion pounds in the red.

Hosting the Olympics is often understood to be an excellent way to update a city’s sporting infrastructure. The extensive demands of Olympic sports include aquatic complexes, equestrian circuits, shooting ranges, beach volleyball courts, and, of course, an 80,000 seat athletic stadium. Yet these demands are typically only necessary to accommodate a brief influx of athletes from around the world. Despite the enthusiasm many populations initially have for the development of world-class sporting complexes in their home towns, these complexes typically fall into disuse after the Olympic fervour has waned. Even Australia, home to one of the world’s most sportive populations, has left its taxpayers footing a $32 million-a-year bill for the maintenance of vacant facilities.

Another major concern is that when civic infrastructure developments are undertaken in preparation for hosting the Olympics, these benefits accrue to a single metropolitan centre (with the exception of some outlying areas that may get some revamped sports facilities). In countries with an expansive land mass, this means vast swathes of the population miss out entirely. Furthermore, since the International Olympic Committee favours prosperous “global” centres (the United Kingdom was told, after three failed bids from its provincial cities, that only London stood any real chance at winning), the improvement of public transport, roads and communication links tends to concentrate in places already well-equipped with world-class infrastructures. Perpetually by-passing minor cities creates a cycle of disenfranchisement: these cities never get an injection of capital, they fail to become first-rate candidates, and they are constantly passed over in favour of more secure choices.

Finally, there is no guarantee that an Olympics will be a popular success. The “feel good” factor that most proponents of Olympic bids extol (and that was no doubt driving the 90 to 100 per cent approval rates of Parisians and Londoners for their cities’ respective 2012 bids) can be an elusive phenomenon, and one that is tied to that nation’s standing on the medal tables. This ephemeral thrill cannot compare to the years of disruptive construction projects and security fears that go into preparing for an Olympic Games, nor the decades of debt repayment that follow (Greece’s preparation for Athens 2004 famously deterred tourists from visiting the country due to widespread unease about congestion and disruption).

There are feasible alternatives to the bloat, extravagance and wasteful spending that comes with a modern Olympic Games. One option is to designate a permanent host city that would be re-designed or built from scratch especially for the task. Another is to extend the duration of the Olympics so that it becomes a festival of several months. Local businesses would enjoy the extra spending and congestion would ease substantially as competitors and spectators come and go according to their specific interests. Neither the “Olympic City” nor the extended length options really get to the heart of the issue, however. Stripping away ritual and decorum in favour of concentrating on athletic rivalry would be preferable.

Failing that, the Olympics could simply be scrapped altogether. International competition could still be maintained through world championships in each discipline. Most of these events are already held on non-Olympic years anyway – the International Association of Athletics Federations, for example, has run a biennial World Athletics Championship since 1983 after members decided that using the Olympics for their championship was no longer sufficient. Events of this nature keep world-class competition alive without requiring Olympic-sized expenses.

Question

Do the following statements agree with the information given in the Reading Passage?

True - if the statement agrees with the information

False - if the statement contradicts the information

Not Given - if there is no information on this

Residents of host cities have little use for the full range of Olympic facilities.

Australians have still not paid for the construction of Olympic sports facilities.

People far beyond the host city can expect to benefit from improved infrastructure.

It is difficult for small cities to win an Olympic bid.

When a city makes an Olympic bid, a majority of its citizens usually want it to win.

Whether or not people enjoy hosting the Olympics in their city depends on how athletes from their country perform in Olympic events.

Fewer people than normal visited Greece during the run up to the Athens Olympics.

Question

Choose TWO

Which TWO of the following does the author propose as alternatives to the current Olympics?


Failing that, the Olympics could simply be scrapped altogether. International competition could still be maintained through world championships in each discipline. Most of these events are already held on non-Olympic years anyway – the International Association of Athletics Federations, for example, has run a biennial World Athletics Championship since 1983 after members decided that using the Olympics for their championship was no longer sufficient. Events of this nature keep world-class competition alive without requiring Olympic-sized expenses.

Question

Purpose-built sporting venues
Cost estimates for the Olympic Games
Personal relationships and political tensions
Bids to become a host city
Urban developments associated with the Olympics

Time Travel

Time travel took a small step away from science fiction and toward science recently when physicists discovered that sub-atomic particles known as neutrinos – progeny of the sun’s radioactive debris – can exceed the speed of light. The unassuming particle – it is electrically neutral, small but with a “non-zero mass” and able to penetrate the human form undetected – is on its way to becoming a rock star of the scientific world.

Researchers from the European Organisation for Nuclear Research (CERN) in Geneva sent the neutrinos hurtling through an underground corridor toward their colleagues at the Oscillation Project with Emulsion-Tracing Apparatus (OPERA) team 730 kilometres away in Gran Sasso, Italy. The neutrinos arrived promptly – so promptly, in fact, that they triggered what scientists are calling the unthinkable – that everything they have learnt, known or taught stemming from the last one hundred years of the physics discipline may need to be reconsidered.

The issue at stake is a tiny segment of time – precisely sixty nanoseconds (which is sixty billionths of a second). This is how much faster than the speed of light the neutrinos managed to go in their underground travels and at a consistent rate (15,000 neutrinos were sent over three years). Even allowing for a margin of error of ten billionths of a second, this stands as proof that it is possible to race against light and win. The duration of the experiment also accounted for and ruled out any possible lunar effects or tidal bulges in the earth’s crust.
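The timing claim above can be sanity-checked with simple arithmetic. The sketch below (an editorial illustration, not part of the passage) uses the figures quoted in the paragraph – a 730 km baseline and a 60 nanosecond early arrival – to show how slim the reported margin actually is:

```python
# Back-of-the-envelope check of the neutrino timing figures quoted in the
# passage (730 km from Geneva to Gran Sasso, 60 ns early arrival).
C = 299_792_458        # speed of light in a vacuum, m/s
DISTANCE_M = 730_000   # CERN -> Gran Sasso baseline, metres
EARLY_S = 60e-9        # reported early arrival, seconds

light_time = DISTANCE_M / C             # ~2.435 milliseconds
fraction_faster = EARLY_S / light_time  # ~2.5 parts per 100,000

print(f"Light needs about {light_time * 1e3:.3f} ms for the trip")
print(f"Neutrinos arrived ~{fraction_faster * 1e5:.1f} parts per 100,000 early")
```

Even with the 10 ns margin of error the passage mentions, a 60 ns effect sits well outside it, which is why the measurement caused such a stir.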

Nevertheless, there’s plenty of reason to remain sceptical. According to Harvard University science historian Peter Galison, Einstein’s relativity theory has been “pushed harder than any theory in the history of the physical sciences”. Yet each prior challenge has come to no avail, and relativity has so far refused to buckle.

So is time travel just around the corner? The prospect has certainly been wrenched much closer to the realm of possibility now that a major physical hurdle – the speed of light – has been cleared. If particles can travel faster than light, in theory travelling back in time is possible. How anyone harnesses that to some kind of helpful end is far beyond the scope of any modern technologies, however, and will be left to future generations to explore.

Certainly, any prospective time travellers may have to overcome more physical and logical hurdles than merely overtaking the speed of light. One such problem, posited by René Barjavel in his 1943 text Le Voyageur Imprudent, is the so-called grandfather paradox. Barjavel theorised that, if it were possible to go back in time, a time traveller could potentially kill his own grandfather. If this were to happen, however, the time traveller himself would never have been born, even though his birth is already an established fact. In other words, there is a paradox in circumventing an already known future; time travel is able to facilitate past actions that mean time travel itself cannot occur.

Other possible routes have been offered, though. For Igor Novikov, astrophysicist behind the 1980s’ theorem known as the self-consistency principle, time travel is possible within certain boundaries. Novikov argued that any event causing a paradox would have zero probability. It would be possible, however, to “affect” rather than “change” historical outcomes if travellers avoided all inconsistencies. Averting the sinking of the Titanic, for example, would revoke any future imperative to stop it from sinking – it would be impossible. Saving selected passengers from the water and replacing them with realistic corpses would not be impossible, however, as the historical record would not be altered in any way.

A further possibility is that of parallel universes. Popularised by Bryce Seligman DeWitt in the 1960s (from the seminal formulation of Hugh Everett), the many-worlds interpretation holds that an alternative pathway for every conceivable occurrence actually exists. If we were to send someone back in time, we might therefore expect never to see him again – any alterations would divert that person down a new historical trajectory.

A final hypothesis, one of unidentified provenance, reroutes itself quite efficiently around the grandfather paradox. Non-existence theory suggests exactly that – a person would quite simply never exist if they altered their ancestry in ways that obstructed their own birth. They would still exist in person upon returning to the present, but any chain reactions associated with their actions would not be registered. Their “historical identity” would be gone.

So, will humans one day step across the same boundary that the neutrinos have? World-renowned astrophysicist Stephen Hawking believes that once spaceships can exceed the speed of light, humans could feasibly travel millions of years into the future in order to repopulate earth in the event of a forthcoming apocalypse. This is because, as the spaceships accelerate into the future, time would slow down around them (Hawking concedes that bygone eras are off limits – this would violate the fundamental rule that cause comes before effect).

Hawking is therefore reserved yet optimistic. “Time travel was once considered scientific heresy, and I used to avoid talking about it for fear of being labelled a crank. These days I’m not so cautious.”

Question

Choose the correct answer

Stephen Hawking has stated that:

Question

Do the following statements agree with the information given in the Reading Passage?

True - if the statement agrees with the information

False - if the statement contradicts the information

Not Given - if there is no information on this

It is unclear where neutrinos come from.

Neutrinos can pass through a person’s body without causing harm.

It took scientists between 50 and 70 nanoseconds to send the neutrinos from Geneva to Italy.

Researchers accounted for effects the moon might have had on the experiment.

The theory of relativity has often been called into question unsuccessfully.

This experiment could soon lead to some practical uses for time travel.

Question

Complete the table below.

Choose NO MORE THAN THREE WORDS from the passage for each answer.

Original Theorist | Theory | Principle
René Barjavel | Grandfather paradox | Time travel would allow for ______ that would actually make time travel impossible.
Igor Novikov | Self-consistency principle | It is only possible to alter history in ways that result in no ______.
______ | Many-worlds interpretation | Each possible event has an ______, so a time traveller changing the past would simply end up in a different branch of history than the one he left.
Unknown | ______ | If a time traveller changed the past to prevent his future life, he would not have a ______ as the person never existed.

IELTS Reading Academic 2

In this challenge, the questions are set up as they would be in the IELTS exam. 3 sections, with 2/3/4 questions in each section. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

Miles Davis - Icon and iconoclast

An iconoclast is somebody who challenges traditional beliefs or customs

A  At the age of thirteen, Miles Davis was given his first trumpet, lessons were arranged with a local trumpet player, and a musical odyssey began. These early lessons, paid for and supported by his father, had a profound effect on shaping Davis’ signature sound. Whereas most trumpeters of the era favoured the use of vibrato (a wobbly quiver in pitch inflected in the instrument’s tone), Davis was taught to play with a long, straight tone, a preference his instructor reportedly drilled into the young trumpeter with a rap on the knuckles every time Davis began using vibrato. This clear, distinctive style never left Davis. He continued playing with it for the rest of his career, once remarking, ‘If I can’t get that sound, I can’t play anything.’

B  Having graduated from high school in 1944, Davis moved to New York City, where he continued his musical education both in the clubs and in the classroom. His enrolment in the prestigious Juilliard School of Music was short-lived, however – he soon dropped out, criticising what he perceived as an over-emphasis on the classical European repertoire and a neglect of jazz. Davis did later acknowledge, however, that this time at the school was invaluable in terms of developing his trumpet-playing technique and giving him a solid grounding in music theory. Much of his early training took place in the form of jam sessions and performances in the clubs of 52nd Street, where he played alongside both up-and-coming and established members of the jazz pantheon such as Coleman Hawkins, Eddie ‘Lockjaw’ Davis, and Thelonious Monk.

C  In the late 1940s, Davis collaborated with nine other instrumentalists, including a French horn and a tuba player, to produce The Birth of Cool, an album now renowned for the inchoate sounds of what would later become known as ‘cool’ jazz. In contrast to popular jazz styles of the day, which featured rapid, rollicking beats, shrieking vocals, and short, sharp horn blasts, Davis’ album was the forerunner of a different kind of sound – thin, light horn-playing, hushed drums and a more restrained, formal arrangement. Although it received little acclaim at the time (the liner notes to one of Davis’ later recordings call it a ‘spectacular failure’), in hindsight The Birth of Cool has become recognised as a pivotal moment in jazz history, cementing – alongside his 1958 recording, Kind of Blue – Davis’ legacy as one of the most innovative musicians of his era.

D  Though Davis’ trumpet playing may have sounded effortless and breezy, this ease rarely carried over into the rest of his life. The early 1950s, in particular, were a time of great personal turmoil. After returning from a stint in Paris, Davis suffered from prolonged depression, which he attributed to the unravelling of a number of relationships, including his romance with a French actress and some musical partnerships that ruptured as a result of creative disputes. Davis was also frustrated by his perception that he had been overlooked by the music critics, who were hailing the success of his collaborators and descendants in the ‘cool’ tradition, such as Gerry Mulligan and Dave Brubeck, but who afforded him little credit for introducing the cool sound in the first place.

E  In the latter decades of his career, Davis broke out of exclusive jazz settings and began to diversify his output across a range of musical styles. In the 1960s, he was influenced by early funk performers such as Sly and the Family Stone, an influence that carried into the jazz-rock fusion genre – of which he was a frontrunner – in the 1970s. Electronic recording effects and electric instruments were incorporated into his sound. By the 1980s, Davis was pushing the boundaries further, covering pop anthems such as Cyndi Lauper’s Time After Time and Michael Jackson’s Human Nature, dabbling in hip hop, and even appearing in some movies.

F  Not everyone was supportive of Davis’ change of tune. While the recordings of his early career were universally applauded as linchpins of the jazz oeuvre, trumpeter Wynton Marsalis derided his fusion work as being ‘not true jazz’, and pianist Bill Evans denounced the ‘corrupting influence’ of record companies, noting that rock and pop ‘draw wider audiences’. In the face of this criticism Davis remained defiant, commenting that his earlier recordings were part of a moment in time that he had no ‘feel’ for any more. He firmly believed that remaining stylistically inert would have hampered his ability to develop new ways of producing music. From this perspective, Davis’ continual revamping of genre was not merely a rebellion, but an evolution, a necessary path that allowed him to release his full musical potential.

Question

Do the following statements agree with the views of the writer in The Reading Passage?

True - if the statement agrees with the views of the writer

False - if the statement contradicts the views of the writer

Not Given - if it is impossible to say what the writer thinks about this

 

Davis’ trumpet teacher wanted him to play with vibrato.

According to Davis, studying at Juilliard helped him to improve his musical abilities.

Playing in jazz clubs in New York was the best way to become famous.

The Birth of Cool featured music that was faster and louder than most jazz at the time.

Davis’ personal troubles had a negative effect on his trumpet playing.

Davis felt that his contribution to cool jazz had not been acknowledged.

Davis was a traditionalist who wanted to keep the jazz sound pure.

Question

The Reading Passage has six paragraphs, A–F.

Choose the correct heading for paragraphs A–F from the list of headings below.

E
C
F
B
D
A


A Bar at the Folies (Un bar aux Folies)

A  One of the most critically renowned paintings of the 19th-century modernist movement is the French painter Edouard Manet’s masterwork, A Bar at the Folies. Originally belonging to the composer Emmanuel Chabrier, it is now in the possession of The Courtauld Gallery in London, where it has also become a favourite with the crowds.

B  The painting is set late at night in a nineteenth-century Parisian nightclub. A barmaid stands alone behind her bar, fitted out in a black bodice that has a frilly white neckline, and with a spray of flowers sitting across her décolletage. She rests her hands on the bar and gazes out forlornly at a point just below the viewer, not quite making eye contact. Also on the bar are some bottles of liquor and a bowl of oranges, but much of the activity in the room takes place in the reflection of a mirror behind the barmaid. Through this mirror we see an auditorium, bustling with blurred figures and faces: men in top hats, a woman examining the scene below her through binoculars, another in long gloves, even the feet of a trapeze artist demonstrating acrobatic feats above his adoring crowd. In the foreground of the reflection a man with a thick moustache is talking with the barmaid.

C  Although the Folies (-Bergère) was an actual establishment in late nineteenth-century Paris, and the subject of the painting was a real barmaid who worked there, Manet did not attempt to recapture every detail of the bar in his rendition. The painting was largely completed in a private studio belonging to the painter, where the barmaid posed with a number of bottles, and this was then integrated with quick sketches the artist made at the Folies itself.

D  Even more confounding than Manet’s relaxed attention to detail, however, is the relationship in the painting between the activity in the mirrored reflection and that which we see in the unreflected foreground. In a similar vein to Diego Velazquez’ much earlier work Las Meninas, Manet uses the mirror to toy with our ideas about which details are true to life and which are not. In the foreground, for example, the barmaid is positioned upright, her face betraying an expression of lonely detachment, yet in the mirrored reflection she appears to be leaning forward and to the side, apparently engaging in conversation with her moustachioed customer. As a result of this, the customer’s stance is also altered. In the mirror, he should be blocked from view as a result of where the barmaid is standing, yet Manet has re-positioned him to the side. The overall impact on the viewer is one of a dreamlike disjuncture between reality and illusion.

E  Why would Manet engage in such deceit? Perhaps for that very reason: to depict two different states of mind or emotion. Manet seems to be conveying his understanding of the modern workplace, a place – from his perspective – of alienation, where workers felt torn from their ‘true’ selves and forced to assume an artificial working identity. What we see in the mirrored reflection is the barmaid’s working self, busy serving a customer. The front-on view, however, bears witness to how the barmaid truly feels at work: hopeless, adrift, and alone.

F  Ever since its debut at the Paris Salon of 1882, art historians have produced reams of books and journal articles disputing the positioning of the barmaid and patron in A Bar at the Folies. Some have even conducted staged representations of the painting in order to ascertain whether Manet’s seemingly distorted point of view might have been possible after all. Yet while academics are understandably drawn to the compositional enigma of the painting, the layperson is always likely to see the much simpler, more human story beneath. No doubt this is the way Manet would have wanted it.

Question

Complete each sentence with the correct ending, A–F, below.

11. Manet misrepresents the images in the mirror because he
12. Manet felt modern workers were alienated because they
13. Academics have re-constructed the painting in real life because they

4 / 8

A Bar at the Folies (Un bar aux Folies)


Question

Answer the questions below.

Choose NO MORE THAN THREE WORDS from the passage for each answer.

Who was the first owner of A Bar at the Folies?

What is the barmaid wearing?

Which room is seen at the back of the painting?

Who is performing for the audience?

Where did most of the work on the painting take place?

5 / 8

A Bar at the Folies (Un bar aux Folies)


Question

The Reading Passage has six paragraphs, A–F.

Which paragraph contains the following information?

the writer’s view of the idea that Manet wants to communicate
a statement about the popularity of the painting
a description of how Manet created the painting
examples to show why the bar scene is unrealistic
aspects of the painting that scholars are most interested in

6 / 8

Rock climbing timeline

A  In the early days of mountaineering, questions of safety, standards of practice, and environmental impact were not widely considered. The sport gained traction following the successful 1786 ascent of Mont Blanc, the highest peak in Western Europe, by two French mountaineers, Jacques Balmat and Michel-Gabriel Paccard. This event marked the beginning of modern mountaineering, but the sole consideration over the next hundred years was the success or failure of climbers in reaching the summit and claiming the prestige of having made the first ascent.

B  Toward the end of the nineteenth century, however, developments in technology spurred debate regarding climbing practices. Of particular concern in this era was the introduction of pitons (metal spikes that climbers hammer into the rock face for leverage) and the use of belaying techniques. A few, such as Italian climber Guido Rey, supported these methods as ways to render climbing less burdensome and more ‘acrobatic’. Others felt that they were only of value as a safety net if all else failed. Austrian Paul Preuss went so far as to eschew all artificial aids, scaling astonishing heights using only his shoes and his bare hands. Albert Mummery, a well-known British mountaineer and author who climbed the European Alps, and, more famously, the Himalayas, where he died at the age of 39 attempting a notoriously difficult ascent, developed the notion of ‘fair means’ as a kind of informal protocol by which the use of ‘walk-through’ guidebooks and equipment such as ladders and grappling hooks were discouraged.

C  By the 1940s, bolts had begun to replace pitons as the climber’s choice of equipment, and criticism surrounding their use was no less fierce. In 1948, when two American climbers scaled Mount Brussels in the Canadian Rockies using a small number of pitons and bolts, climber Frank Smythe wrote of their efforts: ‘I still regard Mount Brussels as unclimbed, and my feelings are no different from those I should have were I to hear that a helicopter had deposited its passenger on the summit of that mountain just so that he could boast that he had trodden an untrodden mountain top.’

D  Climbing purists aside, it was not until the 1970s that the general tide began to turn against bolting and pitons. The USA, and much of the western world, was waking up to the damage it had been causing to the planet, and environmentalist campaigns and new government policies were becoming widespread. This new awareness and sensitivity to environmental issues spilled over into the rock climbing community. As a result, a stripped-down style of rock climbing known as ‘clean climbing’ became widely adopted. Clean climbing helped preserve rock faces and, compared with older approaches, it was much simpler to practise. This was partly due to the hallmark of clean climbing – the use of nuts – which were favoured over bolts because they could be placed into the rock wall with one hand while climbers maintained their grip on the rock with the other.

E  Not everyone embraced the clean climbing movement, however. A decade later, debates over two more developments were erupting. The first related to the practice of chipping, in which climbers chip away pieces of rock in order to create tiny cracks in which to insert their fingers. The other major point of contention was a process that involves setting bolts in reverse from the top of the climb down. Rappel bolting makes almost any rock face climbable with relative ease, and as a result of this new technique, the sport has lost much of its risk factor and sense of pioneering spirit; indeed, it has become more about muscle power and technical mastery than a psychological trial of fearlessness under pressure. Because of this shift in focus, many amateur climbers have flocked to indoor climbing gyms, where the risk of serious harm is negligible.

F  Given the environmental damage rock climbing can cause, this may be a positive outcome. It is ironic that most rock climbers and mountaineers love the outdoors and have great respect for the majesty of nature and the impressive challenges she poses, but that in the pursuit of their goals they inevitably trample sensitive vegetation, damaging and disturbing delicate flora and lichens which grow on ledges and cliff faces. Two researchers from a Canadian university, Doug Larson and Michelle McMillan, have found that rock faces that are regularly climbed have lost up to 80% of the coverage and diversity of native plant species. If that were not bad enough, non-native species have also been inadvertently introduced, having been carried in on climbers’ boots.

G  This leaves rock climbing with an uncertain future. Climbers are not the only user group that wishes to enjoy the wilderness – hikers, mountain bikers and horseback riders visit the same areas, and more importantly, they are much better organised, with long-established lobby groups protecting their interests. With increased pressure on limited natural resources, it has been suggested that climbers put aside their differences over the ethics of various climbing techniques, and focus on the effect of their practices on the environment and their relationship with other users and landowners.

H  In any event, there can be no doubt that the era of the rock climber as a lone wolf or intrepid pioneer is over. Like many other forms of recreation, rock climbing has increasingly come within the fold of institutional efforts to curb dangerous behaviour and properly manage our natural environments. This may have spoiled the magic, but it has also made the sport safer and more sustainable, and governing bodies would do well to consider strengthening such efforts in the future.

belaying: fastening or controlling of a climber’s rope by wrapping it around a metal device or another person

Question

The Reading Passage has eight paragraphs, A–H.

Which paragraph contains the following information?

a reference to a climber who did not use any tools or ropes for assistance
an account of how politics affected rock climbing
a less dangerous alternative to climbing rock faces
examples of different types of people who use the outdoors for recreation
examples of the impact of climbers on ecosystems
a recommendation for better regulation

7 / 8

Rock climbing timeline


Question

 

8 / 8

Rock climbing timeline


Question

Complete the flow chart below.

Choose NO MORE THAN THREE WORDS from the passage for each answer.

Late 19th century

Some climbers discuss whether pitons and ropes should only be considered ________.

________ calls for guidelines based on unwritten rules which discourage climbing aids.

1940s

New equipment becomes controversial. Frank Smythe says that Mt Brussels is effectively ________ because of the techniques that were used in order to scale the mountain.

1970s

________ is more environmentally friendly. ________ are introduced as a climbing aid.

1980s – today

Climbers discuss the merits of new techniques for making hand holds, and also of ________. Many say that climbing is now a test of physical strength and ________, rather than of courage.


IELTS Reading General 1

In this challenge, the questions are set up as they would be in the IELTS exam. 3 sections, with 2/3/4 questions in each section. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

1 / 8

Emergency procedures
Revised July 2011

This applies to all persons on the school campus:

In cases of emergency (e.g. fire), find the nearest teacher who will send a messenger at full speed to the Office or inform the Office via phone ext. 99.

Procedure for evacuation

  1. Warning of an emergency evacuation will be marked by a number of short bell rings. (In the event of a power failure, this may be a hand-held bell or siren.)
  2. All class work will cease immediately.
  3. Students will leave their bags, books and other possessions where they are.
  4. Teachers will take the class rolls.
  5. Classes will vacate the premises using the nearest staircase. If these stairs are inaccessible, use the nearest alternative staircase. Do not use the lifts. Do not run.
  6. Each class, under the teacher’s supervision, will move in a brisk, orderly fashion to the paved quadrangle area adjacent to the car park.
  7. All support staff will do the same.
  8. The Marshalling Supervisor, Ms Randall, will be wearing a red cap and she will be waiting there with the master timetable and staff list in her possession.
  9. Students assemble in the quad with their teacher at the time of evacuation. The teacher will do a head count and check the roll.
  10. Each teacher sends a student to the Supervisor to report whether all students have been accounted for. After checking, students will sit down (in the event of rain or wet pavement they may remain standing).
  11. The Supervisor will inform the Office when all staff and students have been accounted for.
  12. All students, teaching staff and support personnel remain in the evacuation area until the All Clear signal is given.
  13. The All Clear will be a long bell ring or three blasts on the siren.
  14. Students will return to class in an orderly manner under teacher guidance.
  15. In the event of an emergency occurring during lunch or breaks, students are to assemble in their home-room groups in the quad and await their home-room teacher.

Question

1) In an emergency, a teacher will either phone the office or __________.
2) The signal for evacuation will normally be several __________.
3) If possible, students should leave the building by the __________.
4) They then walk quickly to the __________.
5) __________ will join the teachers and students in the quad.
6) Each class teacher will count up his or her students and mark __________.
7) After the __________, everyone may return to class.
8) If there is an emergency at lunchtime, students gather in the quad in __________ and wait for their teacher.

2 / 8

Read the text below and answer questions.

Community Education

Short Courses:  Business

Business Basics
Gain foundation knowledge for employment in an accounts position with bookkeeping and business basics through to intermediate level; suitable for anyone requiring knowledge from the ground up.
Code B/ED011
16th or 24th April 9am–4pm
Cost $420

Bookkeeping

This course will provide students with a comprehensive understanding of bookkeeping and a great deal of hands-on experience.
Code B/ED020
19th April 9am–2.30pm (one session only so advance bookings essential)
Cost $250

New Enterprise Module

Understand company structures, tax rates, deductions, employer obligations, profit and loss statements, GST and budgeting for tax.
Code B/ED030
15th or 27th May 6pm–9pm
Cost $105

Social Networking – the Latest Marketing Tool

This broad overview gives you the opportunity to analyse what web technologies are available and how they can benefit your organisation.
Code B/ED033
1st or 8th or 15th June 6pm–9pm
Cost $95

Communication

Take the fear out of talking to large gatherings of people. Gain the public-speaking experience that will empower you with better communication skills and confidence.
Code B/ED401
12th or 13th or 14th July 6pm–9pm
Cost $90

Question

Do the following statements agree with the information given in the text? Answer True, False or Not given to questions 9–14.

True if the statement agrees with the information
False if the statement contradicts the information
Not given if there is no information on this


9 Business Basics is appropriate for beginners.
10 Bookkeeping has no practical component.
11 Bookkeeping is intended for advanced students only.
12 The New Enterprise Module can help your business become more profitable.
13 Social Networking focuses on a specific website to help your business succeed.
14 The Communication class involves speaking in front of an audience.

3 / 8

Read the text below and answer questions.

Workplace dismissals

Before the dismissal
If an employer wants to dismiss an employee, there is a process to be followed. Instances of minor misconduct and poor performance must first be addressed through some preliminary steps.

Firstly, you should be given an improvement note. This will explain the problem, outline any necessary changes and offer some assistance in correcting the situation. Then, if your employer does not think your performance has improved, you may be given a written warning. The last step is called a final written warning which will inform you that you will be dismissed unless there are improvements in performance.  If there is no improvement, your employer can begin the dismissal procedure.

The dismissal procedure begins with a letter from the employer setting out the charges made against the employee. The employee will be invited to a meeting to discuss these accusations. If the employee denies the charges, he is given the opportunity to appear at a formal appeal hearing in front of a different manager. After this, a decision is made as to whether the employee will be let go or not.

Dismissals
Of the various types of dismissal, a fair dismissal is the best kind if an employer wants an employee out of the workplace. A fair dismissal is legally and contractually strong and it means all the necessary procedures have been correctly followed. In cases where an employee’s misconduct has been very serious, however, an employer may not have to follow all of these procedures. If the employer can prove that the employee’s behaviour was illegal, dangerous or severely wrong, the employee can be dismissed immediately: a procedure known as summary dismissal.

Sometimes a dismissal is not considered to have taken place fairly. One of these types is wrongful dismissal and involves a breach of contract by the employer. This could involve dismissing an employee without notice or without following proper disciplinary and dismissal procedures. Another type, unfair dismissal, is when an employee is sacked without good cause.

There is another kind of dismissal, known as constructive dismissal, which is slightly peculiar because the employee is not actually openly dismissed by the employer. In this case the employee is forced into resigning by an employer who tries to make significant changes to the original contract. This could mean an employee might have to work night shifts after originally signing on for day work, or he could be made to work in dangerous conditions.

Question

Complete the sentences below. Choose no more than three words from the text for each answer.


22 If an employee receives a __________, this means he will lose his job if his work does not get better.
23 If an employee does not accept the reasons for his dismissal, a __________ can be arranged.

4 / 8

Refer again to the text ‘Workplace dismissals’ above, then answer the questions.

Question

Look at the following descriptions and the list of terms in the box below. Match each description with the correct term A–E.

A  fair dismissal
B  summary dismissal
C  wrongful dismissal
D  unfair dismissal
E  constructive dismissal

The reason for an employee’s dismissal is not considered good enough.
An employee is asked to leave work straight away because he has done something really bad.
The reasons for an employee’s dismissal are acceptable by law and the terms of the employment contract.
An employer gets rid of an employee without keeping to conditions in the contract.
An employee is pressured to leave his job unless he accepts conditions that are very different from those agreed to in the beginning.

5 / 8

Read the text below and answer questions.

Beneficial work practices for the keyboard operator

A) Sensible work practices are an important factor in the prevention of muscular fatigue; discomfort or pain in the arms, neck, hands or back; or eye strain which can be associated with constant or regular work at a keyboard and visual display unit (VDU).

B) It is vital that the employer pays attention to the physical setting such as workplace design, the office environment, and placement of monitors as well as the organisation of the work and individual work habits. Operators must be able to recognise work-related health problems and be given the opportunity to participate in the management of these. Operators should take note of and follow the preventive measures outlined below.

C) The typist must be comfortably accommodated in a chair that is adjustable for height with a back rest that is also easily adjustable both for angle and height. The back rest and sitting ledge (with a curved edge) should preferably be cloth-covered to avoid excessive perspiration.

D) When the keyboard operator is working from a paper file or manuscript, it should be at the same distance from the eyes as the screen. The most convenient position can be found by using some sort of holder. Individual arrangement will vary according to whether the operator spends more time looking at the VDU or the paper – whichever the eyes are focused on for the majority of time should be put directly in front of the operator.

E) While keying, it is advisable to have frequent but short pauses of around thirty to sixty seconds to proofread. When doing this, relax your hands. After you have been keying for sixty minutes, you should have a ten minute change of activity. During this spell it is important that you do not remain seated but stand up or walk around. This period could be profitably used to do filing or collect and deliver documents.

F) Generally, the best position for a VDU is at right angles to the window. If this is not possible then glare from the window can be controlled by blinds, curtains or movable screens. Keep the face of the VDU vertical to avoid glare from overhead lighting.

G) Unsatisfactory work practices or working conditions may result in aches or pain. Symptoms should be reported to your supervisor early on so that the cause of the trouble can be corrected and the operator should seek medical attention.

Question

The text above has seven sections, A–G. Choose the correct heading for each section from the list of headings below.


6 / 8

Calisthenics

The world’s oldest form of resistance training

A) From the very first caveman to scale a tree or hang from a cliff face, to the mighty armies of the Greco-Roman empires and the gymnasiums of modern American high schools, calisthenics has endured and thrived because of its simplicity and utility. Unlike strength training which involves weights, machines or resistance bands, calisthenics uses only the body’s own weight for physical development.

B) Calisthenics enters the historical record at around 480 B.C., with Herodotus’ account of the Battle of Thermopylae. Herodotus reported that, prior to the battle, the god-king Xerxes sent a scout party to spy on his Spartan enemies. The scouts informed Xerxes that the Spartans, under the leadership of King Leonidas, were practising some kind of bizarre, synchronised movements akin to a tribal dance. Xerxes was greatly amused. His own army comprised over 120,000 men, while the Spartans had just 300. Leonidas was informed that he must retreat or face annihilation. The Spartans did not retreat, however, and in the ensuing battle they managed to hold Xerxes’ enormous army at bay for some time until reinforcements arrived. It turns out their tribal dance was not a superstitious ritual but a form of calisthenics by which they were building awe-inspiring physical strength and endurance.

C) The Greeks took calisthenics seriously not only as a form of military discipline and strength, but also as an artistic expression of movement and an aesthetically ideal physique. Indeed, the term calisthenics itself is derived from the Greek words for beauty and strength.  We know from historical records and images from pottery, mosaics and sculptures of the period that the ancient Olympians took calisthenics training seriously. They were greatly admired – and still are, today – for their combination of athleticism and physical beauty. You may have heard a friend whimsically sigh and mention that someone ‘has the body of a Greek god’. This expression has travelled through centuries and continents, and the source of this envy and admiration is the calisthenics method.

D) Calisthenics experienced its second golden age in the 1800s. This century saw the birth of gymnastics, an organised sport that uses a range of bars, rings, vaulting horses and balancing beams to display physical prowess. This period is also when the phenomenon of strongmen developed. These were people of astounding physical strength and development who forged nomadic careers by demonstrating outlandish feats of strength to stunned populations. Most of these men trained using hand balancing and horizontal bars, as modern weight machines had not yet been invented.

E) In the 1950s, Angelo Siciliano – who went by the stage name Charles Atlas – was crowned “The World’s Most Perfectly Developed Man”. Atlas’s own approach stemmed from traditional calisthenics, and through a series of mail order comic books he taught these methods to hundreds of thousands of children and young adults through the 1960s and 1970s. But Atlas was the last of a dying breed. The tides were turning, fitness methods were drifting away from calisthenics, and no widely-regarded proponent of the method would ever succeed him.

F) In the 1960s and 1970s calisthenics and the goal of functional strength combined with physical beauty was replaced by an emphasis on huge muscles at any cost. This became the sport of body building. Although body building’s pioneers were drawn from the calisthenics tradition, the sole goal soon became an increase in muscle size. Body building icons such as Arnold Schwarzenegger and Sergio Oliva were called ‘mass monsters’ because of their imposing physiques. Physical development of this nature was only attainable through the use of anabolic steroids, synthetic hormones which boosted muscle development while harming overall health. These body builders also relied on free weights and machines, which allowed them to target and bloat the size of individual muscles rather than develop a naturally proportioned body. Calisthenics, with its emphasis on physical beauty and a balance in proportions, had little to offer the mass monsters.

G) In this “bigger is better” climate, calisthenics was relegated to groups perceived to be vulnerable, such as women, people recuperating from injuries and school students. Although some of the strongest and most physically developed human beings ever to have lived acquired their abilities through the use of sophisticated calisthenics, a great deal of this knowledge was discarded and the method was reduced to nothing more than an easily accessible and readily available activity. Those who mastered the rudimentary skills of calisthenics could expect to graduate to weight training rather than advanced calisthenics.

H) In recent years, however, fitness trends have been shifting back toward the use of calisthenics. Bodybuilding approaches that promote excessive muscle development frequently lead to joint pain, injuries, unbalanced physiques and weak cardiovascular health. As a result, many of the newest and most popular gyms and programmes emphasise calisthenics-based methods instead. Modern practices often combine elements from a number of related traditions such as yoga, Pilates, kettlebell training, gymnastics and traditional Greco-Roman calisthenics. Many people are keen to recover the original Greek vision of physical beauty, strength and harmony of the mind-body connection.

Question

Which of the following statements is true according to the passage?

7 / 8

Refer again to the text ‘Calisthenics’ above, then answer the questions.

Question

The text has eight paragraphs, A–H. Which paragraph contains the following information?

the origin of the word ‘calisthenics’
a multidisciplinary approach to all-round health and strength
the first use of calisthenics as a training method
a reference to travelling showmen who displayed their strength for audiences
the last popular supporter of calisthenics
a medical substance used to increase muscle mass and strength
reasons for the survival of calisthenics throughout the ages

8 / 8

Refer again to the text ‘Calisthenics’ above, then answer the questions.

Question

During the sixties and seventies, attaining huge muscles became more important than __________ or having an attractive-looking body. The first people to take up this new sport of body building had a background in calisthenics but the most famous practitioners became known as __________ on account of the impressive size of their muscles. Drugs and mechanical devices were used to develop individual muscles to a monstrous size.

Calisthenics then became the domain of ‘weaker’ people: females, children and those recovering from __________. Much of the advanced knowledge about calisthenics was lost and the method was subsequently downgraded to the status of a simple, user-friendly activity. Once a person became skilled at this, he would progress to __________.

Currently a revival of calisthenics is under way, as extreme muscle building can harm the body, leaving it sore, out of balance, and in poor __________.


IELTS Reading General 2

In this challenge, the questions are set up as they would be in the IELTS exam. 3 sections, with 2/3/4 questions in each section. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

1 / 7

Making the Cut

When we talk about how films convey meaning we tend to refer to acting, music, dialogue, props and narrative developments, but often forgotten is the visual essence of a film itself, which is the cutting together of moving images – “motion pictures” – each one carefully tailored to meet a particular need or purpose.

Most films and many important scenes within them open with an establishing shot. Typically this shot precedes our introduction to the main characters by presenting us with the locale in which the scene’s action or dialogue is about to occur. Occasionally, however, a director will use an establishing shot with another goal in mind. An opening view of a thousand soldiers parading in synchronized fashion might have little to reveal about the film’s geography, for example, but it does inform the audience that ideas about discipline and conformity are likely to arise in the material that follows. In this way, establishing shots can also introduce a film’s theme.

After an establishing shot, most directors choose a long shot in order to progress the narrative. This type of shot displays the entire human physique in relation to its surroundings, so it is ideal for bridging the narrative divide between location and individual activity. A long shot is therefore often used to centre on a pivotal character in the scene. A film might begin with an establishing shot of bleak, snowy mountains and then cut to a long shot of a lone skier, for example, or a sweeping panorama of a bustling metropolis could segue into a street view of someone entering a building.

From here the door is wide open for directors to choose whichever shots will enhance the narration. Close-up shots are popular in suspense sequences – a handgun being loaded, a doorknob being turned, the startled expression of someone freshly roused from sleep. Confining the visual field in this way adds to the viewer’s apprehension. Dramatic films will probably want to emphasise character interaction. The third-person shot – in which a third of the frame consists of a rear view of a person’s upper torso and head – can be effectively utilised here. This shot encourages us to actually slip into the persona of that character, and vicariously live through their experiences.

A number of special-purpose shots are used quite rarely – once, if at all, in most films. One such type is the money shot. A money shot has no specific technical features or content, but is typically the most expensive element of a film’s production values and comes with a cost massively disproportionate to its screen time (which may be limited to just a brief glimpse). Because of its spectacular, extravagant nature, however, the money shot is a major revenue generator and is widely exploited for use in promotional materials. Money shots are most popular amongst – but not limited to – high visual-impact genres such as action, war, thriller and disaster films.

But more affordable shots can also add an interesting twist to the story. The Dutch tilt can depict a character in a state of psychological unease by shooting them from a jaunty angle. In this way they appear literally and metaphorically unbalanced. A trunk shot often shows a small group of characters peering into the trunk of a vehicle. It is filmed from a perspective within the trunk itself, although frequently to avoid camera damage directors will simply place a detached piece of trunk door in the corner of the frame. This shot was a favourite of Quentin Tarantino and has been used in many crime and gangster films, often as a first-person shot through the eyes of someone who is tied up and lying inside the vehicle. A shot that has gained traction in avant-garde circles is the extreme close-up. This is when a single detail of the subject fills up the entire frame. Alfred Hitchcock famously used an extreme close-up in ‘Psycho’, when he merged a shot of a shower drain into a view of a victim’s eye. It has also been used in Westerns to depict tension between duelling gunmen eyeing each other up before a shoot out.

Not all types of shots are used in order to enhance the narrative. Sometimes financial restrictions or technical limitations are a more pressing concern, especially for low-budget film-makers. In the early murder mysteries of the 1920s and 1930s, the American shot – which acquired its name from French critics who referred to a “plan américain” – was used widely for its ability to present complex dialogue scenes without alterations in camera position. Using the American shot, directors have their cast assemble in single file while discussing key plot points. The result is an efficiently produced scene that conveys all relevant information, but the trade-off is a natural tone. Because few people in real life would ever associate in such an awkward manner, American shots tend to result in a hammy, stiff feel to the production.

Question

Look at the following descriptions and the list of terms below.

Match each description with the correct term.

Lone pedestrian, walking a city street
A single person, head and shoulders, off-centre angle shot
Two people, only one facing camera, head and shoulders shot
A group of people, full length body shot
Distance shot of central city, from the air
A flaming bus, about to crash

2 / 7

Volunteers

Thank you for volunteering to work one-on-one with some of the students at our school who need extra help.

Smoking policy

Smoking is prohibited by law in the classrooms and anywhere on the school grounds.

Safety and Health

Volunteers are responsible for their own personal safety and should notify the school of any pre-existing medical conditions. Prescription and any other medications that you normally carry with you must be handed in to the school nurse on arrival and collected on departure. If you require them, the nurse will dispense them to you in her office.

Sign-in

A sign-in book is located at office reception. Please sign this register every time you come to the school. This is important for insurance purposes and emergency situations.

After signing the book, collect a Visitor’s badge from the office. This must be worn at all times when you are on school premises. Remember to return the badge afterwards.

Messages

Teachers will communicate with volunteers via telephone, email or messages left at the office. Always ask for messages. You may communicate with teachers in the same way – the preferred method is to leave a memo in the relevant teacher’s pigeonhole.  These can be found at the end of the corridor in the staffroom block.

Work hours

We understand that your time commitment is entirely voluntary and therefore flexible. If your personal schedule should change and this affects your availability, please contact the Co-ordinator for Volunteers at the school on extension 402; alternatively, you could drop in to her office situated in F block.

Role of the Co-ordinator

The Co-ordinator is responsible for matching volunteer tutors with students, organising tutorial rooms, ensuring student attendance and overseeing volunteer tutor training. If you encounter any problems, contact her as above.

Question

Do the following statements agree with the information given in the text on the previous page?

True - if the statement agrees with the information

False - if the statement contradicts the information

Not Given - if there is no information on this

 

As a volunteer, you will be helping students individually.

You may smoke in the playground.

You cannot take any medicine while at the school.

If you forget to sign the register, you won’t be insured for accidents.

The best way of communicating with teachers is in writing.

You can choose your own hours of work.

The co-ordinator keeps student attendance rolls.

3 / 7

Making the Cut


Answer the questions below:

Choose no more than three words from the passage for each answer.

34. Which two aspects of story can be shown with an establishing shot?

35. What does a long shot focus our attention on?

36. What do close-ups restrict in order to make audiences nervous?

37. What does a third-person shot place importance on?

4 / 7

Making the Cut


 

Question

Complete the summary below.

Choose no more than two words from the text for each answer.

 

Some shots are not used very often. Money shots have a high ________ considering that they only last for a few seconds. The money shot brings in a lot of money, however, and is an important part of the film’s ________. Other, less expensive shots can still be fascinating: a character can be made to seem ________ in both mind and body when filmed with a Dutch tilt, for instance.

5 / 7

Writing Effective Emails

Follow these simple rules to make a positive impression and get an appropriate response.

A) Like a headline in a newspaper: it should grab the recipient’s attention and specify what the message is about – use a few well-chosen words. If the email is one of a series e.g. a weekly newsletter, include the date in the subject line. Never leave it blank.

B) If you need to email someone about several different issues, write a separate email for each subject. This allows the recipient to reply to each one individually in a timely manner. For instance, one subject might be dealt with quickly while another could involve some research. If you have several related points, put them all in the same email but present each point in a numbered or bulleted paragraph.

C) Your email should be clear and concise. Sentences should be short and to the point. The purpose of the message should be outlined in the first paragraph and the body should contain all of the relevant information.

D) Be sure to include a ‘call to action’ – a phone call or a follow-up appointment perhaps. To ensure a prompt reply, incorporate your contact information – name, title, company, phone/fax numbers or extensions, even your business address if necessary. Even internal messages must have contact information.

E) Only use this technique for very short messages or reminders where all the relevant information can fit in the subject line. Write EOM at the end of the line to indicate that the recipient doesn’t have to open the email.

F) Emails, even internal ones, should not be too informal – after all, they are written forms of communication. Use your spell-check and avoid slang.

Questions 22–27

The text has six sections, A–F.

Choose the correct heading for each section, A–F, from the list of headings below.


6 / 7

Conditions of employment

Weekly hours of work – 40 hours per week at the ordinary hourly rate of pay for most full-time employees, plus reasonable additional hours (penalty rates¹ apply). Part-time employees work a regular number of hours and days each week, but fewer hours than full-time workers. Casual employees are employed on an hourly or daily basis.

Entitlements (full-time employees):

Parental leave – up to 12 months’ unpaid leave for maternity, paternity and adoption related leave.

Sick leave – up to 10 days’ paid sick leave per year; more than 4 continuous days requires a medical certificate.

Annual leave – 4 weeks’ paid leave per annum, plus an additional week for shift workers.

Public holidays – a paid day off on a public holiday, except where reasonably requested to work. Employees working on public holidays are entitled to 15% above their normal hourly rate.

Notice of termination – 2 weeks’ notice of termination (3 weeks if the employee is more than 55 years old and has at least 2 years of continuous service)

Note:

The entitlements you receive will depend on whether you are employed on a full-time, part-time or casual basis.

If you work part-time, you should receive all the entitlements of a full-time employee but on a pro-rata or proportional basis.

If you are a casual worker, you do not have rights to any of the above entitlements nor penalty payments. Casual workers have no guarantee of hours to be worked and they do not have to be given advance notice of termination.

¹ Penalty rate = a higher rate of pay to compensate for working overtime or outside normal hours e.g. night-time or on public holidays.

Question

Do the following statements agree with the information given in the text?

True - if the statement agrees with the information

False - if the statement contradicts the information

Not Given - if there is no information on this

 

Part-time workers are entitled to a higher rate of pay if they work more than their usual number of hours per week.

Casual workers may be hired by the hour or by the day.

A full-timer who takes a year off to have a baby can return to the same employer.

A full-time worker needs a doctor’s note if he is sick for 4 days in a row.

A full-time night-shift worker is entitled to 5 weeks’ paid holiday each year.

Any workers over 55 are entitled to 3 weeks’ notice of termination.

Casual workers can be dismissed without notice.

7 / 7

Camping in the Bush

Minimal impact bushwalking

Responsible campers observe minimal impact bushwalking practices. This is a code of ethics and behaviour aimed at preserving the natural beauty of bushwalking areas.

Planning 

Good planning is the key to safe and successful camping trips. Obtaining a camping permit in advance of leaving to camp out overnight in a national park is obligatory. Bookings are also compulsory for some parks. There could be limits on group sizes in some parks. Occasionally campsites may be closed owing to bushfire danger or for other reasons. Always obtain permission from the owner prior to crossing private property.

Equipment

As well as your usual bushwalking gear, you will need the right equipment for camping.

A fuel stove and fuel for cooking are essential: not only is a stove safer, faster and cleaner, but it is also easier to use in wet weather. It is recommended that you pitch a free-standing tent which requires few pegs and therefore has less ecological impact. Take a sleeping mat, if you have one, to put your sleeping bag on for a more comfortable night’s sleep. You will also need a hand trowel to bury human waste – for proper sanitation and hygiene.

Campfires

The traditional campfire actually causes a huge amount of environmental damage. If you gather firewood, you are removing the vital habitat of insects, reptiles, birds and small mammals. When campfires lead to bushfires, they create enormous danger to native bush inhabitants and bushwalkers alike and result in destruction of the environment. Under no circumstances should you light a fire in the bush.

Campsites

Erect your tent at an existing site if possible; otherwise try to find a spot where you won’t damage vegetation. Never cut branches or move rocks or disturb the soil unnecessarily. Aim to leave your campsite as you found it or even cleaner.

Rubbish

Remove all rubbish – carry it out with you. Don’t attempt to burn or bury rubbish because this creates a fire hazard and/or disturbs the soil. Animals can dig up buried rubbish and scatter it about. Never feed the local wildlife – carry out all food scraps as these disturb the natural nutrient balance and can create weed problems.

Walk safely

Keep on the track. Wear footwear suitable for the terrain. Take a map.

Question

Classify the following behaviours as something that campers

must do

may do

must not do

 

get the landowner’s consent before walking across his land

use a sleeping mat

make a campfire in the bush

feed the birds

use a free-standing tent

dig a hole to bury rubbish in

get authorisation before setting out to camp in a national park


IELTS Reading Academic 3

In this challenge, the questions are set up as they would be in the IELTS exam. 3 sections, with 2/3/4 questions in each section. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

1 / 10

The Rufous Hare-Wallaby

The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, ‘mala’. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced - indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout historical mala range - but no other traces were found.

Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first, it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wild-fire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild.

Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young - known as a joey - in her pouch for about 15 weeks, and she can have more than one joey at the same time.

In the early 1980s, there were enough mala in the captive population to make it feasible to start a reintroduction programme. But first, it was necessary to discuss this with the leaders of the Yapa people. Traditionally, the mala had been an important animal in their culture, with strong medicinal powers for old people. It had also been an important food source, and there were concerns that any mala returned to the wild would be killed for the pot. And so, in 1980, a group of key Yapa men was invited to visit the proposed reintroduction area. The skills and knowledge of the Yapa would play a significant and enduring role in this and all other mala projects.

With the help of the local Yapa, an electric fence was erected around 250 acres of suitable habitat, about 300 miles northwest of Alice Springs so that the mala could adapt while protected from predators. By 1992, there were about 150 mala in their enclosure, which became known as the Mala Paddock. However, all attempts to reintroduce mala from the paddocks into the unfenced wild were unsuccessful, so in the end the reintroduction programme was abandoned. The team now faced a situation where mala could be bred, but not released into the wild again.

Thus, in 1993, a Mala Recovery Team was established to boost mala numbers, and goals for a new programme were set: the team concentrated on finding suitable predator-free or predator-controlled conservation sites within the mala’s known range. Finally, in March 1999, twelve adult females, eight adult males, and eight joeys were transferred from the Mala Paddock to Dryandra Woodland in Western Australia. Then, a few months later, a second group was transferred to Trimouille, an island off the coast of Western Australia. First, it had been necessary to rid the island of rats and cats - a task that had taken two years of hard work.

Six weeks after their release into this conservation site, a team returned to the island to find out how things were going. Each of the malas had been fitted with a radio collar that transmits for about 14 months, after which it falls off. The team was able to locate 29 out of the 30 transmitters - only one came from the collar of a mala that had died of unknown causes. So far the recovery programme had gone even better than expected.

Today, there are many signs suggesting that the mala population on the island is continuing to do well.

Question

Complete the flow chart below.

Choose NO MORE THAN THREE WORDS AND/OR A NUMBER from the passage for each answer.

The Wild Australian mala

Distant past: total population of up to ________ in desert and semi-desert regions.

Populations of malas were destroyed by ________.

1964/1976: two surviving colonies were discovered.

Scientists ________ the colonies.

1987: one of the colonies was completely destroyed.

1991: the other colony was destroyed by ________.

The wild mala was declared ________.

2 / 10

The Rufous Hare-Wallaby


Do the following statements agree with the information given in the Reading Passage?

TRUE          if the statement agrees with the information
FALSE         if the statement contradicts the information
NOT GIVEN if there is no information on this

Natural defences were sufficient to protect the area called Mala Paddock.
Scientists eventually gave up their efforts to release captive mala into the unprotected wild.
The mala population which was transferred to Dryandra Woodland quickly increased in size.
Scientists were satisfied with the initial results of the recovery programme.

3 / 10

The Rufous Hare-Wallaby

The Rufous Hare-Wallaby is a species of Australian kangaroo, usually known by its Aboriginal name, ‘mala’. At one time, there may have been as many as ten million of these little animals across the arid and semi-arid landscape of Australia, but their populations, like those of so many other small endemic species, were devastated when cats and foxes were introduced - indeed, during the 1950s it was thought that the mala was extinct. But in 1964, a small colony was found 450 miles northwest of Alice Springs in the Tanami Desert. And 12 years later, a second small colony was found nearby. Very extensive surveys were made throughout the mala’s historical range - but no other traces were found.

Throughout the 1970s and 1980s, scientists from the Parks and Wildlife Commission of the Northern Territory monitored these two populations. At first, it seemed that they were holding their own. Then in late 1987, every one of the individuals of the second and smaller of the wild colonies was killed. From examination of the tracks in the sand, it seemed that just one single fox had been responsible. And then, in October 1991, a wildfire destroyed the entire area occupied by the remaining colony. Thus the mala was finally pronounced extinct in the wild.

Fortunately, ten years earlier, seven individuals had been captured, and had become the founders of a captive breeding programme at the Arid Zone Research Institute in Alice Springs; and that group had thrived. Part of this success is due to the fact that the female can breed when she is just five months old and can produce up to three young a year. Like other kangaroo species, the mother carries her young - known as a joey - in her pouch for about 15 weeks, and she can have more than one joey at the same time.

Question

Answer the questions below.

Choose NO MORE THAN THREE WORDS AND/OR A NUMBER from the passage for each answer.

At what age can female malas start breeding?
For about how long do young malas stay inside their mother’s pouch?
Apart from being a food source, what value did malas have for the Yapa people?
What was the Yapa’s lasting contribution to the mala reintroduction programme?

4 / 10

A. In the second half of the seventeenth century, Russian authorities began implementing controls at the borders of their empire to prevent the importation of plague, a highly infectious and dangerous disease. Information on disease outbreaks occurring abroad was regularly reported to the tsar’s court through various means, including commercial channels (travelling merchants), military personnel deployed abroad, undercover agents, the network of Imperial Foreign Office embassies and representations abroad, and the customs offices. For instance, the heads of customs offices were instructed to question foreigners entering Russia about possible epidemics of dangerous diseases in their respective countries.

B. If news of an outbreak came from abroad, relations with the affected country were suspended. For instance, foreign vessels were not allowed to dock in Russian ports if there was credible information about the existence of epidemics in countries from whence they had departed. In addition, all foreigners entering Russia from those countries had to undergo quarantine. In 1665, after receiving news about a plague epidemic in England, Tsar Alexei wrote a letter to King Charles II in which he announced the cessation of Russian trade relations with England and other foreign states. These protective measures appeared to have been effective, as the country did not record any cases of plague during that year and in the next three decades. It was not until 1692 that another plague outbreak was recorded in the Russian province of Astrakhan. This epidemic continued for five months and killed 10,383 people, or about 65 percent of the city’s population. By the end of the seventeenth century, preventative measures had been widely introduced in Russia, including the isolation of persons ill with plague, the imposition of quarantines, and the distribution of explanatory public health notices about plague outbreaks.

C. During the eighteenth century, although none of the occurrences was of the same scale as in the past, plague appeared in Russia several times. For instance, from 1703 to 1705, a plague outbreak that had ravaged Istanbul spread to the Podolsk and Kiev provinces in Russia, and then to Poland and Hungary. After defeating the Swedes in the battle of Poltava in 1709, Tsar Peter I (Peter the Great) dispatched part of his army to Poland, where the plague had been raging for two years. Despite preventive measures, the disease spread among the Russian troops. In 1710, the plague reached Riga (then part of Sweden, now the capital of Latvia), where it was active until 1711 and claimed 60,000 lives. During this period, the Russians besieged Riga and, after the Swedes had surrendered the city in 1710, the Russian army lost 9,800 soldiers to the plague. Russian military chronicles of the time note that more soldiers died of the disease after the capture of Riga than from enemy fire during the siege of that city.

D. Tsar Peter I imposed strict measures to prevent the spread of plague during these conflicts. Soldiers suspected of being infected were isolated and taken to areas far from military camps. In addition, camps were designed to separate divisions, detachments, and smaller units of soldiers. When plague reached Narva (located in present-day Estonia) and threatened to spread to St. Petersburg, the newly built capital of Russia, Tsar Peter I ordered the army to cordon off the entire boundary along the Luga River, including temporarily halting all activity on the river.

In order to prevent the movement of people and goods from Narva to St Petersburg and Novgorod, roadblocks and checkpoints were set up on all roads. The tsar’s orders were rigorously enforced, and those who disobeyed were hanged.

E. However, although the Russian authorities applied such methods to contain the spread of the disease and limit the number of victims, all of the measures had a provisional character: they were intended to respond to a specific outbreak, and were not designed as a coherent set of measures to be implemented systematically at the first sign of plague. The advent of such a standard response system came a few years later.

F. The first attempts to organise procedures and carry out proactive steps to control plague date to the aftermath of the 1727-1728 epidemic in Astrakhan. In response to this, the Russian imperial authorities issued several decrees aimed at controlling the future spread of plague. Among these decrees, the ‘Instructions for Governors and Heads of Townships’ required that all governors immediately inform the Senate - a government body created by Tsar Peter I in 1711 to advise the monarch - if plague cases were detected in their respective provinces.

Furthermore, the decree required that governors ensure the physical examination of all persons suspected of carrying the disease and their subsequent isolation. In addition, it was ordered that sites where plague victims were found had to be encircled by checkpoints and isolated for the duration of the outbreak. These checkpoints were to remain operational for at least six weeks.

The houses of infected persons were to be burned along with all of the personal property they contained, including farm animals and cattle. The governors were instructed to inform the neighbouring provinces and cities about every plague case occurring on their territories. Finally, letters brought by couriers were heated above a fire before being copied.

G. The implementation by the authorities of these combined measures demonstrates their intuitive understanding of the importance of the timely isolation of infected people to limit the spread of plague.

Question

The Reading Passage has SEVEN sections, A-G.

Choose the correct heading for each section from the list of headings below.

A
F
B
E
C
D

5 / 10

Measures to combat infectious disease in tsarist Russia

Questions 20-21

Choose TWO

Which TWO statements are made about Russia in the early eighteenth century?

6 / 10

Question

Choose TWO

Which TWO measures did Russia take in the seventeenth century to avoid plague outbreaks?

7 / 10

Question

Complete the sentences below.

Choose ONE WORD ONLY from the passage for each answer.

An outbreak of plague in .......... prompted the publication of a coherent preventative strategy.
Provincial governors were ordered to burn the .......... and possessions of plague victims.
Correspondence was held over a .......... prior to copying it.

8 / 10

SAVING LANGUAGE

For the first time, linguists have put a price on language. To save a language from extinction isn’t cheap - but more and more people are arguing that the alternative is the death of communities.

There is nothing unusual about a single language dying. Communities have come and gone throughout history, and with them their language. But what is happening today is extraordinary, judged by the standards of the past. It is language extinction on a massive scale. According to the best estimates, there are some 6,000 languages in the world. Of these, about half are going to die out in the course of the next century: that’s 3,000 languages in 1,200 months. On average, there is a language dying out somewhere in the world every two weeks or so.

How do we know? In the course of the past two or three decades, linguists all over the world have been gathering comparative data. If they find a language with just a few speakers left, and nobody is bothering to pass the language on to the children, they conclude that the language is bound to die out soon. And we have to draw the same conclusion if a language has fewer than 100 speakers. It is not likely to last very long. A 1999 survey shows that 97 per cent of the world’s languages are spoken by just four per cent of the people.

It is too late to do anything to help many languages, where the speakers are too few or too old, and where the community is too busy just trying to survive to care about their language. But many languages are not in such a serious position. Often, where languages are seriously endangered, there are things that can be done to give new life to them. It is called revitalisation.

Once a community realises that its language is in danger, it can start to introduce measures which can genuinely revitalise. The community itself must want to save its language. The culture of which it is a part must have a respect for minority languages. There needs to be funding, to support courses, materials, and teachers. And there need to be linguists, to get on with the basic task of putting the language down on paper. That’s the bottom line: getting the language documented - recorded, analysed, written down. People must be able to read and write if they and their language are to have a future in an increasingly computer-literate civilisation.

But can we save a few thousand languages, just like that? Yes, if the will and funding were available. It is not cheap, getting linguists into the field, training local analysts, supporting the community with language resources and teachers, compiling grammars and dictionaries, writing materials for use in schools. It takes time, lots of it, to revitalise an endangered language. Conditions vary so much that it is difficult to generalise, but a figure of $100,000 a year per language cannot be far from the truth. If we devoted that amount of effort over three years for each of 3,000 languages, we would be talking about some $900 million.

There are some famous cases which illustrate what can be done. Welsh, alone among the Celtic languages, is not only stopping its steady decline towards extinction but showing signs of real growth. Two Language Acts protect the status of Welsh now, and its presence is increasingly in evidence wherever you travel in Wales.

On the other side of the world, Maori in New Zealand has been maintained by a system of so-called ‘language nests’, first introduced in 1982. These are organisations which provide children under five with a domestic setting in which they are intensively exposed to the language. The staff are all Maori speakers from the local community. The hope is that the children will keep their Maori skills alive after leaving the nests, and that as they grow older they will, in turn, become role models to a new generation of young children. There are cases like this all over the world. And when the reviving language is associated with a degree of political autonomy, the growth can be especially striking, as shown by Faroese, spoken in the Faroe Islands, after the Islanders received a measure of autonomy from Denmark.

In Switzerland, Romansch was facing a difficult situation, spoken in five very different dialects, with small and diminishing numbers, as young people left their community for work in the German-speaking cities. The solution here was the creation in the 1980s of a unified written language for all these dialects. Romansch Grischun, as it is now called, has official status in parts of Switzerland, and is being increasingly used in spoken form on radio and television.

A language can be brought back from the very brink of extinction. The Ainu language of Japan, after many years of neglect and repression, had reached a stage where there were only eight fluent speakers left, all elderly. However, new government policies brought fresh attitudes and a positive interest in survival. Several ‘semi-speakers’ - people who had become unwilling to speak Ainu because of the negative attitudes of Japanese speakers - were prompted to become active speakers again. There is fresh interest now and the language is more publicly available than it has been for years.

If good descriptions and materials are available, even extinct languages can be resurrected. Kaurna, from South Australia, is an example. This language had been extinct for about a century, but had been quite well documented. So, when a strong movement grew for its revival, it was possible to reconstruct it. The revised language is not the same as the original, of course. It lacks the range that the original had, and much of the old vocabulary. But it can nonetheless act as a badge of present-day identity for its people. And as long as people continue to value it as a true marker of their identity, and are prepared to keep using it, it will develop new functions and new vocabulary, as any other living language would do.

It is too soon to predict the future of these revived languages, but in some parts of the world they are attracting precisely the range of positive attitudes and grass roots support which are the preconditions for language survival. In such unexpected but heart-warming ways might we see the grand total of languages in the world minimally increased.

Question

Match the languages with the statements below which describe how a language was saved.

Written samples of the language permitted its revitalisation.
People were encouraged to view the language with less prejudice.
A merger of different varieties of the language took place.
The region in which the language was spoken gained increased independence.
Language immersion programmes were set up for sectors of the population.

Question

The list below gives some of the factors that are necessary to assist the revitalisation of a language within a community.

Which THREE of the factors are mentioned by the writer of the text?

Question

Do the following statements agree with the views of the writer in Reading Passage 278?

In boxes 28-32 on your answer sheet write

YES         if the statement agrees with the writer’s views
NO           if the statement contradicts the writer’s views
NOT GIVEN if it is impossible to say what the writer thinks about this

The rate at which languages are becoming extinct has increased.

Research on the subject of language extinction began in the 1990s.

In order to survive, a language needs to be spoken by more than 100 people.

Certain parts of the world are more vulnerable than others to language extinction.

Saving language should be the major concern of any small community whose language is under threat.

IELTS Reading Academic 4

In this challenge, the questions are set up as they would be in the IELTS exam. 3 sections, with 2/3/4 questions in each section. Each section should take 20 minutes, and you will have 1 hour to answer all the questions.

Indoor Pollution

Since the early eighties, we have been only too aware of the devastating effects of large-scale environmental pollution. Such pollution is generally the result of poor government planning in many developing nations or the short-sighted, selfish policies of the already industrialised countries which encourage a minority of the world’s population to squander the majority of its natural resources.

While events such as the deforestation of the Amazon jungle or the nuclear disaster in Chernobyl continue to receive high media exposure, as do acts of environmental sabotage, it must be remembered that not all pollution is on this grand scale. A large proportion of the world’s pollution has its source much closer to home. The recent spillage of crude oil from an oil tanker accidentally discharging its cargo straight into Sydney Harbour not only caused serious damage to the harbour foreshores but also created severely toxic fumes which hung over the suburbs for days and left the angry residents wondering how such a disaster could have been allowed to happen.

Avoiding pollution can be a full-time job. Try not to inhale traffic fumes; keep away from chemical plants and building sites; wear a mask when cycling. It is enough to make you want to stay at home. But that, according to a growing body of scientific evidence, would also be a bad idea. Research shows that levels of pollutants such as hazardous gases, particulate matter and other chemical ‘nasties’ are usually higher indoors than out, even in the most polluted cities. Since the average American spends 18 hours indoors for every hour outside, it looks as though many environmentalists may be attacking the wrong target.

The latest study, conducted by two environmental engineers, Richard Corsi and Cynthia Howard-Reed, of the University of Texas in Austin, and published in Environmental Science and Technology, suggests that it is the process of keeping clean that may be making indoor pollution worse. The researchers found that baths, showers, dishwashers and washing machines can all be significant sources of indoor pollution, because they extract trace amounts of chemicals from the water that they use and transfer them to the air.

Nearly all public water supplies contain very low concentrations of toxic chemicals, most of them left over from the otherwise beneficial process of chlorination. Dr. Corsi wondered whether they stay there when water is used, or whether they end up in the air that people breathe. The team conducted a series of experiments in which known quantities of five such chemicals were mixed with water and passed through a dishwasher, a washing machine, a shower head inside a shower stall or a tap in a bath, all inside a specially designed chamber. The levels of chemicals in the effluent water and in the air extracted from the chamber were then measured to see how much of each chemical had been transferred from the water into the air.

The degree to which the most volatile elements could be removed from the water, a process known as chemical stripping, depended on a wide range of factors, including the volatility of the chemical, the temperature of the water and the surface area available for transfer. Dishwashers were found to be particularly effective: the high-temperature spray, splashing against the crockery and cutlery, results in a nasty plume of toxic chemicals that escape when the door is opened at the end of the cycle.

In fact, in many cases, the degree of exposure to toxic chemicals in tap water by inhalation is comparable to the exposure that would result from drinking the stuff. This is significant because many people are so concerned about water-borne pollutants that they drink only bottled water, worldwide sales of which are forecast to reach $72 billion by next year. Dr Corsi’s results suggest that they are being exposed to such pollutants anyway simply by breathing at home.

The aim of such research is not, however, to encourage the use of gas masks when unloading the washing. Instead, it is to bring a sense of perspective to the debate about pollution. According to Dr Corsi, disproportionate effort is wasted campaigning against certain forms of outdoor pollution, when there is as much or more cause for concern indoors, right under people’s noses.

Using gas cookers or burning candles, for example, both result in indoor levels of carbon monoxide and particulate matter that are just as high as those to be found outside, amid heavy traffic. Overcrowded classrooms whose ventilation systems were designed for smaller numbers of children frequently contain levels of carbon dioxide that would be regarded as unacceptable on board a submarine. ‘New car smell’ is the result of high levels of toxic chemicals, not cleanliness. Laser printers, computers, carpets and paints all contribute to the noxious indoor mix.

The implications of indoor pollution for health are unclear. But before worrying about the problems caused by large-scale industry, it makes sense to consider the small-scale pollution at home and welcome international debate about this. Scientists investigating indoor pollution will gather next month in Edinburgh at the Indoor Air conference to discuss the problem. Perhaps unwisely, the meeting is being held indoors.

Question

The Reading Passage describes a number of cause and effect relationships.

Match each Cause with its Effect.

The researchers publish their findings.
Oil spills into the sea.
Toxic chemicals are abundant in new cars.
Industrialised nations use a lot of energy.
Water is brought to a high temperature.
People fear pollutants in tap water.
Air conditioning systems are inadequate.

Question

Choose the appropriate letters A-D

As a result of their experiments, Dr Corsi’s team found that .......

Question

Choose the appropriate letters A-D

The Corsi research team hypothesised that .......

4 / 15

Indoor Pollution

Since the early eighties, we have been only too aware of the devastating effects of large-scale environmental pollution. Such pollution is generally the result of poor government planning in many developing nations or the short-sighted, selfish policies of the already industrialised countries which encourage a minority of the world’s population to squander the majority of its natural resources.

While events such as the deforestation of the Amazon jungle or the nuclear disaster in Chernobyl continue to receive high media exposure, as do acts of environmental sabotage, it must be remembered that not all pollution is on this grand scale. A large proportion of the world’s pollution has its source much closer to home. The recent spillage of crude oil from an oil tanker accidentally discharging its cargo straight into Sydney Harbour not only caused serious damage to the harbour foreshores but also created severely toxic fumes which hung over the suburbs for days and left the angry residents wondering how such a disaster could have been allowed to happen.

Avoiding pollution can be a full-time job. Try not to inhale traffic fumes; keep away from chemical plants and building sites; wear a mask when cycling. It is enough to make you want to stay at home. But that, according to a growing body of scientific evidence, would also be a bad idea. Research shows that levels of pollutants such as hazardous gases, particulate matter and other chemical ‘nasties’ are usually higher indoors than out, even in the most polluted cities. Since the average American spends 18 hours indoors for every hour outside, it looks as though many environmentalists may be attacking the wrong target.

The latest study, conducted by two environmental engineers, Richard Corsi and Cynthia Howard-Reed, of the University of Texas in Austin, and published in Environmental Science and Technology, suggests that it is the process of keeping clean that may be making indoor pollution worse. The researchers found that baths, showers, dishwashers and washing machines can all be significant sources of indoor pollution, because they extract trace amounts of chemicals from the water that they use and transfer them to the air.

Nearly all public water supplies contain very low concentrations of toxic chemicals, most of them left over from the otherwise beneficial process of chlorination. Dr. Corsi wondered whether they stay there when water is used, or whether they end up in the air that people breathe. The team conducted a series of experiments in which known quantities of five such chemicals were mixed with water and passed through a dishwasher, a washing machine, a shower head inside a shower stall or a tap in a bath, all inside a specially designed chamber. The levels of chemicals in the effluent water and in the air extracted from the chamber were then measured to see how much of each chemical had been transferred from the water into the air.

The degree to which the most volatile elements could be removed from the water, a process known as chemical stripping, depended on a wide range of factors, including the volatility of the chemical, the temperature of the water and the surface area available for transfer. Dishwashers were found to be particularly effective: the high-temperature spray, splashing against the crockery and cutlery, results in a nasty plume of toxic chemicals that escape when the door is opened at the end of the cycle.

In fact, in many cases, the degree of exposure to toxic chemicals in tap water by inhalation is comparable to the exposure that would result from drinking the stuff. This is significant because many people are so concerned about water-borne pollutants that they drink only bottled water, worldwide sales of which are forecast to reach $72 billion by next year. Dr Corsi’s results suggest that they are being exposed to such pollutants anyway simply by breathing at home.

The aim of such research is not, however, to encourage the use of gas masks when unloading the washing. Instead, it is to bring a sense of perspective to the debate about pollution. According to Dr Corsi, disproportionate effort is wasted campaigning against certain forms of outdoor pollution, when there is as much or more cause for concern indoors, right under people’s noses.

Using gas cookers or burning candles, for example, both result in indoor levels of carbon monoxide and particulate matter that are just as high as those to be found outside, amid heavy traffic. Overcrowded classrooms whose ventilation systems were designed for smaller numbers of children frequently contain levels of carbon dioxide that would be regarded as unacceptable on board a submarine. ‘New car smell’ is the result of high levels of toxic chemicals, not cleanliness. Laser printers, computers, carpets and paints all contribute to the noxious indoor mix.

The implications of indoor pollution for health are unclear. But before worrying about the problems caused by large-scale industry, it makes sense to consider the small-scale pollution at home and welcome international debate about this. Scientists investigating indoor pollution will gather next month in Edinburgh at the Indoor Air conference to discuss the problem. Perhaps unwisely, the meeting is being held indoors.

Question

Choose the appropriate letters A-D

In the first paragraph, the writer argues that pollution .......

5 / 15


Question

Choose the appropriate letters A-D

The Sydney Harbour oil spill was the result of a .......

6 / 15


Question

Choose the appropriate letters A-D

In the 3rd paragraph, the writer suggests that .......

7 / 15


Question

Choose the appropriate letters A-D

Regarding the dangers of pollution, the writer believes that .......

8 / 15

ROBOTS

Since the dawn of human ingenuity, people have devised ever more cunning tools to cope with work that is dangerous, boring, onerous, or just plain nasty. That compulsion has culminated in robotics - the science of conferring various human capabilities on machines.

A. The modern world is increasingly populated by quasi-intelligent gizmos whose presence we barely notice but whose creeping ubiquity has removed much human drudgery. Our factories hum to the rhythm of robot assembly arms. Our banking is done at automated teller terminals that thank us with rote politeness for the transaction. Our subway trains are controlled by tireless robo-drivers. Our mine shafts are dug by automated moles, and our nuclear accidents - such as those at Three Mile Island and Chernobyl - are cleaned up by robotic muckers fit to withstand radiation.

Such is the scope of uses envisioned by Karel Capek, the Czech playwright who coined the term ‘robot’ in 1920 (the word ‘robota’ means ‘forced labor’ in Czech). As progress accelerates, the experimental becomes the exploitable at record pace.

B. Other innovations promise to extend the abilities of human operators. Thanks to the incessant miniaturisation of electronics and micromechanics, there are already robot systems that can perform some kinds of brain and bone surgery with submillimeter accuracy - far greater precision than highly skilled physicians can achieve with their hands alone. At the same time, techniques of long-distance control will keep people even farther from hazard. In 1994 a ten-foot-tall NASA robotic explorer called Dante, with video-camera eyes and with spider-like legs, scrambled over the menacing rim of an Alaskan volcano while technicians 2,000 miles away in California watched the scene by satellite and controlled Dante’s descent.

C. But if robots are to reach the next stage of labour-saving utility, they will have to operate with less human supervision and be able to make at least a few decisions for themselves - goals that pose a formidable challenge. ‘While we know how to tell a robot to handle a specific error,’ says one expert, ‘we can’t yet give a robot enough common sense to reliably interact with a dynamic world.’ Indeed the quest for true artificial intelligence (AI) has produced very mixed results. Despite a spasm of initial optimism in the 1960s and 1970s, when it appeared that transistor circuits and microprocessors might be able to perform in the same way as the human brain by the 21st century, researchers lately have extended their forecasts by decades if not centuries.

D. What they found, in attempting to model thought, is that the human brain’s roughly one hundred billion neurons are much more talented - and human perception far more complicated - than previously imagined. They have built robots that can recognise the misalignment of a machine panel by a fraction of a millimeter in a controlled factory environment. But the human mind can glimpse a rapidly changing scene and immediately disregard the 98 per cent that is irrelevant, instantaneously focusing on the woodchuck at the side of a winding forest road or the single suspicious face in a tumultuous crowd. The most advanced computer systems on Earth can’t approach that kind of ability, and neuroscientists still don’t know quite how we do it.

E. Nonetheless, as information theorists, neuroscientists, and computer experts pool their talents, they are finding ways to get some lifelike intelligence from robots. One method renounces the linear, logical structure of conventional electronic circuits in favour of the messy, ad hoc arrangement of a real brain’s neurons. These ‘neural networks’ do not have to be programmed. They can ‘teach’ themselves by a system of feedback signals that reinforce electrical pathways that produced correct responses and, conversely, wipe out connections that produced errors. Eventually, the net wires itself into a system that can pronounce certain words or distinguish certain shapes.

F. In other areas researchers are struggling to fashion a more natural relationship between people and robots in the expectation that some day machines will take on some tasks now done by humans in, say, nursing homes. This is particularly important in Japan, where the percentage of elderly citizens is rapidly increasing. So experiments at the Science University of Tokyo have created a ‘face robot’ - a life-size, soft plastic model of a female head with a video camera embedded in the left eye - as a prototype. The researchers’ goal is to create robots that people feel comfortable around. They are concentrating on the face because they believe facial expressions are the most important way to transfer emotional messages. We read those messages by interpreting expressions to decide whether a person is happy, frightened, angry, or nervous. Thus the Japanese robot is designed to detect emotions in the person it is ‘looking at’ by sensing changes in the spatial arrangement of the person’s eyes, nose, eyebrows, and mouth. It compares those configurations with a database of standard facial expressions and guesses the emotion. The robot then uses an ensemble of tiny pressure pads to adjust its plastic face into an appropriate emotional response.

G. Other labs are taking a different approach, one that doesn’t try to mimic human intelligence or emotions. Just as computer design has moved away from one central mainframe in favour of myriad individual workstations - and single processors have been replaced by arrays of smaller units that break a big problem into parts that are solved simultaneously - many experts are now investigating whether swarms of semi-smart robots can generate a collective intelligence that is greater than the sum of its parts. That’s what beehives and ant colonies do, and several teams are betting that legions of mini-critters working together like an ant colony could be sent to explore the climate of planets or to inspect pipes in dangerous industrial situations.

Do the following statements agree with the information given in the reading passage?

YES       if the statement agrees with the information
NO         if the statement contradicts the information
NOT GIVEN if there is no information on this in the passage

Karel Capek successfully predicted our current uses for robots.
Lives were saved by the NASA robot, Dante.
Robots are able to make fine visual judgements.
The internal workings of the brain can be replicated by robots.
The Japanese have the most advanced robot systems.

9 / 15

ROBOTS

Since the dawn of human ingenuity, people have devised ever more cunning tools to cope with work that is dangerous, boring, onerous, or just plain nasty. That compulsion has culminated in robotics - the science of conferring various human capabilities on machines.

A. The modern world is increasingly populated by quasi-intelligent gizmos whose presence we barely notice but whose creeping ubiquity has removed much human drudgery. Our factories hum to the rhythm of robot assembly arms. Our banking is done at automated teller terminals that thank us with rote politeness for the transaction. Our subway trains are controlled by tireless robo-drivers. Our mine shafts are dug by automated moles, and our nuclear accidents - such as those at Three Mile Island and Chernobyl - are cleaned up by robotic muckers fit to withstand radiation.

Such is the scope of uses envisioned by Karel Capek, the Czech playwright who coined the term ‘robot’ in 1920 (the word ‘robota’ means ‘forced labor’ in Czech). As progress accelerates, the experimental becomes the exploitable at record pace.

B. Other innovations promise to extend the abilities of human operators. Thanks to the incessant miniaturisation of electronics and micro­mechanics, there are already robot systems that can perform some kinds of brain and bone surgery with submillimeter accuracy - far greater precision than highly skilled physicians can achieve with their hands alone. At the same time, techniques of long-distance control will keep people even farther from hazard. In 1994 a ten- foot-tall NASA robotic explorer called Dante, with video-camera eyes and with spider-like legs, scrambled over the menacing rim of an Alaskan volcano while technicians 2,000 miles away in California watched the scene by satellite and controlled Dante’s descent.

C. But if robots are to reach the next stage of labour-saving utility, they will have to operate with less human supervision and be able to make at least a few decisions for themselves - goals that pose a formidable challenge. ‘While we know how to tell a robot to handle a specific error,’ says one expert, ‘we can’t yet give a robot enough common sense to reliably interact with a dynamic world.’ Indeed the quest for true artificial intelligence (Al) has produced very mixed results. Despite a spasm of initial optimism in the 1960s and 1970s, when it appeared that transistor circuits and microprocessors might be able to perform in the same way as the human brain by the 21st century, researchers lately have extended their forecasts by decades if not centuries.

D. What they found, in attempting to model thought, is that the human brain’s roughly one hundred billion neurons are much more talented - and human perception far more complicated - than previously imagined. They have built robots that can recognise the misalignment of a machine panel by a fraction of a millimeter in a controlled factory environment. But the human mind can glimpse a rapidly changing scene and immediately disregard the 98 per cent that is irrelevant, instantaneously focusing on the woodchuck at the side of a winding forest road or the single suspicious face in a tumultuous crowd. The most advanced computer systems on Earth can’t approach that kind of ability, and neuroscientists still don’t know quite how we do it.

E. Nonetheless, as information theorists, neuroscientists, and computer experts pool their talents, they are finding ways to get some lifelike intelligence from robots. One method renounces the linear, logical structure of conventional electronic circuits in favour of the messy, ad hoc arrangement of a real brain’s neurons. These ‘neural networks’ do not have to be programmed. They can ‘teach’ themselves by a system of feedback signals that reinforce electrical pathways that produced correct responses and, conversely, wipe out connections that produced errors. Eventually, the net wires itself into a system that can pronounce certain words or distinguish certain shapes.

F. In other areas researchers are struggling to fashion a more natural relationship between people and robots in the expectation that some day machines will take on some tasks now done by humans in, say, nursing homes. This is particularly important in Japan, where the percentage of elderly citizens is rapidly increasing. So experiments at the Science University of Tokyo have created a ‘face robot’ - a life-size, soft plastic model of a female head with a video camera embedded in the left eye - as a prototype. The researchers’ goal is to create robots that people feel comfortable around. They are concentrating on the face because they believe facial expressions are the most important way to transfer emotional messages. We read those messages by interpreting expressions to decide whether a person is happy, frightened, angry, or nervous. Thus the Japanese robot is designed to detect emotions in the person it is ‘looking at’ by sensing changes in the spatial arrangement of the person’s eyes, nose, eyebrows, and mouth. It compares those configurations with a database of standard facial expressions and guesses the emotion. The robot then uses an ensemble of tiny pressure pads to adjust its plastic face into an appropriate emotional response.

G. Other labs are taking a different approach, one that doesn’t try to mimic human intelligence or emotions. Just as computer design has moved away from one central mainframe in favour of myriad individual workstations - and single processors have been replaced by arrays of smaller units that break a big problem into parts that are solved simultaneously - many experts are now investigating whether swarms of semi-smart robots can generate a collective intelligence that is greater than the sum of its parts. That’s what beehives and ant colonies do, and several teams are betting that legions of mini-critters working together like an ant colony could be sent to explore the climate of planets or to inspect pipes in dangerous industrial situations.

Question

Complete the summary below with words taken from paragraph F.

Use NO MORE THAN THREE WORDS for each answer.

Write your answers in boxes 25-27 on your answer sheet.

The prototype of the Japanese ‘face robot’ observes humans through a __________, which is planted in its head. It then refers to a __________ of typical ‘looks’ that the human face can have, to decide what emotion the person is feeling. To respond to this expression, the robot alters its own expression using a number of __________.


ROBOTS

Since the dawn of human ingenuity, people have devised ever more cunning tools to cope with work that is dangerous, boring, onerous, or just plain nasty. That compulsion has culminated in robotics - the science of conferring various human capabilities on machines.


The Reading Passage has seven paragraphs A-G.

From the list of headings below, choose the most suitable heading for each paragraph.



The accidental rainforest

According to ecological theory, rainforests are supposed to develop slowly over millions of years. But now ecologists are being forced to reconsider their ideas

When Peter Osbeck, a Swedish priest, stopped off at the mid-Atlantic island of Ascension in 1752 on his way home from China, he wrote of ‘a heap of ruinous rocks’ with a bare, white mountain in the middle. All it boasted was a couple of dozen species of plant, most of them ferns and some of them unique to the island.

And so it might have remained. But in 1843 British plant collector Joseph Hooker made a brief call on his return from Antarctica. Surveying the bare earth, he concluded that the island had suffered some natural calamity that had denuded it of vegetation and triggered a decline in rainfall that was turning the place into a desert. The British Navy, which by then maintained a garrison on the island, was keen to improve the place and asked Hooker's advice. He suggested an ambitious scheme for planting trees and shrubs that would revive rainfall and stimulate a wider ecological recovery. And, perhaps lacking anything else to do, the sailors set to with a will.

In 1845, a naval transport ship from Argentina delivered a batch of seedlings. In the following years, more than 200 species of plant arrived from South Africa, from England came 700 packets of seeds, including those of two species that especially liked the place: bamboo and prickly pear. With sailors planting several thousand trees a year, the bare white mountain was soon cloaked in green and renamed Green Mountain, and by the early twentieth century, the mountain's slopes were covered with a variety of trees and shrubs from all over the world.

Modern ecologists throw up their hands in horror at what they see as Hooker's environmental anarchy. The exotic species wrecked the indigenous ecosystem, squeezing out the island's endemic plants. In fact, Hooker knew well enough what might happen. However, he saw greater benefit in improving rainfall and encouraging more prolific vegetation on the island.

But there is a much deeper issue here than the relative benefits of sparse endemic species versus luxuriant imported ones. And as botanist David Wilkinson of Liverpool John Moores University in the UK pointed out after a recent visit to the island, it goes to the heart of some of the most dearly held tenets of ecology. Conservationists' understandable concern for the fate of Ascension’s handful of unique species has, he says, blinded them to something quite astonishing: the fact that the introduced species have been a roaring success.

Today's Green Mountain, says Wilkinson, is ‘a fully functioning man-made tropical cloud forest’ that has grown from scratch from a ragbag of species collected more or less at random from all over the planet. But how could it have happened? Conventional ecological theory says that complex ecosystems such as cloud forests can emerge only through evolutionary processes in which each organism develops in concert with others to fill particular niches. Plants co-evolve with their pollinators and seed dispersers, while microbes in the soil evolve to deal with the leaf litter.

But that’s not what happened on Green Mountain. And the experience suggests that perhaps natural rainforests are constructed far more by chance than by evolution. Species, say some ecologists, don’t so much evolve to create ecosystems as make the best of what they have. ‘The Green Mountain system is a man-made system that has produced a tropical rainforest without any co-evolution between its constituent species,’ says Wilkinson.

Not everyone agrees. Alan Gray, an ecologist at the University of Edinburgh in the UK, argues that the surviving endemic species on Green Mountain, though small in number, may still form the framework of the new ecosystem. The new arrivals may just be an adornment, with little structural importance for the ecosystem.

But to Wilkinson, this sounds like clutching at straws. And the idea of the instant formation of rainforests sounds increasingly plausible as research reveals that supposedly pristine tropical rainforests from the Amazon to south-east Asia may in places be little more than the overgrown gardens of past rainforest civilisations.

The most surprising thing of all is that no ecologists have thought to conduct proper research into this human-made rainforest ecosystem. A survey of the island’s flora conducted six years ago by the University of Edinburgh was concerned only with endemic species. They characterised everything else as a threat. And the Ascension authorities are currently turning Green Mountain into a national park where introduced species, at least the invasive ones, are earmarked for culling rather than conservation.

Conservationists have understandable concerns, Wilkinson says. At least four endemic species have gone extinct on Ascension since the exotics started arriving. But in their urgency to protect endemics, ecologists are missing out on the study of a great enigma.

‘As you walk through the forest, you see lots of leaves that have had chunks taken out of them by various insects. There are caterpillars and beetles around,’ says Wilkinson. ‘But where did they come from? Are they endemic or alien? If alien, did they come with the plant on which they feed or discover it on arrival?’ Such questions go to the heart of how rainforests happen.

The Green Mountain forest holds many secrets. And the irony is that the most artificial rainforest in the world could tell us more about rainforest ecology than any number of natural forests.

Question
Choose the correct answer.
According to Wilkinson, studies of insects on the island could demonstrate


Question
Choose the correct answer.
Overall, what feature of the Ascension rainforest does the writer stress?


Question
Choose the correct answer.
Wilkinson suggests that conservationists' concern about the island is misguided because

14 / 15

The accidental rainforest

According to ecological theory, rainforests are supposed to develop slowly over millions of years. But now ecologists are being forced to reconsider their ideas

When Peter Osbeck. a Swedish priest, stopped off at the mid-Atlantic island of Ascension in 1752 on his way home from China, he wrote of ‘a heap of ruinous rocks’ with a bare, white mountain in the middle. All it boasted was a couple of dozen species of plant, most of them ferns and some of them unique to the island.

And so it might have remained. But in 1843 British plant collector Joseph Hooker made a brief call on his return from Antarctica. Surveying the bare earth, he concluded that the island had suffered some natural calamity that had denuded it of vegetation and triggered a decline in rainfall that was turning the place into a desert. The British Navy, which by then maintained a garrison on the island, was keen to improve the place and asked Hooker's advice. He suggested an ambitious scheme for planting trees and shrubs that would revive rainfall and stimulate a wider ecological recovery. And, perhaps lacking anything else to do, the sailors set to with a will.

In 1845, a naval transport ship from Argentina delivered a batch of seedlings. In the following years, more than 200 species of plant arrived from South Africa, from England came 700 packets of seeds, including those of two species that especially liked the place: bamboo and prickly pear. With sailors planting several thousand trees a year, the bare white mountain was soon cloaked in green and renamed Green Mountain, and by the early twentieth century, the mountain's slopes were covered with a variety of trees and shrubs from all over the world.

Modern ecologists throw up their hands in horror at what they see as Hookers environmental anarchy. The exotic species wrecked the indigenous ecosystem, squeezing out the islands endemic plants. In fact. Hooker knew well enough what might happen. However, he saw greater benefit in improving rainfall and encouraging more prolific vegetation on the island.

But there is a much deeper issue here than the relative benefits of sparse endemic species versus luxuriant imported ones. And as botanist David Wilkinson of Liverpool John Moores University in the UK pointed out after a recent visit to the island, it goes to the heart of some of the most dearly held tenets of ecology. Conservationists' understandable concern for the fate of Ascension’s handful of unique species has, he says, blinded them to something quite astonishing the fact that the introduced species have been a roaring success.

Today's Green Mountain, says Wilkinson, is ‘a fully functioning man-made tropical cloud forest' that has grown from scratch from a ragbag of species collected more or less at random from all over the planet. But how could it have happened? Conventional ecological theory says that complex ecosystems such as cloud forests can emerge only through evolutionary processes in which each organism develops in concert with others to fill particular niches. Plants eco-evolve with their pollinators and seed dispersers, while microbes in the soil evolve to deal with the leaf litter.

But that’s not what happened on Green Mountain. And the experience suggests that perhaps natural rainforests are constructed far more by chance than by evolution. Species, say some ecologists, don’t so much evolve to create ecosystems as make the best of what they have. ‘The Green Mountain system is a man-made system that has produced a tropical rainforest without any co-evolution between its constituent species,’ says Wilkinson.

Not everyone agrees. Alan Gray, an ecologist at the University of Edinburgh in the UK. argues that the surviving endemic species on Green Mountain, though small in number, may still form the framework of the new' ecosystem. The new arrivals may just be an adornment, with little structural importance for the ecosystem.

But to Wilkinson, this sounds like clutching at straws. And the idea of the instant formation of rainforests sounds increasingly plausible as research reveals that supposedly pristine tropical rainforests from the Amazon to south-east Asia may in places be little more titan the overgrown gardens of past rainforest civilisations.

The most surprising thing of all is that no ecologists have thought to conduct proper research into this human-made rainforest ecosystem. A survey of the island’s flora conducted six years ago by the University of Edinburgh was concerned only with endemic species. They characterised everything else as a threat. And the Ascension authorities are currently turning Green Mountain into a national park where introduced species, at least the invasive ones, are earmarked for culling rather than conservation.

Conservationists have understandable concerns, Wilkinson says. At least four endemic species have gone extinct on Ascension since the exotics started arriving. But in their urgency to protect endemics, ecologists are missing out on the study of a great enigma.

‘As you walk through the forest, you see lots of leaves that have had chunks taken out of them by various insects. There are caterpillars and beetles around,’ says Wilkinson. ‘But where did they come from? Are they endemic or alien? If alien, did they come with the plant on which they feed or discover it on arrival?’ Such questions go to the heart of how rainforests happen.

The Green Mountain forest holds many secrets. And the irony is that the most artificial rainforest in the world could tell us more about rainforest ecology than any number of natural forests.

Questions
Complete each sentence with the correct ending A-G from the box below.
David Wilkinson says the creation of the rainforest in Ascension is important because it shows that
Additional support for Wilkinson's theory comes from findings that
Alan Gray questions Wilkinson’s theory, claiming that
Wilkinson says the existence of Ascension’s rainforest challenges the theory that
The reason for modern conservationists’ concern over Hooker's tree planting programme is that


The accidental rainforest

According to ecological theory, rainforests are supposed to develop slowly over millions of years. But now ecologists are being forced to reconsider their ideas

When Peter Osbeck, a Swedish priest, stopped off at the mid-Atlantic island of Ascension in 1752 on his way home from China, he wrote of ‘a heap of ruinous rocks’ with a bare, white mountain in the middle. All it boasted was a couple of dozen species of plant, most of them ferns and some of them unique to the island.

And so it might have remained. But in 1843 British plant collector Joseph Hooker made a brief call on his return from Antarctica. Surveying the bare earth, he concluded that the island had suffered some natural calamity that had denuded it of vegetation and triggered a decline in rainfall that was turning the place into a desert. The British Navy, which by then maintained a garrison on the island, was keen to improve the place and asked Hooker's advice. He suggested an ambitious scheme for planting trees and shrubs that would revive rainfall and stimulate a wider ecological recovery. And, perhaps lacking anything else to do, the sailors set to with a will.

In 1845, a naval transport ship from Argentina delivered a batch of seedlings. In the following years, more than 200 species of plant arrived from South Africa. From England came 700 packets of seeds, including those of two species that especially liked the place: bamboo and prickly pear. With sailors planting several thousand trees a year, the bare white mountain was soon cloaked in green and renamed Green Mountain, and by the early twentieth century, the mountain's slopes were covered with a variety of trees and shrubs from all over the world.

Modern ecologists throw up their hands in horror at what they see as Hooker's environmental anarchy. The exotic species wrecked the indigenous ecosystem, squeezing out the island's endemic plants. In fact, Hooker knew well enough what might happen. However, he saw greater benefit in improving rainfall and encouraging more prolific vegetation on the island.

But there is a much deeper issue here than the relative benefits of sparse endemic species versus luxuriant imported ones. And as botanist David Wilkinson of Liverpool John Moores University in the UK pointed out after a recent visit to the island, it goes to the heart of some of the most dearly held tenets of ecology. Conservationists' understandable concern for the fate of Ascension’s handful of unique species has, he says, blinded them to something quite astonishing: the fact that the introduced species have been a roaring success.

Today's Green Mountain, says Wilkinson, is ‘a fully functioning man-made tropical cloud forest’ that has grown from scratch from a ragbag of species collected more or less at random from all over the planet. But how could it have happened? Conventional ecological theory says that complex ecosystems such as cloud forests can emerge only through evolutionary processes in which each organism develops in concert with others to fill particular niches. Plants co-evolve with their pollinators and seed dispersers, while microbes in the soil evolve to deal with the leaf litter.

But that’s not what happened on Green Mountain. And the experience suggests that perhaps natural rainforests are constructed far more by chance than by evolution. Species, say some ecologists, don’t so much evolve to create ecosystems as make the best of what they have. ‘The Green Mountain system is a man-made system that has produced a tropical rainforest without any co-evolution between its constituent species,’ says Wilkinson.

Not everyone agrees. Alan Gray, an ecologist at the University of Edinburgh in the UK, argues that the surviving endemic species on Green Mountain, though small in number, may still form the framework of the new ecosystem. The new arrivals may just be an adornment, with little structural importance for the ecosystem.

But to Wilkinson, this sounds like clutching at straws. And the idea of the instant formation of rainforests sounds increasingly plausible as research reveals that supposedly pristine tropical rainforests from the Amazon to south-east Asia may in places be little more than the overgrown gardens of past rainforest civilisations.

The most surprising thing of all is that no ecologists have thought to conduct proper research into this human-made rainforest ecosystem. A survey of the island’s flora conducted six years ago by the University of Edinburgh was concerned only with endemic species. They characterised everything else as a threat. And the Ascension authorities are currently turning Green Mountain into a national park where introduced species, at least the invasive ones, are earmarked for culling rather than conservation.

Conservationists have understandable concerns, Wilkinson says. At least four endemic species have gone extinct on Ascension since the exotics started arriving. But in their urgency to protect endemics, ecologists are missing out on the study of a great enigma.

‘As you walk through the forest, you see lots of leaves that have had chunks taken out of them by various insects. There are caterpillars and beetles around,’ says Wilkinson. ‘But where did they come from? Are they endemic or alien? If alien, did they come with the plant on which they feed or discover it on arrival?’ Such questions go to the heart of how rainforests happen.

The Green Mountain forest holds many secrets. And the irony is that the most artificial rainforest in the world could tell us more about rainforest ecology than any number of natural forests.

Questions
Do the following statements agree with the information given in The Reading Passage?

TRUE         if the statement agrees with the information
FALSE       if the statement contradicts the information
NOT GIVEN  if there is no information on this

When Peter Osbeck visited Ascension, he found no inhabitants on the island.

The natural vegetation on the island contained some species which were found nowhere else.

Joseph Hooker assumed that human activity had caused the decline in the island's plant life.

British sailors on the island took part in a major tree planting project.

Hooker sent details of his planting scheme to a number of different countries.

The bamboo and prickly pear seeds sent from England were unsuitable for Ascension.