TIME natural disaster

Landslides May Be Inevitable, But They’re Not Yet Predictable

A massive landslide near Oso, Washington killed at least 16 people, with far more still missing Photo by David Ryder/Getty Images

There was plenty of warning before the deadly Washington landslide. Why didn't it help?

There was the rain. The tiny town of Oso in northwestern Washington state is used to wet weather—rain falls every other day on average—but the past few months have been positively biblical, with precipitation as much as 200% above normal. There was the geography: steep terrain composed of glacial sediment, which is a loose mix of sand, silt and boulders, the geological equivalent of a banana peel. And there was the history. Mudslides have hit the land around Oso numerous times over the past few decades, including as recently as 2006. There’s a reason that some residents used to call the area “Slide Hill.”

Yet when the earth gave way on the morning of Mar. 22, no one was ready for the scale of devastation. More than 15 million cu. yards (11.5 million cu. m) of earth, equivalent to three million dump truck loads, came tumbling down, burying nearly 50 homes in a hilly area 60 miles (97 km) northeast of Seattle. At least 16 people have died in the landslide, which covered more than a square mile (2.6 sq. km), and more than 170 people are listed as missing, even as hope of finding survivors dwindles. Even if the number of missing comes down, as officials have predicted, this will go down as one of the deadliest landslides in U.S. history.

There was no shortage of warnings. As the Seattle Times reported earlier this week, a study by outside consultants had been filed with the U.S. Army Corps of Engineers in 1999 warning of “the potential for a large catastrophic failure” on the very hill that collapsed on Mar. 22. A 2000 study by the engineer and geomorphologist Tracy Drury warned that future landslides would take an increasing toll because “human development of the floodplain in this area has steadily increased.” Yet while local officials claimed that residents knew of the landslide risks, there’s little evidence that much was done to mitigate those risks. A 1,300 ft. (396 m) “crib wall” of boom logs anchored by 9,000 lb. (4,082 kg) concrete blocks every 50 ft. (15 m) was built after the 2006 landslide, but it was helpless against a slide of this scale. “The place was set up to be unstable,” says David Montgomery, a geomorphologist at the University of Washington.

But despite all that, it’s not surprising that Oso wasn’t ready when the earth collapsed. Even though they kill more than 25 Americans and cause more than $2 billion in damages each year on average, landslides are the “underappreciated natural hazard,” as Montgomery puts it. That’s in part, as Andrew Freedman points out on Mashable, because there’s no uniform, national monitoring system:

Instead, the USGS, working with the National Weather Service (NWS) and state and local agencies, has put together a “patchwork quilt” of monitoring and experimental warning programs, based upon rainfall and soil moisture and pressure measurements. One such program has been in place near Puget Sound, but did not cover the area where the March 22 landslide occurred.

This is despite the fact that landslides are the most geographically dispersed natural hazard—all 50 states face at least some mudslide risk. But the widespread nature of landslide risk is part of the reason why there is no uniform warning system, although the USGS has put together a national map that identifies high-risk zones. (Unsurprisingly, they tend to be mountainous regions like the Appalachians, the Rockies and the Pacific Coastal ranges.) While landslides as a whole are common, they occur only rarely at any given location—even places as inherently unstable as the hills above Oso can go decades between slides. And while decades of study—and a national network of radar stations—have enabled meteorologists to predict hurricanes, tornadoes and other extreme weather with increasing precision, it is still incredibly difficult to identify when a landslide-prone hill will finally crumble. Heavy rainfall obviously plays a role, allowing water to infiltrate and loosen soil, but slides can also be triggered by earthquakes or erosion. “We can identify hazard zones, the places where you can expect a high probability of failure,” says Montgomery. “But it’s hard to say this slope will go on this particular day. We just don’t have enough data about the internal plumbing of the hillside.”

And it’s not just mountain towns that are at risk of landslides. Oregon state geologists have said that as much as 30% of metro Portland is in a high-risk zone for landslides, and a 2013 study by the University of Washington found that some 8,000 buildings in Seattle are at risk of an earthquake-induced landslide. Internationally, the danger is far greater: a 2012 study in Geology estimated that rainfall-induced landslides alone—like the one near Oso—killed more than 32,000 people between 2004 and 2010, a massive toll, even though mudslides tend to get far less attention than earthquakes, hurricanes or tornadoes. Homes with a view come with danger attached, even if it’s a danger most people don’t know about. Changing that fact might be the best way to ensure that the next major landslide is nowhere near as deadly.

TIME energy

The Afterlife of Oil Spills

Nearly 11 million gallons of oil spilled into Prince William Sound after the 1989 Exxon Valdez spill Chris Wilkins—AFP/Getty Images

Twenty-five years after the Exxon Valdez oil spill, scientists are still reckoning with the ecological cost

On a shelf at my home, I have a small jar that contains a smear of crude oil. I dug it up on the shore of a small island in Alaska’s Prince William Sound in May of 2009, on a reporting trip for a story about the legacy of the Exxon Valdez oil spill. That crude oil is more than 25 years old now, and its existence is a reminder of just how long-lived the effects of a major oil accident can be. Years after the spill has been stopped, after the press has gone home, the crude oil released into a river or a sea will affect the biology of almost anything it touches—just as it continues to weigh on the people who live and work in the area fouled by crude.

That’s worth remembering as we observe the 25th anniversary of the Exxon Valdez spill today. On Mar. 25, 1989, a tanker captained by Joseph Hazelwood ran aground on Alaska’s Bligh Reef, spilling nearly 11 million gallons (42 million liters) of crude oil into Alaska’s near-pristine Prince William Sound. The oil spread out to more than 1,300 miles (2,100 km) of coastline, choking bird and sea life, and permanently damaging the region’s ecology. Even now, you can still find some of that oil on remote beaches in the Sound, preserved by the cold. As of 2010, just 12 of the 32 monitored wildlife populations, habitats and resource services affected by the spill were considered fully recovered or very likely recovered. The once-prosperous Pacific herring fishery remains closed after the population of the fish crashed in the years following the spill. While much of the Sound has rebounded, it will never be the same—even a quarter century later.

The Exxon Valdez disaster was the biggest oil spill in U.S. history—until April 2010, when BP’s Deepwater Horizon drilling rig was destroyed in a well blowout, leading to an oil gusher that lasted 87 days and resulted in more than 200 million gallons (757 million liters) of crude flowing into the Gulf of Mexico. While much of the oil was either cleaned up in a response operation that cost billions of dollars or was broken down by bacteria in the warm Gulf waters, the ecological damage from the spill was major, and almost four years later, scientists are only beginning to gauge the cost to marine life.

Here’s one example: in a new study published in the Proceedings of the National Academy of Sciences, researchers from the National Oceanic and Atmospheric Administration (NOAA) and several universities assessed the impact of Deepwater Horizon oil on developing embryos of bluefin tuna, yellowfin tuna and amberjack, all commercially important fish species that spawn near the site of the accident. The research team exposed embryos taken from breeding facilities to polycyclic aromatic hydrocarbons (PAHs), toxic compounds released by crude oil. In each tested species, PAH exposure—at levels the researchers said were realistic for the Gulf spill—was linked to abnormalities in heart function and defects in heart development. As the paper concluded:

Losses of early life stages were therefore likely for Gulf populations of tunas, amberjack, swordfish, billfish, and other large predators that spawned in oiled surface habitats.

The PNAS study isn’t the first to blame the BP oil spill for lingering problems with Gulf marine life; a study published earlier this month linked the spill to dwindling numbers of bottlenose dolphins in Louisiana’s Barataria Bay. Nor will it be the last. But that hasn’t slowed the rush to keep drilling going in the Gulf of Mexico, a rush that BP has now been allowed to rejoin after initially being barred from participation in lease sales in the region. The British company won 24 out of 31 bids entered in an Interior Department offshore drilling lease sale held last week, paying more than $41 million for the right to explore for oil and gas in the region. Altogether, 1.7 million acres (0.69 million hectares) off the coast of Louisiana, Mississippi and Alabama were opened up for new drilling. Despite evidence of the risks, nothing seems likely to stop operations in the Gulf.

As long as there is offshore drilling and marine transport of oil, the risks of accidents will exist. Just two days before the 25th anniversary of the Exxon Valdez spill, at least 168,000 gallons (636,000 liters) of oil spilled from a barge in Galveston Bay in Texas. The spill is blocking the bustling Houston Ship Channel, one of the busiest seaways in the U.S., and threatens an environmentally sensitive bird sanctuary nearby. Given the small size of the spill, it won’t have the kind of major aftereffects seen in the Valdez and BP disasters. But it’s one more reminder that as long as our economy remains so dependent on oil, there will always be the risk of another catastrophe that could linger on and on.

[Update: BP sent along a statement in response to the PNAS study—I'm including it below:

The paper provides no evidence to suggest a population-level impact on tuna, amberjack or other pelagic fish species in the Gulf of Mexico. The oil concentrations used in these lab experiments were rarely seen in the Gulf during or after the Deepwater Horizon accident. In addition, the authors themselves note that it is nearly impossible to determine the early life impact to these species. To overcome this challenge, it would take more information than what’s presented in this paper.

It's worth noting that the researchers mention in the paper how difficult it is to sample live but fragile yolksac larvae of big pelagic species like the bluefin tuna in the wild, which is why the embryos used in the study were collected from breeding stations on land, not the Gulf itself.]

TIME Human Body

Your Nose Can Smell at Least 1 Trillion Scents

A new study demonstrates that your sense of smell is far more sensitive than you think, and that the world of scents is infinitely more varied. Scientists previously thought humans could smell only around 10,000 different odors

Human beings tend to think of themselves as visual first, auditory second, then touch and taste. Down at the bottom of the five senses is smell—at least when it comes to how often we’re aware of it. And while we all know how pungent a bad smell can be, and how memorable a good smell is, we probably don’t think our olfactory sense is all that sensitive, at least compared to the rest of our senses—or to the keen sense of smell exhibited in the animal world. (Sharks can’t literally smell fear, but they can detect the smell of fish even at a concentration of one part per 10 billion parts of water.)

While scientists estimate that human beings can discriminate between several million different colors and almost half a million different sounds, they have long assumed that we can distinguish perhaps 10,000 different odors. Most of the time humans are barely aware they’re smelling anything at all.

But in reality, our noses are incredibly sensitive—and a new study published in Science provides evidence of just how amazing our sniffers are. Researchers at Rockefeller University and the Howard Hughes Medical Institute (HHMI) tested volunteers’ sense of smell using precisely crafted mixtures of odor molecules. After extrapolating the results, the researchers estimated that the average human being can distinguish between 1 trillion different odors, if not more, which makes our noses far more discriminating than our eyes or ears.

“The message here is that we have more sensitivity in our sense of smell than for which we give ourselves credit,” said Andreas Keller, a research associate at Rockefeller’s Laboratory of Neurogenetics and Behavior and the lead author on the Science study, in a statement. “We just don’t pay attention to it and we don’t use it in everyday life.”

The idea that human beings could only distinguish between 10,000 smells has been around since a 1927 study that posited four elementary odors that people are able to distinguish on a nine-point scale. Do the math and you get 6,561 discernible olfactory sensations, a number that was later rounded up to 10,000. Although that value was widely cited, most scientists were skeptical—after all, the human eye uses just three light receptors to see millions of colors, while the typical nose has 400 different olfactory receptors. But as Leslie Vosshall of HHMI, another study co-author, noted: “For smell, nobody ever took the time to test.”
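For the record, the arithmetic behind that 1927 figure is simply the number of combinations of four ratings, each on a nine-point scale:

```latex
% four elementary odors, each rated on a 9-point scale
9^4 = 9 \times 9 \times 9 \times 9 = 6{,}561 \approx 10{,}000
```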

Obviously the researchers weren’t going to try to test each smell individually—that would take forever. Instead, they used 128 different odorant molecules to create smell mixtures, using 10, 20 and 30 different components. The molecules themselves evoked familiar smells like cut grass, but when combined in random mixtures of as many as 30 different types, the smells became unfamiliar. That didn’t matter—the study subjects weren’t supposed to identify the smells. Instead, the researchers would present them with three vials of scents—two that were identical, and one that was unique—and asked them to indicate which scent was different than the others. Each of the 26 subjects made 264 comparisons.
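To see why extrapolation was necessary, consider the combinatorics of the mixture space. A quick sketch (illustrative only; these are counts of possible mixtures, not what the team actually tested):

```python
from math import comb

# Number of distinct mixtures that can be drawn from a pool of
# 128 odorant molecules, for the three mixture sizes in the study
for k in (10, 20, 30):
    print(f"mixtures of {k} components: {comb(128, k):.2e}")
```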

Keller and his colleagues found that their study subjects could generally tell the difference between mixtures containing as much as 51.17% of the same components. Much higher than that, and they were unable to distinguish the smells—though it’s worth noting that some subjects could distinguish between smell mixtures that were as much as 90% similar. The researchers then extrapolated the total number of mixtures possible in each of their three categories. Since the majority of their study subjects could distinguish between mixtures that were 51.17% similar or less, they estimate that the average human can discriminate more than 1 trillion separate smells.

That is a vast number of scents, and it’s almost certainly too low, because there are many more odor molecules in the real world that could be mixed in nearly uncountable ways. So it’s not just that human beings have sensitive olfactory systems—though not infinitely sensitive, or more people would have been able to distinguish mixtures that were more than 50% similar. It’s that the world offers a near-infinite variety of smells. If human beings think their sense of smell isn’t that important, it has more to do with the fact that we’ve done our best to eliminate smells through refrigeration, air filtration, and yes, daily showers. As Vosshall put it:

The world is always changing. Plants are evolving new smells. Perfume companies are making new scents. You might move to some part of the world where you’ve never encountered the fruits and vegetables and flowers that grow there. But your nose is ready. With a sensory system that is that complex, we are fully ready for anything.

The nose, as it turns out, really does know.

TIME climate change

The End of Spring in a Warming World

Climate change is altering the timing and duration of wildflower blooming Photo by Auscape/UIG via Getty Images

As the planet warms, wildflower blooming and other signs of spring are moving earlier and earlier—altering our idea of what the seasons mean, and creating an unpredictable ripple effect

The first day of spring is finally here, even if it doesn’t feel that way in much of the still frigid East. Of course, the official beginning of spring has less to do with the weather than it does with Earth’s orbit around the sun—the vernal equinox is the day when the tilt of the planet’s axis is inclined neither toward nor away from the sun. (This also happens during the autumnal equinox at the beginning of fall, and of course the dates are reversed for the Southern Hemisphere.) Wherever you are in the world on Mar. 20, it’s all equinox.

But while the calendar stays the same, the seasons seem to be changing. As the planet warms, spring has been springing earlier. A 2009 study in Nature estimated that spring now comes about 1.7 days earlier than it did during the first half of the 20th century. Decades of data collected around Henry David Thoreau’s Walden Pond and Aldo Leopold’s plot of land in Wisconsin indicate that spring flowers have been blooming earlier and earlier in the year, responding to warmer temperatures. And a study published this week in the Proceedings of the National Academy of Sciences (PNAS) that used 39 years of data concludes that wildflowers in the Colorado Rocky Mountains are blooming weeks earlier than they once did and producing their last blooms later. The bloom season, which used to run from late May to early September, now lasts from late April to late September, some 35 days longer.

An earlier spring, a longer blooming season—are these bad things? A lot of climate change skeptics don’t think so.

I can sympathize—after the winter we’ve had in the East, an earlier and longer spring sounds ideal. But the fact that warming seems to be changing the timing of the seasons should concern us, as any phenologist could tell you. Phenology is the study of periodic animal and plant lifecycles, and looks at how the regular variations in the climate that we call seasons affect life. It’s a rich subject area because nearly every form of life runs on recurring cycles governed by the external cues of the environment. This is how Leopold, one of the foremost conservationists in American history and a keen observer of the seasons, put it:

Many of the events of the annual cycle recur year after year in a regular order. A year-to-year record of this order is a record of the rates at which solar energy flows to and through living things. They are the arteries of the land. By tracing their response to the sun, phenology may eventually shed some light on that ultimate enigma, the land’s inner workings.

Take that wildflower study I cited above. Paul CaraDonna, a graduate student at the University of Arizona in Tucson and the lead author of the study, was drawn to the research in part because of his interest in the native bees and other pollinators at the Rocky Mountain Biological Laboratory, 9,500 ft. above sea level near Crested Butte, Colo. Bees depend on flowers for nutrition, so when the bloom season shifts, it’s going to affect the bees. Despite the longer blooming season, plants aren’t producing more flowers. With the same number of flowers blooming over a longer period of time, bees could face a situation where there are fewer flowers in bloom at any given time. “The competition can go up between pollinators for these resources because there’s going to be lesser availability over a greater period of time,” says CaraDonna. And if bees experience repeated population loss, that can in turn impact the very plants that depend on the insects for pollination.
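The squeeze on pollinators is, at bottom, simple division: the same bloom spread over more days means fewer flowers on any one day. A toy illustration, with a made-up flower count:

```python
# Hypothetical: the same total bloom spread over a longer season
total_flowers = 1000
old_season_days = 103  # roughly late May to early September
new_season_days = 138  # roughly late April to late September, 35 days longer

print(total_flowers / old_season_days)  # ~9.7 flowers available per day
print(total_flowers / new_season_days)  # ~7.2 flowers available per day
```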

An earlier blooming season can also place wildflowers in danger if they’re hit by a late frost. The meadows the team studies can experience frosts as late as mid-June. “If the snow melts in mid-April, the flowers can have a month and a half before they get zapped by frost in a fragile state,” says CaraDonna. “If you rely on the flowers and they get hit that way, you’ll have no food.” That’s precisely what happened in 2012, when a frost in mid-May wiped out a huge number of flowers that had bloomed early in the season.

Change the timing of spring, and there’s no telling what can happen—although as Amy Iler, another co-author on the study, points out: “It would be very surprising if everything turns out perfectly fine.” Iler and her colleagues are only beginning to piece together how a shifting blooming season will change the environment of the Rocky Mountain meadows—and it will be even more difficult for ecologists to predict the response elsewhere, in places that lack 39 years of minutely recorded data. (The research was begun in 1974 by David Inouye, a biologist now at the University of Maryland, and over the years Inouye and his collaborators have counted more than 2 million separate flowers.) The land is still “an enigma,” as Leopold put it, but the X factor of climate change will only make the mystery of the natural world that much more complex. As we add more and more greenhouse gases into the atmosphere, we’re putting ourselves on the path towards an ever more uncertain future, one where even the seasons themselves become unmoored from the calendar.

But if nature’s response to the phenological changes of global warming remains to be fully discovered, our emotional response is already being felt, as the novelist Zadie Smith wrote recently in the New York Review of Books:

There is the scientific and ideological language for what is happening to the weather, but there are hardly any intimate words. Is that surprising? People in mourning tend to use euphemism; likewise the guilty and ashamed. The most melancholy of all the euphemisms: “The new normal.” “It’s the new normal,” I think, as a beloved pear tree, half-drowned, loses its grip on the earth and falls over. The train line to Cornwall washes away—the new normal. We can’t even say the word “abnormal” to each other out loud: it reminds us of what came before. Better to forget what once was normal, the way season followed season, with a temperate charm only the poets appreciated.

Smith isn’t quite right—as spring becomes a moving target, there is no normal any longer, new or old. March 20 will remain the equinox. The rest remains to be seen.

TIME climate science

Scientists Sound the Alarm on Global Warming, But Americans Sleep In

Rapid loss of Arctic sea ice is one of many risks of unchecked climate change Photo by Joe Raedle/Getty Images

A new report from the country's preeminent scientific body warns of the danger of unchecked global warming

Gallup released the results of a new poll on Americans’ opinions about climate change earlier this month. For those concerned about global warming, the data was not promising. On one hand, about two-thirds of Americans believed that global warming is happening or will happen during their lifetime—which, incidentally, happens to be the correct answer. But only about 36% of Americans said they believe that global warming will pose a “serious threat to their way of life” during their lifetimes. Climate change is also very low on the priority list for most Americans—51% of those surveyed said they worry about climate change very little or not at all. And 42% of Americans said they believe the seriousness of global warming is “generally exaggerated” in the news.

This is just one survey, and there is no shortage of methodological problems with most opinion polls. But the results make a sobering backdrop to yet another new report from a scientific organization that practically screams about the imminent dangers of global warming. This one is from the American Association for the Advancement of Science (AAAS), the preeminent scientific organization in the U.S., and it focuses starkly on the risks posed by unchecked climate change—both the more modest dangers that scientists are virtually certain about, and the catastrophic threats that might not materialize, but which would wreak havoc on the planet if they do:

The evidence is overwhelming: levels of greenhouse gases in the atmosphere are rising. Temperatures are going up. Springs are arriving earlier. Ice sheets are melting. Sea level is rising. The patterns of rainfall and drought are changing. Heat waves are getting worse as is extreme precipitation. The oceans are acidifying. The science linking human activities to climate change is analogous to the science linking smoking to lung and cardiovascular diseases.

That last bit is important. It took decades to establish a firm scientific connection between smoking and cancer—and it was an effort that tobacco companies fought every inch of the way, using doubt-stoking tactics that would later be taken up by the fossil fuel industry as it tried to make the public skeptical about man-made climate change. No doctor can tell you exactly how much smoking increases your risk of cancer and cardiovascular disease. But the science is strong enough—and lung cancer is a scary enough danger—that smoking rates have fallen drastically over the past few decades, reaching an all-time low this year.

That’s what the AAAS is trying to do with this report—cement the connection between man-made climate change and environmental and social disaster, to the point where the public will support steps to reduce carbon emissions. “This project is to make clear to the public and to policymakers what we know,” said Alan Leshner, the head of the AAAS. “The earth is warming and human behavior is heavily responsible for it. We need to do something now.”

Of course, this message has been repeated over and over again in scores of similar scientific reports, all sounding the same warning notes. Later this month the Intergovernmental Panel on Climate Change (IPCC) will come out with the next chapter of its latest assessment on climate science—this time focusing on the expected impacts of global warming—and you can expect a similar message. But as Justin Gillis of the New York Times writes, one difference with the AAAS report is its lead author, Mario Molina, who was part of a group of researchers in the 1970s who found a connection between chlorofluorocarbons (CFCs) and ozone depletion:

At a Fort Lauderdale, Fla., conference in 1972, a California scientist named F. Sherwood Rowland learned that [CFCs] were accumulating in the air. What, he wondered, would happen to them? He eventually put a young researcher in his laboratory, Dr. Molina, onto the question.

To their own shock, the team figured out that the chemicals would break down the ozone layer, a blanket of gas high above the ground that protects the world from devastating levels of ultraviolet radiation. As the scientific evidence of a risk accumulated, the public demanded action — and eventually got it, in the form of a treaty phasing out the compounds.

That’s what climate campaigners would like to see with global warming: the science is established, it shifts public opinion, and that in turn drives policy change to solve the problem. But climate change is a far more complex problem than ozone depletion, and carbon is far more central to the global economy than CFCs ever were. (And as Roger Pielke Jr. pointed out in this 2012 piece for the Breakthrough Institute, the story of science and the ozone layer is a little more complicated than it seems—solid technological alternatives to CFCs were developed well before talk of a global ban gained steam.) It’s hard to see one more blue-ribbon report moving public opinion on climate change, even one that carries the imprimatur of the AAAS.

Which is no reason not to try. (Although it may help to review the large body of social science around climate change inaction.) If you write about global warming day in and day out like I do, it’s easy to become inured to all the warnings, all the dire predictions. But the reality is that we are very much in the early stages of what the scientist Ken Caldeira has called “the great climate experiment,” with the planet projected to warm faster over the next century than it has in 65 million years. Right now the CO2 concentration in the planet’s atmosphere has passed 400 ppm—it was 280 ppm before we began mass burning fossil fuels in the 19th century—and it’s likely to stay that high for the rest of the month, and eventually, forever.

It’s possible that we may get lucky and escape the most catastrophic risks posed by climate change—just as it’s possible you can smoke a pack a day and live until 90. It’s just not likely, and every day that passes without any real effort to curb carbon emissions makes disaster more certain. Sooner or later.

TIME Agriculture

Climate Change Could Cause the Next Great Famine

A warmer climate could reduce the yield of staple crops like maize Photo by John Moore/Getty Images

A new study finds that as the planet warms, yields for important staple crops like wheat could decline sharply.

It’s St. Patrick’s Day, which means the 100 million or so people of Irish descent around the world get the opportunity to celebrate their heritage with song, food and increasingly controversial parades. The sheer size of the Irish diaspora is what has made St. Patrick’s Day an international event—after all, there are only 6.4 million Irish people in Ireland. But it’s also a reflection of the waves of emigration that marked Ireland’s history until recently—emigration that was fueled in part by the great famine of the 1840s. Triggered by a disease that wiped out the potato, Ireland’s staple crop, the Great Famine—an Gorta Mor in Irish—led to the death of a million people and caused another million to flee the country. Without the potato blight, that Irish diaspora—and your local St. Patrick’s Day festivities—might be significantly smaller.

The Great Famine is a reminder of the way failures in agriculture can drive lasting historical change—while leading to immense human suffering. That’s a useful backdrop to a new analysis of the impact global warming will have on crop yields, just published in Nature Climate Change. The news isn’t good: the research, based on a new dataset combining 1,700 previously published studies, found that global warming of only 2°C (3.6°F) will likely reduce yields of staple crops like rice and maize as early as the 2030s. And as the globe keeps warming, crop yields will keep shriveling unless drastic steps are taken to adapt to a changing climate. As Andy Challinor, a professor of climate impacts at the University of Leeds and the lead author of the study, put it in a statement:

Our research shows that crop yields will be negatively affected by climate change much earlier than expected…Furthermore, the impact of climate change on crops will vary both from year-to-year and from place-to-place—with the variability becoming greater as the weather becomes increasingly erratic.

The effect that warming will have on crop yields is one of the most vital areas of climate research—and one of the most vexing. Warming will have different impacts on different kinds of crops in different parts of the world. Warmer temperatures—and the higher levels of carbon dioxide that come with them—may enhance yields in the short term, but as the climate gets hotter and hotter, crops could wilt, especially in the tropics. Changes in precipitation—both prolonged droughts and bigger storms—will hit farmers hard as well. And with 842 million hungry people around the world—and another 2 billion or so who will need to be fed by mid-century as global population grows—accurately nailing down the impact climate change will have on crop yields could make the difference between life and death for vast numbers of people.

The last assessment from the Intergovernmental Panel on Climate Change (IPCC), from 2007, found that temperate regions like Europe would be able to deal with moderate, 2°C warming without much of an impact on crop yields. But the newer research used in the Nature Climate Change study indicates that that conclusion might have been too optimistic, especially as the climate gets warmer and warmer towards the century’s end. Farmers in the tropics will have it particularly difficult—yields from maize could drop by 20% or more if temperatures increase by more than 3°C (5.4°F). And those reductions in yield could hide much bigger year-to-year swings, if the weather gets more extreme. “Climate change means a less predictable harvest, with different countries winning and losing in different years,” said Challinor. “The overall picture remains negative.”

We should have a better sense of where climate research stands on crop impacts later this month, when the IPCC comes out with the next chapter in its newest climate science assessment. And farmers—especially in developed nations—can and likely will adapt to what global warming will throw at them, whether by changing crop planting schedules, shifting to more efficient irrigation or taking advantage of biotechnology. But there’s no guarantee that poor farmers—who already produce less per acre—will be able to keep up. The Great Famine was triggered by the potato blight, but it was intensified by cruel policy on the part of Ireland’s British masters, who ensured that rich stores of grain and livestock were exported out of the country even as Irish citizens starved to death in the streets. As a warming climate makes the difficult task of keeping the world fed even tougher, we can only hope that wiser policy prevents the next famine.

TIME big data

Google’s Flu Project Shows the Failings of Big Data

Google flu trends
GEORGES GOBET/AFP/Getty Images

A new study shows that using big data to predict the future isn't as easy as it looks—and that raises questions about how Internet companies gather and use information

Big data: as buzzwords go, it’s inescapable. Gigantic corporations like SAS and IBM tout their big data analytics, while experts promise that big data—our exponentially growing ability to collect and analyze information about anything at all—will transform everything from business to sports to cooking. It was—no surprise—one of the major themes coming out of this month’s SXSW Interactive conference.

One of the most conspicuous examples of big data in action is Google’s data-aggregating tool Google Flu Trends (GFT). The program is designed to provide real-time monitoring of flu cases around the world based on Google searches that match terms for flu-related activity. Here’s how Google explains it:

We have found a close relationship between how many people search for flu-related topics and how many people actually have flu symptoms. Of course, not every person who searches for “flu” is actually sick, but a pattern emerges when all the flu-related search queries are added together. We compared our query counts with traditional flu surveillance systems and found that many search queries tend to be popular exactly when flu season is happening. By counting how often we see these search queries, we can estimate how much flu is circulating in different countries and regions around the world.
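In spirit, what Google describes is a regression problem: map the share of flu-related queries onto measured flu prevalence, then use fresh query data to estimate the current week. Here is a minimal toy version with made-up numbers (the real GFT model screened millions of candidate query terms and was far more elaborate):

```python
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical weekly training data
query_share = np.array([0.002, 0.004, 0.008, 0.012, 0.009, 0.005])  # flu queries / all queries
ili_rate = np.array([0.010, 0.018, 0.035, 0.055, 0.040, 0.022])     # doctor visits for flu-like illness

# Fit a straight line in logit space, a stand-in for GFT's query-to-flu mapping
slope, intercept = np.polyfit(logit(query_share), logit(ili_rate), 1)

# "Nowcast" this week's flu prevalence from search volume alone
this_week_share = 0.010
print(f"estimated flu rate: {inv_logit(intercept + slope * logit(this_week_share)):.1%}")
```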

Seems like a perfect use of the 500 million plus Google searches made each day. There’s a reason GFT became the symbol of big data in action, in books like Kenneth Cukier and Viktor Mayer-Schonberger’s Big Data: A Revolution That Will Transform How We Live, Work and Think. But there’s just one problem: as a new article in Science shows, when you compare its results to the real world, GFT doesn’t really work.

GFT overestimated the prevalence of flu in the 2012-2013 and 2011-2012 seasons by more than 50%. From August 2011 to September 2013, GFT over-predicted the prevalence of the flu in 100 out of 108 weeks. During the peak flu season last winter, GFT would have had us believe that 11% of the U.S. had influenza, nearly double the CDC’s estimate of 6%. If you wanted to project current flu prevalence, you would have done much better basing your models on 3-week-old CDC case data than you would have using GFT’s sophisticated big data methods. “It’s a Dewey beats Truman moment for big data,” says David Lazer, a professor of computer science and politics at Northeastern University and one of the authors of the Science article.

Just as the editors of the Chicago Tribune believed they could predict the winner of the close 1948 presidential election—they were wrong—Google believed that its big data methods alone were capable of producing a more accurate picture of real-time flu trends than old methods of prediction from past data. That’s a form of “automated arrogance,” or big data hubris, and it can be seen in a lot of the hype around big data today. Just because companies like Google can amass an astounding amount of information about the world doesn’t mean they’re always capable of processing that information to produce an accurate picture of what’s going on—especially if it turns out they’re gathering the wrong information. Not only did the search terms picked by GFT often not reflect incidences of actual illness—thus repeatedly overestimating just how sick the American public was—it also completely missed unexpected events like the nonseasonal 2009 H1N1-A flu pandemic. “A number of associations in the model were really problematic,” says Lazer. “It was doomed to fail.”

Nor did it help that GFT was dependent on Google’s top-secret and always changing search algorithm. Google modifies its search algorithm to provide more accurate results, but also to increase advertising revenue. Recommended searches, based on what other users have searched, can throw off the results for flu trends. While GFT assumes that the relative search volume for different flu terms is based in reality—the more of us are sick, the more of us will search for info about flu as we sniffle above our keyboards—in fact Google itself alters search behavior through that ever-shifting algorithm. If the data isn’t reflecting the world, how can it predict what will happen?

GFT and other big data methods can be useful, but only if they’re paired with what the Science researchers call “small data”—traditional forms of information collection. Put the two together, and you can get an excellent model of the world as it actually is. Of course, if big data is really just one tool of many, not an all-purpose path to omniscience, that would puncture the hype just a bit. You won’t get a SXSW panel with that kind of modesty.
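A minimal sketch of that pairing, with hypothetical weekly series: rather than trusting the search signal alone, learn how much weight the search-based estimate and the lagged CDC report each deserve. (This illustrates the idea; it is not the Science authors' actual model.)

```python
import numpy as np

# Hypothetical weekly series
search_estimate = np.array([0.030, 0.055, 0.080, 0.110, 0.070, 0.040])  # GFT-style nowcast, running hot
cdc_lagged = np.array([0.015, 0.025, 0.040, 0.060, 0.055, 0.030])       # CDC data, ~3 weeks old
actual_ili = np.array([0.020, 0.032, 0.048, 0.065, 0.050, 0.028])       # what the CDC later reported

# Least-squares weights for combining the two predictors, plus a bias term
X = np.column_stack([search_estimate, cdc_lagged, np.ones(len(actual_ili))])
(w_search, w_cdc, bias), *_ = np.linalg.lstsq(X, actual_ili, rcond=None)

print(f"search weight: {w_search:.2f}, lagged-CDC weight: {w_cdc:.2f}, bias: {bias:.3f}")
```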

A bigger concern, though, is that much of the data being gathered in “big data”—and the formulas used to analyze it—is controlled by private companies that can be positively opaque. Google has never made the search terms used in GFT public, and there’s no way for researchers to replicate how GFT works. There’s Google Correlate, which allows anyone to find search patterns that purport to map real-life trends, but as the Science researchers wryly note: “Clicking the link titled ‘match the pattern of actual flu activity (this is how we built Google Flu Trends!)’ will not, ironically, produce a replication of the GFT search terms.” Even in the academic papers on GFT written by Google researchers, there’s no clear contact information, other than a generic Google email address. (Academic papers almost always contain direct contact information for lead authors.)

At its best, science is an open, cooperative and cumulative effort. If companies like Google keep their big data to themselves, they’ll miss out on the chance to improve their models, and make big data worthy of the hype. “To harness the research community, they need to be more transparent,” says Lazer. “The models for collaboration around big data haven’t been built.” It’s scary enough to think that private companies are gathering endless amounts of data on us. It’d be even worse if the conclusions they reach from that data aren’t even right.

TIME climate change

A Tale of Two Winters

The winter was brutal in Midwestern cities like Chicago Scott Olson/Getty Images

If you lived east of the Rockies, you froze this winter. But the other side of the country experienced unusual warmth—and sometimes record-high temperatures

As I write this in New York, it’s 25 degrees Fahrenheit (-3.9 Celsius)—about 21 degrees F below normal for Mar. 13—and frankly, we’re all sick of this. For much of the eastern half of the country, 2013-14 has been the winter that will never end. And now the numbers are in from the National Oceanic and Atmospheric Administration (NOAA), and we’re mostly right: it’s been very cold. But probably not as cold as you think.

The average temperature for the continental U.S. from December to February was 31.3 F (-0.4 C), 1.0 F (0.55 C) below the 20th century norm. That’s hardly record-breaking—it’s only the 34th coldest winter in recorded U.S. history—but it’s a lot colder than last winter, when the average temperature was 34.3 F (1.3 C), which helps explain why it felt so frigid. Even so, the continental U.S. experienced a colder winter as recently as 2009-2010, well before anyone had heard of the term “polar vortex,” and back when only hurricanes—not snowstorms—were given names.
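One bit of arithmetic worth flagging, since the figures above mix two kinds of conversion: an absolute Fahrenheit reading converts to Celsius as (F - 32) / 1.8, but a temperature difference converts as F / 1.8, because the 32-degree offset cancels. A quick sketch:

```python
def f_to_c(temp_f):
    """Convert an absolute Fahrenheit temperature to Celsius."""
    return (temp_f - 32) / 1.8

def f_diff_to_c(delta_f):
    """Convert a Fahrenheit temperature *difference* to Celsius."""
    return delta_f / 1.8

print(round(f_to_c(31.3), 1))      # -0.4, this winter's average in Celsius
print(round(f_diff_to_c(1.0), 2))  # 0.56, the anomaly below the 20th century norm
```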

How cold you were this winter depended largely on where you were in the U.S. If you lived east of the Rockies—home to significantly more than half the U.S. population and sometimes, it seems, virtually all the U.S. media—you experienced below-average temperatures. Midwesterners had it particularly bad—most of the area north of the Ohio River was 7 to 15 F (4 to 8 C) below normal, which helps explain why at their peak in early March 91% of the Great Lakes were frozen over. It was nasty for the Northeast as well, where temperatures were largely cooler than normal, especially in the western regions near the lakes (pity the citizens of Erie, Pennsylvania, where temperatures were nearly 5 F, or 2.8 C, below normal for the winter). From Washington D.C. to Caribou, Maine, it seems that not a single town in the Northeast had above-normal temperatures this winter.

That wasn’t the case in the West, though. California—already in an incredibly severe drought—had the warmest winter on record, with average temperatures of 48.0 F (8.9 C), some 4.4 F (2.4 C) above the 20th century average and nearly 1 F (0.55 C) hotter than the previous warmest winter, in 1980-81. That’s a reminder of just how big the U.S. is, and how variable weather can be—which brings us to climate change. Scientists are going to have fun figuring out just what was behind phenomena like the polar vortex (one theory is that higher temperatures in the Arctic could affect the jet stream, allowing colder Canadian air to sweep down to the East). But a nasty winter in New York City no more disproves climate change than an all-time hot winter in California clinches the case for global warming. Climate change is a global phenomenon and a long-term one, which is why icy temperatures along the East Coast in January are a lot less important than the fact that the global land and ocean surface average temperature for January was 1.17 F (0.65 C) above the 20th century norm, making it the fourth-warmest January on record globally.

Barring even weirder weather, winter should be giving way to spring even in the coldest states in the U.S.—finally. But with scientists warning of a possible El Niño later this year—which usually brings hotter temperatures—we may end up looking back on the polar vortex with fondness as the dog days of August drag on. Maybe.
