TIME animals

Study: Chimps Learn How to Use New Tools From Other Chimps

A chimpanzee holds a lettuce at the zoo in Abidjan on June 12, 2014. Sia Kambou—AFP/Getty Images

This means chimps have a prerequisite for human culture

A new study in PLOS Biology found that chimpanzees can learn group-specific behavioral traits from each other, widely considered a prerequisite for human-style culture. The results suggest the foundations of human culture can be traced back to our common ancestry with apes.

Researchers in Uganda noticed that a few chimps in a group started using new kinds of sponges to drink water. Usually, chimps use clumps of leaves to extract the water, but the team observed one chimp using moss instead. Once the rest of the group saw him using moss, seven other chimps made and used moss sponges over a six-day period. Another variation on the leaf-sponge (re-using an old leaf sponge) also spread through the group.

“Basically, if you saw it done, you learned how to do it, and if you didn’t you didn’t,” lead researcher Dr. Catherine Hobaiter told the BBC. “It was just this wonderfully clear example of social learning that no one had [witnessed] in the wild before.”

TIME Biology

Meet the Fish That Can’t Get Jet-Lagged

Who cares about the time? A blind fish needs no internal clock. Reinhard Dirscherl—Getty Images/WaterFrame RM

There's a reason you get sleepy at night: it's dark out. Now a little blind fish helps explain all that

Birds have ‘em. Bees have ‘em. Even bacteria have circadian rhythms, the ramping up and slowing down of internal functions that signals organisms to be more or less active, depending on the time of day. Humans have circadian rhythms too—and when they’re disrupted by time-zone changes, lack of sleep or working the night shift, the result can be an increased risk of heart attacks, depression, diabetes, weight gain and more.

For eyeless Mexican cave fish, however, that's no problem, according to a new study in the journal PLOS ONE. “Some organisms have stronger circadian rhythms, and some weaker,” says lead author Damian Moran, of the private company Plant and Food Research, based in New Zealand. “But these fish have none at all.”

The finding, says Moran, “just fell into our laps.” He and his colleagues were actually studying the energy costs of vision—that is, how much of the body’s resources evolution thinks it’s worth devoting to having the advantage of being able to see. The Mexican tetra fish is especially useful for such studies because it comes in both a surface-dwelling subspecies and several versions that live in caves, in perpetual darkness (the latter, says Moran, “look a little like Gollum”).

In order to measure the energy cost of having vision, the scientists put both versions of tetra into a kind of fish treadmill, where they could swim constantly upstream while instruments measured their oxygen intake, a gauge of their energy use. To cover all their bases, the scientists tested both types of fish under their most familiar conditions—with a day-night cycle, and in total darkness.

The scientists were looking to measure the differences in energy use between the fish with eyes and those without—but they noticed something else as well. “The surface-dwellers,” says Moran, “had a typical increase of oxygen use during the day, and a decrease during the night. Whereas the cave fish showed a flat line day and night.”

It makes sense: an animal that lives in changing conditions of light and darkness needs to be more active when its food sources are more active, whereas a creature that never sees the light of day probably doesn’t care. Even so, since many organisms that live in utter darkness are descended from surface-dwellers, they maintain at least a weak circadian rhythm. But the cave-dwelling tetra have none, and because they don’t have to ramp their metabolism up and down, they use 27% less energy overall than their daytime-nighttime cousins.

While this is the first such animal ever found, says Moran, the eyeless tetra might actually be just the tip of a gigantic biological iceberg. “Most of the Earth’s biomass lives in areas that never see light at all. I suspect that when we look in the deepest part of the sea or deep underground,” he continues, “we’ll find many organisms that have no circadian rhythms.”

Because after all, what’s the point?

TIME Science

If Synthetic Biology Lets Us Play God, We Need Rules

MOLEKUUL—Brand X/Getty Images

Zocalo Public Square is a not-for-profit Ideas Exchange that blends live events and humanities journalism.

How can we prevent these technologies from falling into the wrong hands?

Synthetic biology has been called “genetic engineering on steroids.” It’s also been described as so difficult to pin down that five scientists would give you six different definitions. No matter how this emerging field is characterized, one thing is clear: the ability to synthesize and sequence DNA is driving scientific research in brand-new and exciting directions.

In California, scientists have created a breakthrough antimalarial drug—baker’s yeast made in a lab that contains the genetic material of the opium poppy. The drug has the potential to save millions of lives—and to ensure drug production that is independent of poppy flowers. At MIT, researchers are working on a way for plants to “fix” their own nitrogen, so farmers will no longer need to use artificial fertilizers. And, in the far future, scientists and NASA researchers are looking to create a “digital biological teleporter” to bring to Earth life forms detected on Mars via a sort of biological fax.

What should we be worrying about in this moment of tremendous, and potentially cataclysmic, scientific discovery? In advance of the Zócalo/Arizona State University event “How Will Synthetic Biology Change the Way We Live?,” we asked experts the following question: Soon we’ll be able to program DNA with the same ease we program computers. What new responsibilities will be imposed on us?

1) Stepping ahead of technology to imagine the world we want to live in

Synthetic biology sees life as an engineering project—a repertoire of processes that can be reprogrammed to produce technologies and products. It envisions powerful new tools for constructing biological parts. Many in synthetic biology celebrate technologies like automated DNA synthesis as agents of “democratization,” potentially allowing easy and widespread access to custom-made DNA. According to their vision, these technologies will enable bioengineers to freely experiment with living systems, accelerating progress in innovation and producing enormous benefits for society.

But there are risks. The question is often raised: How can we prevent these technologies from falling into the wrong hands? DNA synthesis machines cannot distinguish between tinkerers and terrorists. Though this question is crucially important, it is revealing for what it leaves unasked. Why are synthetic biology’s tinkerers presumed to be the safe hands for shaping the technological future? Why do we defer to their visions and judgments over those that we collectively develop?

We tend to focus governance not on projects of innovation, but on how resulting technologies might be used in society. By attending primarily to technology’s “misuses,” “impacts,” and “consequences,” we confine ourselves to waiting until new problems—and responsibilities—are imposed upon us. Science is empowered to act, but society only to react. This leaves unexamined the question of who gets to imagine the future and, therefore, who has the authority to declare what benefits lie ahead, what risks are realistic, and what worries are reasonable and warrant public deliberation.

Our imaginations of the future shape our priorities in the present. It is a task of democracy, not science, to imagine the world we want to live in. Genuine democratization demands that we embrace this difficult task as our own, rather than wait to react to the responsibilities that emerging technologies impose upon us.

Benjamin Hurlbut is an assistant professor of biology and society in the School of Life Sciences at Arizona State University. Trained as a historian of science, he studies the intersection of science, politics, and ethics, with a particular focus on governance of emerging biotechnologies in the United States.

2) Addressing the gap between scientific innovation and human need

When it comes to programming DNA, the greatest challenge we face isn’t how to do it but rather for what purpose. How will we use the molecular tools we develop? The much-heralded promise is that genetic technologies will reveal clues to more effective treatment of disease. A serious challenge to making good on this promise is recognizing the social context—the values, beliefs, and structure in which these tools are called into being—that informs how scientists, policymakers, and the public prioritize their use.

We can start by asking why cutting-edge biotechnologies have yet to solve our most intractable and dire global health problems. We assume that these new tools can be used to identify molecular targets to develop vaccines for neglected diseases disproportionately affecting low-resource countries. And yet, a 10/90 gap persists in which a mere 10 percent of research is devoted to 90 percent of disease burden worldwide. In a market where male baldness and cellulite reduction take precedence over diarrheal diseases, malaria, and tuberculosis, we need creative economic solutions to bridge the widening expanse between scientific innovation and human need.

Our social agenda will inform not only what is programmed into DNA, but also who will ultimately benefit from this new technology. Will our efforts bolster advantage among the select few or alleviate the suffering of the invisible many? The answer to that question depends upon whether we decide to leverage our shiny, new tools to address head-on the very old and obstinate problem of inequity.

Sandra Soo-Jin Lee, Ph.D., is a medical anthropologist and senior research scholar at the Center for Biomedical Ethics at Stanford University School of Medicine. Her current book project is entitled American DNA: Race, Justice and the New Genetic Sciences.

3) Rethinking DNA as a building tool

For much of the late 20th century, scientists, writers, and the general public imagined DNA as information. It was code in the form of a chemical, a molecule that directed our development and determined our destiny. This discourse served to organize, guide, and inform the research agenda of scientists for decades.

DNA, as any high school science student knows, exists as a double helix. Its structure is made of four different types of nucleotide subunits—adenine, cytosine, guanine, and thymine. The exact sequence of an organism’s DNA is determined by what scientists call complementary base pairing: adenine always pairs with thymine; guanine connects with cytosine. This predictability allows scientists to synthesize strands of artificial DNA—a technique perfected in the 1980s—which, when properly treated in the lab, can link up to form the desired structure.
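The pairing rule described above is mechanical enough to capture in a few lines of Python. This is a toy illustration (not from the article or the research it describes) of how one strand of DNA fully determines its complement:

```python
# Complementary base pairing: adenine (A) always pairs with thymine (T),
# and guanine (G) always pairs with cytosine (C).
PAIRS = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand: str) -> str:
    """Return the strand that would pair with the given DNA sequence."""
    return "".join(PAIRS[base] for base in strand)

print(complement("ATGC"))  # -> TACG
```

Because each base has exactly one partner, applying the rule twice returns the original sequence, which is the predictability that lets scientists design synthetic strands that link up into desired structures.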

Today, a community of scientists has adopted a different way of thinking about DNA. No longer just an information-containing biomolecule, DNA is now used as a building material by chemists, computer scientists, and molecular biologists. Starting with simple two-dimensional geometric shapes, DNA nanotechnologies can now fabricate complex three-dimensional objects capable of performing elementary mechanical functions and computations.

DNA nanotechnology is one part of the growing field of synthetic biology. What scientists will be able to do with the rapidly increasing capabilities is hard to project. To date, successes with DNA nanotechnology have included the construction of increasingly complex three-dimensional shapes, carrying out massively parallel computations, and building “DNA walkers” that can traverse a substrate and deliver “cargoes” of nanoscale particles.

For a historian of science, what is fascinating about this evolving field is this new perspective on DNA. We can no longer see it as just a blueprint for life … we now must also think of it as a building material. What kind of future will we build?

Patrick McCray is a professor in the history department at the University of California, Santa Barbara and the author, most recently, of The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future.

4) Ensuring careful consideration of potential impacts

In the decades just before the turn of the 20th century, there was great hope among researchers, lawmakers, and the public that our (then) new understanding of genetics could help to alleviate disease. It was from this promise that the world witnessed the emergence of—and later the horrors of—institutionalized eugenics. Synthetic biology offers similar promise and requires vigilance on the part of those developing the technology to ensure its careful implementation.

Scientists and policymakers have a responsibility to think holistically about how synthetic biology could affect individuals as well as populations, societies, and the human species as a whole. If synthetic biology is carelessly used to create genetic homogeneity as a means to cure genetic disorders, it could be detrimental. From an evolutionary perspective, genetic diversity has been key to the success of our species as it offers alternate solutions to environmental stressors. Alternatively, synthetic biology could also be used as a tool to create new types of genetic variations that, in the right environment, could ensure the survival of our species.

The development of this technology should be driven by the same ethical tenets that drive all current scientific research: respect, beneficence, and justice. The promise of synthetic biology rekindles hope in the discovery of a kind of genetic panacea. But the advent of this technology should, at the very least, solidify our resolve not to repeat the errors of the eugenicists of the past.

Jada Benn Torres is an assistant professor of anthropology at the University of Notre Dame. As a genetic anthropologist, her research interests include genetic ancestry, human variation, and women’s health.

5) Developing governance as innovative as our science and technology

Synthetic biology will present us with an ever-growing number of choices. Choices about what we eat. Medicines we take. Fuels we use. Products we buy. Clothes we wear. Pets we own. Enhancements to our bodies and minds. These new choices will provide us with many important benefits—but they will also confront us with challenging dilemmas.

Some choices made possible by synthetic biology will affect only individuals and their families, while others will have a much wider reach. For example, buying goods made by synthetic biology may displace workers in other nations who make the same products using older technologies or raw materials. When we enhance our own capabilities using synthetic biology, we put pressure on others to make similar enhancements or risk being left behind.

There may also be safety and health risks from the individual choices we make. If people create new organisms in their garage or basement using DIY biology, they may inadvertently create pathogens that put others at risk in their neighborhoods, cities, or even beyond.

Individual choices empowered by synthetic biology with the potential to adversely affect others will put additional burdens and pressures on our societal institutions to make more (and better) governance decisions.

And that is where the problem and danger really lie: At the very moment that new technologies like synthetic biology require us to make more complex decisions, our societal decision-making institutions have never been more broken. Our regulatory agencies are overwhelmed, under-funded, and ossified; our legislatures are gridlocked by partisan bickering and overloaded with issues and information; and our courts are glacial and lacking in scientific competency. We urgently need new innovations in institutions and governance to match the rapid new innovations in the science and technology of synthetic biology.

Gary Marchant is Lincoln Professor of Emerging Technologies, Law and Ethics and faculty director of the Center for Law, Science & Innovation at Arizona State University. He teaches and researches governance of emerging technologies.

This originally appeared on Zocalo Public Square.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Research

Quiz: Can You Answer 5th-Grade Science Questions?

Most Americans lack a basic understanding of science

A new survey on scientific literacy from the Center for Accountability in Science found that most respondents failed to correctly answer questions designed for a fifth-grade science class.

“Most Americans are not armed with the basic facts about science,” said Dr. Joseph Perrone, chief science officer at the Center for Accountability in Science, in a statement. “This alarming lack of scientific literacy makes it easier for the public to be duped by the scary headlines and junk science.” You can get the results of the survey here.

Take our quiz to see if you can answer fifth-grade-level science questions.

TIME animals

Your New Favorite Winter Activity: Squirrel Juggling

Grey Squirrel Robert Trevis-Smith—Getty Images/Flickr RF

For the record, don't juggle squirrels

Tired of this unending winter and ever-elusive spring? Well, here’s a new way to while away those winter hours: squirrel juggling.

A recent article in The Atlantic Cities explained that certain squirrels hibernate so deeply that you could take a page out of Alice in Wonderland and play croquet with the little knocked-out fur balls. It’s a fact proven by science. Specifically, a scientist named Hilary Srere. “I did juggle three of them for my niece and nephew when they were younger, because they are just little fluffy balls!” Srere said. “They don’t wake up, they just don’t.”

To be clear, we do not recommend, encourage or condone juggling hibernating squirrels; it’s just interesting to note, scientifically speaking, that one could (if one were a budding squirrel-hating sociopath).

If one were going to juggle a squirrel (again: don’t), there are a few things to note. First, the grey squirrels that you see digging up nuts in winter do not hibernate. So if you see one that is unconscious, it’s either been KO’d by a mini Manny Pacquiao or it’s just asleep. Sleeping squirrels wake up and will bite. Hibernating squirrels are ground squirrels that live in the central and western United States and can go into a coma-like state for up to six months. Second, if you’re considering taking up squirrel juggling as a party trick, be aware that a surprising number of squirrels carry bubonic plague.

Not much of a party trick now, is it?

[Via Atlantic Cities]

MORE: College Students Go Nuts over Squirrels

MORE: Orphaned Squirrel Now Lives in This Girl’s Ponytail

TIME Biology

Virus Resurrected After Chilling in Siberia for 30,000 years

Experts say climate change could allow "frozen viruses" to reactivate. Frank Cezus—Getty Images

Experts say the contagion does not pose a danger to humans or animals

French scientists are celebrating after successfully revitalizing an ancient virus that had been lying dormant for 30,000 years in Siberian permafrost, according to the BBC.

Measuring 1.5 micrometers in length, the Pithovirus sibericum strain is the largest virus ever discovered.

“This is the first time we’ve seen a virus that’s still infectious after this length of time,” said Professor Jean-Michel Claverie, from the National Centre of Scientific Research (CNRS) at the University of Aix-Marseille in France.

Researchers say the contagion does not pose a danger to humans or animals; rather, it specializes in attacking single-cell amoebas.

“It comes into the cell, multiplies and finally kills the cell. It is able to kill the amoeba — but it won’t infect a human cell,” said CNRS’s Dr. Chantal Abergel.

However, experts admit other potentially harmful viruses could reactivate and spread if more frozen ground becomes exposed from increasing global temperatures.

[BBC]

TIME Biology

How to Know If Someone’s Really Dead

Walter Williams in the hospital in early March. Courtesy of Eddie Hester

A close call at a funeral home has anyone who plans to die one day worrying

Correction added Feb. 28, 2014

Dead is dead—except when it isn’t. That’s the lesson 78-year-old Walter Williams of Holmes County, Miss., learned late Wednesday night when he woke up in a body bag on an embalmer’s table, a wee bit more alive than the coroner had declared him to be. Williams, by all accounts, was the victim of bad luck, a sputtering pacemaker and a coroner who maybe hadn’t read the How To Know Someone’s Really Dead chapter when the rest of the class was studying it.

So how often does this happen and what are the odds that you will ever find yourself Zip-Locked for freshness when you’ve still got a bit of life in you?

Pronouncing someone dead has always been an inexact art. The tradition of the wake—or at least a day or two’s mourning period before the funeral—began as a way to give a body a fighting chance to show if it was alive. “The point was to make sure the dead guy is indeed a dead guy,” says Thomas Lynch, a funeral director and best-selling author of The Undertaking: Life Studies From the Dismal Trade, upon which the TV series Six Feet Under was based. “The living have been getting mistaken for the dead for a long time.”

But that was then (OK, if you’re Walter Williams, that was Wednesday) and methods have improved. When someone dies in a hospital, attending physicians do what’s known as “running a tape,” hooking the suspected deceased up to equipment that reads brain waves, heartbeat and respiration. When things flatline—and stay that way—you’ve probably got yourself a body. Paramedics and other first responders have portable equipment that does the same thing, with the results getting beamed back to a hospital for confirmation.

Further tests make things more certain still. Bedside ultrasound can confirm lack of heart activity, says Dr. Robert Glatter of the department of emergency medicine at New York’s Lenox Hill Hospital. Brain death can be confirmed by the absence of brainstem reflexes, among other things, as well as the “doll’s eye test,” in which the head is moved from side to side with the eyes open. When the brain is dead, the eyes will not fix on the person in front of them, and will instead simply move with the head.

So what went wrong in Williams’ case? Everything. After he appeared to have suffered heart failure, the local coroner was duly called, and, according to Sheriff Willie March, did a less exacting job than he might have. “The coroner checked for wounds, didn’t get a pulse, and declared he had crossed over,” says March.

In some respects the rules were obeyed, since laws in all 50 states forbid a funeral home to take possession of a body until an authorized medical officer certifies the death. The problem is, not every state has the same definition of what such a person is.

“A coroner is not a medical officer,” says Lynch. “Often it’s just the local undertaker or the local favorite of whoever is in charge.” That may well not have been the case in the current mix-up, but the betting is that the standards will be tightened in the future. Until then, if you must die—and, says Lynch, “the numbers are right around 100% on that”—at least do it outside of Holmes County.

The reassuring news for most of us: The chances of a mix-up happening are exceedingly slim.

-with reporting from Charlotte Alter

An earlier version of this story misspelled the name of the Lenox Hill Hospital emergency care physician. He is Dr. Robert Glatter, not Glattner.

TIME

How Life Began: New Clues From New Worlds

Europa, a moon of Jupiter, appears as a thick crescent in this enhanced-color image from NASA's Galileo spacecraft, which orbited Jupiter from 1995 to 2003. Universal History Archive/Getty Images

How simple chemistry turned into biology has always been a mystery, but some smart people have some intriguing ideas

The odds that the universe is bursting with life seem to be getting better all the time. Astronomers recently announced that there could be an astonishing 20 billion Earthlike planets in the Milky Way—and that’s if you’re limiting the pool to planets orbiting stars like the Sun. If you add the small, reddish stars known as M-dwarfs, which also harbor planets, the number is even greater. Within our own Solar System, meanwhile, the evidence for a plausibly life-friendly ocean on Jupiter’s moon Europa is stronger than ever, and the Curiosity rover has confirmed that some kinds of bacteria could have thrived in Mars’s Gale Crater billions of years ago. On a more universal scale, scientists know for a fact that two of the essentials for life—water and carbon—can be found literally everywhere.

How abundant life actually is, however, hinges on one crucial factor: given the right conditions and the right raw materials, what is the mathematical likelihood that life will actually arise? If it’s a 50-50 proposition, then given the vast amount of available real estate, biology would have to be popping up all over the place. But if it’s a one-in-a-trillion shot, we could indeed be all alone in the vastness of space. To date, despite more than a half-century of trying, nobody has managed to figure out how life on Earth began. Without knowing the mechanism by which inanimate chemistry assembled and bestirred itself, admits Andrew Ellington, of the Center for Systems and Synthetic Biology at the University of Texas, Austin, “I can’t tell you what the probability is. It’s a chapter of the story that’s pretty much blank.”

Given that rather bleak-sounding assessment, it may be surprising to learn that Ellington is actually pretty upbeat. But that’s how he and two colleagues come across in a paper in the latest Science. The crucial step from nonliving stuff to a live cell is still a mystery, they acknowledge, but the number of pathways a mix of inanimate chemicals could have taken to reach the threshold of the living turns out to be many and varied. “It’s difficult to say exactly how things did occur,” says Ellington. “But there are many ways it could have occurred.”

(MORE: Unknown Force Kicks Stars Out of Milky Way. Really.)

The first stab at answering the question came all the way back in the 1950s, when chemists Stanley Miller and Harold Urey passed an electrical spark through a beaker containing methane, ammonia, water vapor and hydrogen, thought at the time to represent Earth’s primordial atmosphere. When they looked inside, they found they’d created amino acids, the building blocks of proteins.

That experiment is now considered a dead end, since the atmosphere probably didn’t look like that after all, and also since the steps from amino acids to life turned out to be hellishly difficult to reproduce—so difficult that it’s never been done. But Ellington sees things differently. “It was a monster achievement,” he says, “and since then we’ve learned a truckload.”

Scientists have learned so much, in fact, that the number of places life might have begun has grown to include such disparate locations as the hydrothermal vents at the bottom of the ocean; beds of clay; the billowing clouds of gas emerging from volcanoes; and the spaces in between ice crystals.

(MORE: Why Dust is the Most Important Stuff in the Universe)

The number of ideas about how the key step from organic chemicals to living organisms might have been taken has multiplied as well: there’s the “RNA world hypothesis” and the “lipid world hypothesis” and the “iron-sulfur world hypothesis” and more, all of them dependent on a particular set of chemical circumstances and a particular set of dynamics and all highly speculative.

Worse, however things played out, all evidence of the process has long since vanished: the very first cells have left no traces, and the environments that nurtured them have disappeared as well. The best scientists can hope to do is to create a proto-cell in the lab.

“Maybe when they do,” says Ellington, “we’ll all do a face-plant because it turns out to be so obvious in retrospect.” But even if they succeed, it will only prove that a manufactured cell could represent the earliest life forms, not that it actually does. “It will be a story about what we think might have happened, but it will still be a story.”

The story Ellington and his colleagues have been able to tell already, however, is a reason for optimism. We still don’t know the odds that life will arise under the right conditions. But the underlying biochemistry is abundantly, ubiquitously available—and it would take an awfully perverse universe to take things so far only to shut them down at the last moment.

(IMAGES: Space Photos: 45-Year-Old Footprints and More)
