TIME Technologizer

This Animated GIF of a 3D Bear Has a Secret

Spoiler: He's not as digital as he looks

I’ve become obsessed with the animated GIF below, which I discovered over at Amid Amidi’s Cartoon Brew. Stare at it, and you might become obsessed, too, at least for 30 seconds or so.

Bear Walking (Blue Zoo)

It looks like something I might have seen as part of a 3D animation demonstration by a computer scientist when I attended the SIGGRAPH conference back in 1989. But here’s the remarkable thing: It isn’t computer animation. That bear may be made out of polygons, but he isn’t made out of bits. He’s a physical object–or, more precisely, 50 of them.

Two London-based companies, DBLG and Blue Zoo, created the animation, Bears on Stairs, which did begin with a computer-designed ursine protagonist. But rather than just rendering a bunch of frames, the companies 3D-printed the sequence as 50 physical models. Then they photographed the models as a stop-motion sequence, using the same basic technique that studios such as Rankin/Bass relied on long before computers had anything to do with animation.

Here’s a behind-the-scenes video:

As Amid points out, the idea of using 3D printing to meld computer and stop-motion animation isn’t new. Laika (the studio behind Coraline and the upcoming Boxtrolls) is already doing it. But normally, the goal is for it all to be so seamless that the viewer doesn’t know or care that computers were used. What’s clever about “Bears on Stairs” is that it evocatively flaunts its use of computers–so much so that almost anybody would assume that it was a purely digital production.

TIME Technologizer

This 1981 Computer Magazine Cover Explains Why We’re So Bad at Tech Predictions

Robert Tinney’s cover for the April 1981 issue of Byte magazine (Internet Archive)

Thirty-three years later, artist Robert Tinney's concept smartwatch is worth at least a thousand words

If you were passionate about personal computers between the mid-1970s and mid-1980s, the odds were high that you were a reader of Byte magazine. And if you read Byte, you were surely a fan of Robert Tinney, the artist whose cover paintings were one of the magazine’s signature features for years.

Tinney’s work was imaginative, technically superb (he is a master of the airbrush) and, sometimes, very funny. Byte lost a little bit of its soul when the publication started phasing out his work in favor of standard-issue photos of standard-issue computers.

While rummaging around the web last week looking for something else, I came across his cover for Byte‘s April 1981 issue at the Internet Archive. I immediately shared it on Twitter, where it got about as enthusiastic a response as anything I’ve ever tweeted. There it is at the top of this post, with the artist’s permission.

This is, obviously, an amusing image. The notion that a wrist computer might have a floppy-disk drive, a QWERTY keyboard and a tiny text-based interface was a good joke in 1981, and an even better one when seen through the lens of nostalgia. (If you’re tempted to assume that the image was actually a serious depiction of what a future wrist computer might look like–well, no. Inside the magazine, which contained only a brief editorial about future computers, the editors pointed out that it wasn’t a coincidence that this happened to be the April issue of Byte.)

But I also find this art–which Tinney still offers as a limited-edition print–to be quite profound, on multiple levels. Here’s why.

First, it reminds us that the smartwatch is not a new idea. Even in 1981, tech companies had been trying to build them for a while: Tinney’s creation is a pseudo-logical extension of ideas expressed in real devices such as HP’s HP-01, a “personal information assistant” introduced in 1977. (Of course, people have been obsessed with the notion of strapping advanced communications gadgetry to their wrists since at least 1946, when Dick Tracy got his wrist radio.)

Here we are in the 21st century. The tech industry has lately made progress on the smartwatch idea, but no one has completely solved the problem, which is why smartwatches still aren’t part of everyday life. You could do a “Future Computers” cover today and put a concept smartwatch on it, just as Byte did in 1981.

The Pebble Steel smartwatch (Pebble)

Second, for all the ways technology has radically improved in the past 33 years, the current crop of smartwatches actually has a lot in common with Tinney’s concept. The industry is still struggling with questions of display technology, input and storage, and one of the best efforts so far, the Pebble Steel, even looks eerily like the Tinney watch, sans QWERTY.

But most of all, the Tinney watch is a wonderful visual explanation of why human beings–most of us, anyhow–aren’t very good at predicting the future of technology. We tend to think that new products will be a lot like the ones we know. We shoehorn existing concepts where they don’t belong. Oftentimes, we don’t dream big enough.

(One classic example: When it became clear that Apple was working on an “iPhone,” almost all the speculation involved something that was either a lot like an iPod, or a lot like other phones of the time. As far as I know, nobody expected anything remotely like the epoch-shifting device Apple released.)

Tinney’s painting is a gag, but it’s not that far removed from what a serious futurist might have predicted in 1981. It’s a PC of the era, downsized to fit the wrist.

Back then, a pundit who started talking about gigabytes of storage or high-resolution color screens or instant access to computers around the world or built-in cameras and music players would have been accused of indulging in science fiction. Even though some of the earliest ancestors of modern interfaces existed in laboratories in places such as Xerox’s Palo Alto Research Center, I don’t know if it would have even occurred to anyone to envision them being built into a watch.

And today? Much of the thinking about smartwatches involves devices that look suspiciously like shrunken smartphones. That’s what we know. But I won’t be the least bit surprised if the first transcendently important wearable device of our era–the iPhone of its category–turns out to have only slightly more in common with a 2014 smartphone than it does with a 1981 computer.

Bonus material: Here’s a 1986 Robert Tinney interview by my friend Benj Edwards, illustrated with additional fabulous Byte covers.

TIME

This Is How You Resurrect America’s Dying Malls


The American mall of the future may look a lot like the kinds of public markets traditionally found in towns and cities in the developing world.

The recession and the rise of e-commerce have left many U.S. shopping malls nearly vacant or completely dead. A new mall hasn’t been built in the United States since 2006, and growth in brick-and-mortar shopping centers has slowed to a crawl. Business owners and mall managers are looking for ways to bring their properties back to life — and increasingly they see Hispanics as a vital part of the solution.

One in every six Americans is Hispanic, up from one in sixteen in 1980. The Hispanic population in the U.S. today is over 52 million and counting. And with a buying power of $1.2 trillion, Hispanic consumers are fast becoming a valued prize to be won by American businesses.

But some question whether Hispanic consumers are really the answer. The children of immigrants are assimilating fast, breaking free of their parents’ old-world values. Many don’t want to shop in Hispanic malls or listen to mariachi music. Will Hispanics lose their economic clout as their children evaporate into the American cultural cloud, or will the Hispanic consumer become the new American consumer?

TIME Innovation

Atom-Photon ‘Switch’ Heralds Quantum Networking Advances

Scientists have developed a new method of trapping rubidium atoms in a lattice of light, which could help the development of quantum computing. Christine Daniloff / MIT

Researchers at MIT and Harvard have managed to "trap" individual atoms using a lattice of light, a feat that could be a major step toward quantum computing networks.

Vetting quantum computing “breakthroughs” tends to be, as TIME’s Lev Grossman sagely notes, a bit like quantum computing itself: maybe yes, maybe no, or maybe yes and no simultaneously.

That notion of things existing in multiple states at once is called quantum superposition (the prefix “super” meant in its “above, over, beyond” sense), and it’s the foundation upon which quantum computing’s promise of insanely fast, classical-computer-trouncing probability engines rests. It involves post-digital units of quantum information known as qubits, or quantum bits, which, instead of existing in a digital on or off state, can be in superposition, or both states at the same time. If that sounds weird, it’s because it is. Like anything else scrutinized at nanoscopic levels, it defies encapsulation.
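For the notation-minded, superposition has a compact textbook form. The equations below are the standard formulation found in any quantum computing primer, not anything specific to the MIT/Harvard work discussed here:

```latex
% A qubit is a unit vector in a two-dimensional complex space; unlike a
% classical bit, it can carry weight on both basis states at once.
\[
  \lvert\psi\rangle = \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
  \qquad \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1
\]
% Measurement collapses the state: outcome 0 with probability |alpha|^2,
% outcome 1 with probability |beta|^2. A register of n qubits can hold
% amplitude on all 2^n basis states simultaneously, the root of quantum
% computing's promised parallelism:
\[
  \lvert\Psi\rangle = \sum_{x=0}^{2^{n}-1} c_{x}\,\lvert x\rangle,
  \qquad \sum_{x} \lvert c_{x}\rvert^{2} = 1
\]
```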

Traditional digital computers convey the illusion of multitasking by switching rapidly between computational states. Chipmakers like Intel, AMD and IBM have been ratcheting up the stakes in that seesaw dance for decades, to the point that today’s fastest supercomputer, the $2.4 billion Tianhe-2 in China, built on Intel’s multicore “3D” processor technology, can crunch an unfathomable 33.86 petaflops, putting it somewhere in the vicinity of what researchers reportedly estimated just a few years ago would be necessary to simulate the human brain.

Quantum computers would be able to perform exponentially greater computational feats still by abandoning on-off calculative constraints as a matter of form — a kind of ultimate parallelism, if you’ll forgive that back-of-the-envelope reduction. Imagine cloning yourself for the sake of performing a task and thus being able to do more than one thing at once, then scale that way up in computational terms. True, practically implementable and sustainable quantum computing is probably where the rubber behind concepts like Ray Kurzweil’s singularity — our looming self-aware machine overlords — meets the road.

But given the challenges involved in getting even the most primitive sort of turtle-slow, experimental quantum computing device up and running — and you can read about some of those challenges here — it’s hard to know what you’re looking at when you see headlines about so-called quantum computing breakthroughs.

Take the latest laboratory advance with potential quantum computing ramifications from two of the foremost quantum-fiddling suspects — MIT and Harvard University — involving light, or more specifically a lattice of photons designed to ensnare atoms and create joint particle “switches” that could, in theory, facilitate quantum computing operations down the road.

Down the road would be the operative phrase here, since the discovery, just published in Nature, sounds more like a stepwise accomplishment in an unfurling cosmology of quantum computing components, some or all of which may (or may not) be instrumental in guiding hands toward fantasy future notions of smartphone- or watch-sized or physiology-embedded computers more powerful and versatile than our own gray matter.

According to MIT News, the MIT/Harvard solution involves pairing a rubidium atom (a metal) with a photon, allowing either particle to affect the quantum state of the other. Call it a “quantum optical switch,” because that’s what the authors of the Nature paper do:

By analogy to transistors in classical electronic circuits, quantum optical switches are important elements of quantum circuits and quantum networks. Operated at the fundamental limit where a single quantum of light or matter controls another field or material system, such a switch may enable applications such as long-distance quantum communication, distributed quantum information processing and metrology [the scientific study of measurement], and the exploration of novel quantum states of matter. Here, by strongly coupling a photon to a single atom trapped in the near field of a nanoscale photonic crystal cavity, we realize a system in which a single atom switches the phase of a photon and a single photon modifies the atom’s phase.
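Loosely translated into quantum-circuit terms, a device in which each particle can flip the other’s phase behaves like a controlled-phase (CZ) gate, a standard two-qubit building block. Here is a minimal numpy sketch of that idealized behavior; it is a toy model for intuition, not the actual atom-photon physics:

```python
import numpy as np

# Basis states for a single two-level system (a qubit).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Idealized controlled-phase (CZ) gate on the joint atom-photon state:
# it applies a pi phase shift only when BOTH systems are in state |1>.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def joint(atom, photon):
    """Tensor product: the combined state of the two systems."""
    return np.kron(atom, photon)

# Photon in an equal superposition; the atom acts as the "switch."
photon = (ket0 + ket1) / np.sqrt(2)

for label, atom in (("atom |0>", ket0), ("atom |1>", ket1)):
    out = CZ @ joint(atom, photon)
    print(label, "->", np.round(out, 3))
# With the atom in |0>, the photon passes through unchanged; with the
# atom in |1>, the photon's |1> component picks up a -1 (pi) phase.
```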

The researchers say that the techniques they’ve been able to experimentally demonstrate in this instance could “pave the way to integrated quantum nanophotonic networks involving multiple atomic nodes connected by guided light.”

“This is a major advance of this system,” MIT professor and paper co-author Vladan Vuletić told MIT News. “We have demonstrated basically an atom can switch the phase of a photon. And the photon can switch the phase of an atom.”

Vuletić envisions placing all sorts of atoms in this system to create devices “only a few hundred nanometers thick, 1,000 times thinner than a human hair,” which would then exchange information.

“The idea is to combine different things that have different strengths and weaknesses in such a way to generate something new,” said Vuletić (again, speaking to MIT News), though he appends the following cautionary disclaimer, which could serve for all quantum computing advances at this point: “This is an advance in technology. Of course, whether this will be the technology remains to be seen.”

TIME Technologizer

Yes, Smartphones Are Plateauing — and That’s O.K.

Samsung president and CEO J.K. Shin announces the Galaxy S5 smartphone at Mobile World Congress in Barcelona on February 24, 2014 (David Ramos / Getty Images)

Eras of wild technological innovation are all very well. But so are periods of quiet refinement

After posting my reasonably favorable review of Samsung’s new Galaxy S5 smartphone last night, I started checking out some of the other early evaluations. The diversity of opinion is fascinating.

Nobody thinks that this phone is a massive embarrassment, or a landmark. But the reviews break down into two types: those by people who think the phone is half full, and those by people who think it’s half empty.

Here’s the Wall Street Journal‘s Geoffrey Fowler, whose piece accentuates the negative:

Samsung may market the Galaxy S5 as a significant upgrade, but it is best seen as a refinement. Smartphone technology may be reaching a plateau where core elements like the processor, screen and sensors no longer matter as much as the software that helps you use them. And that is an area where Samsung still trails.

I agree with everything Fowler says in those three sentences, but I still have a more favorable overall impression of the Galaxy S5 than he does, and I’m not bothered by the possibility that the whole category has plateaued.

Here’s why:

Plateaus are natural, and this one isn’t new. No product category gets better at a breakneck pace forever. In the case of smartphones, the period of wild innovation that began when Apple shipped the first iPhone in 2007 ended at least a couple of years ago. (At least I can’t think of any feature anybody’s introduced lately that’s the least bit transformative.)

They invite refinement. With technology, evolution is as important as revolution. When a company such as Samsung or Apple isn’t adding all-new capabilities, it can polish up those that a product already has. Samsung did that with features such as the Galaxy S5’s camera. And though Fowler is right that Samsung still has considerable catching up to do when it comes to software, I’d rather see it invest its energy in thoughtful usability tweaks than wacky stuff such as controlling your phone by waving your hand around.

You don’t want to upgrade your phone every year anyhow. Recode’s Walt Mossberg says that he wouldn’t recommend the S5 to anyone who already has a Galaxy S4 or a current iPhone. That’s a sensible stance, but it’s not a knock on the S5, particularly. For one thing, it’s rare that any tech product improves on its immediate predecessor by awe-inspiring leaps and bounds. For another, it makes no economic sense to buy a new smartphone every year, especially if you’ve committed to a two-year contract — so the most relevant question about a new model is whether it improves meaningfully on phones from two or three years ago. (The GS5 does.)

You never know what will happen next. Smartphones have plateaued before. A decade or so ago, in the heyday of the BlackBerry and Palm Treo, the category was improving only incrementally, and it wasn’t clear what would change that dynamic. Then the first iPhone arrived, and it was suddenly obvious that radical improvement on the status quo was possible.

I’m lousy at predictions, so any guesses I hazard about future smartphones such as the iPhone 6 or Samsung Galaxy S6 are likely to be utterly wrong. But if those phones turn out to be just a little bit better than the models they replace, it won’t be surprising, or a sign that the industry is broken, even though Apple and Samsung will never, ever use the word “plateau” when describing new products they’d like you to buy.

TIME Innovation

StoreDot: Another Promising, Far-Off Answer to Smartphone Battery Problems

"Nanodot" technology could charge your phone in 30 seconds, but is years away from the mass market.


Every so often, we hear about new technology that’s supposed to save smartphone battery life. But most of these advances are still in the lab stage, unfit for public demonstration.

StoreDot is a little different. The Tel Aviv-based startup isn’t claiming to increase smartphone battery life, but instead says it can charge a dying phone in less than a minute. And for the skeptics, StoreDot demonstrated the technology on a Samsung Galaxy S4 on Monday during Microsoft’s Think Next symposium.

Keep in mind that StoreDot’s real advances are in the battery, not the charger. StoreDot is using a new battery chemistry that features “nanodots” derived from bio-organic material. These nanodots are used in both the electrode, which stores the battery’s energy, and the electrolyte, which transfers energy between the battery’s anode and cathode ends. StoreDot says the electrical properties of these nanodots allow the electrode to charge much faster, while still discharging at a rate similar to conventional lithium-ion batteries. And because the technology is based on naturally occurring organic compounds, it’s supposedly cheap to produce.

Although the demo is impressive, it will face some hurdles on the road to commercialization. In the current demo, StoreDot’s battery is physically larger than the one inside Samsung’s Galaxy S4, but its capacity is smaller. So while it can charge much faster, it won’t last as long on a charge. StoreDot says it’s working on the capacity issue and hopes to reach its goal of matching conventional batteries within a year. The charger is much larger as well–though StoreDot says it’s working on reducing the size–and it’ll be roughly twice as expensive as a normal charger. Finally, the phone itself needs to be modified to accommodate a high current during charging, but again, StoreDot says it’s hoping that users could eventually drop the battery into existing phones.
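Some back-of-the-envelope arithmetic shows why the handset needs modification. The figures below are illustrative assumptions, a Galaxy S4-class 2,600 mAh cell at a nominal 3.8V, not numbers from StoreDot:

```python
# Rough arithmetic: the average current implied by very fast charging.
# Assumed, illustrative figures; not StoreDot's specifications.
CAPACITY_AH = 2.6   # a Galaxy S4-class battery, roughly 2,600 mAh
VOLTAGE = 3.8       # nominal lithium-ion cell voltage, in volts

for label, seconds in (("~1 hour", 3600), ("30 seconds", 30)):
    hours = seconds / 3600
    amps = CAPACITY_AH / hours   # average charging current
    watts = amps * VOLTAGE       # average charging power
    print(f"{label:>10}: {amps:6.1f} A, {watts:7.0f} W")
# A full charge in 30 seconds implies roughly 312 A and 1.2 kW at the
# cell, far beyond what ordinary phone charging circuitry handles,
# which is why the phone itself has to be redesigned for high current.
```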

There’s also the issue of raising money and mass-producing a product. StoreDot says it has a “large Asian smartphone manufacturer” as a strategic investor, and the company has recently raised $6 million according to The Next Web. Still, StoreDot isn’t planning to begin mass production until late 2016. As I’ve written before, the testing phase for the safety and longevity of new battery technology can take a long time, and that’s a big reason so many solutions are still years away.

In other words, StoreDot is yet another company facing big challenges as it tries to revolutionize smartphone batteries. But it has a working demo and a timeline for commercialization, and that’s got to be worth something.

TIME Foreign Policy

USAID Using Technology to Fight Poverty

USAID has plans to end extreme poverty by 2030, and it wants to use technology and science to make it happen

Former Secretary of State Hillary Clinton and the U.S. Agency for International Development (USAID) will announce Thursday a new high-tech program to fight poverty across the globe.

The program, called the U.S. Global Development Lab, is a partnership between USAID and 31 universities, corporations and foundations that will support and develop solutions to global problems using science and technology. Its goal is to eradicate extreme poverty by 2030.

For USAID administrator Dr. Rajiv Shah, the project has been a long time coming. Since taking the helm at USAID—and before that, when he served as undersecretary at the U.S. Department of Agriculture—Shah has worked to solve the world’s problems through science, often alongside Clinton.

Shah was at the USDA finding ways to improve agriculture through science while Secretary Clinton was constructing a global food initiative. Shah says he proposed marrying the efforts to take a meaningful jab at ending world hunger.

“I said, look, if we could get and invent new seeds, new mobile technology and open new data centers to help farmers connect their crop prices and understand weather variability we can do something transformational against hunger,” says Shah. “And not just reach a small percentage of the people that are hungry with food.”

By using a strategy based in science and technology to approach the myriad issues faced by poor communities across the globe, Shah says America can lead the effort to end poverty. Any change, however, won’t happen overnight. USAID has spent the past four years cutting programs and reallocating funds so the Lab would have the resources necessary to launch. In 2008, USAID spent only $127 million on scientific development; in 2013, it spent closer to $800 million. The agency expects as much as $30 billion in investment over the course of the project with the help of its partners, including the University of California at Berkeley, Coca-Cola and the Gates Foundation.

Those partners are developing products that marry cost-effective strategies with science and technology, often creating simple solutions to problems ranging from hunger to disease to literacy in the process. A group of Stanford University graduates is shopping a low-cost, environmentally friendly home lighting product that aims to reach 22 million people in Africa who currently rely on kerosene lamps to light their homes at night. USAID partners at Berkeley created a mobile application that can detect waterborne diseases using an iPhone camera and parts built with a 3-D printer. And by working together, USAID hopes these solutions will reach more people at a faster pace.

“We see this as a transformation in how you do development,” said Lona Stoll of USAID. “By tapping into things that really make America what it is, which is our entrepreneurial spirit, our scientific expertise, and our real commitment to help people, you have a real ability to accelerate our impact.”

TIME Innovation

FCC Vote Clears the Way for Faster Wi-Fi

The Federal Communications Commission voted unanimously this week to allow Wi-Fi networks greater access to the public airwaves, officially clearing the way for faster Wi-Fi data connections.

Currently, Wi-Fi networks are allowed to communicate using two different public frequency bands: 2.4GHz and 5GHz. The new FCC rule expands the amount of the 5GHz band dedicated for use in Wi-Fi communications by 100MHz, opening up the possibility of ultra-high-speed Gigabit Wi-Fi in our homes.
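Why does more spectrum mean more speed? Wi-Fi’s peak data rate scales almost linearly with channel width, because a wider channel carries more OFDM subcarriers. The sketch below runs that arithmetic with published 802.11ac parameters; it illustrates the scaling, not the throughput of any particular router:

```python
# Peak 802.11ac PHY rates: wider channels carry more OFDM data
# subcarriers, so the top rate grows almost linearly with width.
DATA_SUBCARRIERS = {40: 108, 80: 234, 160: 468}  # per 802.11ac spec
BITS_PER_SUBCARRIER = 8 * 5 / 6  # 256-QAM, rate-5/6 coding (best case)
SYMBOL_TIME = 3.6e-6  # seconds per OFDM symbol, short guard interval

def phy_rate_mbps(width_mhz, streams=1):
    """Best-case PHY rate in Mbps for a channel width and stream count."""
    subcarriers = DATA_SUBCARRIERS[width_mhz]
    return subcarriers * BITS_PER_SUBCARRIER * streams / SYMBOL_TIME / 1e6

for width in (40, 80, 160):
    print(f"{width:>3} MHz, 1 stream: {phy_rate_mbps(width):6.1f} Mbps")
# -> 200.0, 433.3 and 866.7 Mbps. With 160MHz channels and two or three
# spatial streams, peak rates clear a gigabit per second; hence the
# value of freeing up more contiguous 5GHz spectrum.
```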

“This is a big win for consumers who will be able to enjoy faster connections and less congestion, as more spectrum will be available to handle Wi-Fi traffic,” explains FCC Chairman Tom Wheeler. “It will make it easier to get online wirelessly in public places like airports and convention centers, as well as in your living room.”

It’s not yet known how long it will take for router companies to release products that take advantage of the extra bandwidth.

This article was written by Fox Van Allen and originally appeared on Techlicious.


TIME Innovation

Raph Koster on Facebook-Oculus: You’re Just Another Avatar in Someone Else’s MMO

A gamer uses an Oculus virtual reality headset at the Eurogamer Expo 2013 in London, September 26, 2013 (Leon Neal—AFP/Getty Images)

The Facebook-Oculus deal, for all the good it might do, requires that we all start paying much closer attention to ownership and control of virtual spaces.

Former Ultima Online and Star Wars Galaxies lead Raph Koster has the most insightful and incisive piece I’ve yet seen on the Facebook/Oculus VR deal. Instead of worrying about Mark Zuckerberg’s gaming cred or the integrity of Oculus’ Kickstarter or whether Google should have swooped in first or what $2 billion means relative to anyone else’s VR war chest, Koster zooms out to offer a perceptive overview of the underlying currents defining near and future computing trends, and the problematic artifacts that accompany those trends.

In Koster’s view, computing’s near future is essentially “wearable” versus “annotated.” You’re either plugging stuff into your person to augment (or simulate) your reality, or carrying stuff around that places interpretive brackets around it. The difference between the two notions is academic, of course, and Koster says both camps — currently shaped by competing commercial visions that have as much to do with molding consumer interest as tapping it — can’t escape the black-hole tug that’ll eventually draw them together.

About this, Koster says:

One is the mouth, the other the ears. One is the poke, the other the skin. And then we’re in a cyberpunk dream of ads that float next to us as we walk, getting between us and the other people, our every movement mined for Big Data.

What does it mean when companies as vast as Facebook or Google or Apple have this level of access to and control over the way we interface with anything, conventional notions of reality or otherwise? It means…well, trouble, because it’s already causing trouble via the pre-VR, pre-“presence” social network-driven personal desire assimilation engines that live in our cars, houses, workspaces and pockets.

I’m not a libertarian privacy-at-all-costs wingnut committed to a wildly idealistic impossibility. I see the philosophical cracks in some of these very old, culturally bound presumptions about what privacy ought to be, as if humans were self-sustaining islands in some mythic state of equilibrium capable of inhabiting this planet without imposition of any sort on another (ultimate privacy is, in fact, another way of describing a form of sociopathy). Mark Zuckerberg isn’t wrong when he’s said that privacy as we know it (or ideally expect it) has to change, and that that’s symptomatic of a technology-fueled (which is to say fundamentally us-driven) paradigm shift.

But the most important question in this barrier-cracking worldview, where we inject all that we are into someone’s calculating server farm, is this: Who has ultimate ownership of that technology?

In an ideal world, virtual reality would probably be open source, broadly distributed, and all this looming virtual turf would be owned (or data-mined, or suffused with overt or subliminal ads) by no one. But suggest as much and you’re basically ringing a bell for arguments about the so-called risk-takers and venture capitalists and entrepreneurial geniuses necessary to make all that looming virtu-topia possible, because true or no, that narrative’s drawn from as old and deeply embedded a cultural playbook as exists.

That question’s at the crux of the issue Koster’s getting at when he says the Facebook/Oculus deal isn’t about rendering (that is, geeky cool visual stuff) so much as it is about “placeness.” It’s about ownership, specifically ownership of cloud-space.

Virtual reality in that sense is going to be as boundless as a processor farm’s prowess and a design team’s imagination. It’s perhaps most poignantly the vision Tad Williams shares in his Otherland series, but it’s also there in Neal Stephenson and William Gibson and Bruce Sterling and all the countless others, in particular post-1980s-VR artists and thinkers, who’ve grappled with the question in one form or another. It’s a vision of the future in which extremely powerful, functionally opaque institutions compete for our attention in unfathomably vast virtual emporiums that, yes, may well start with something as innocuous-sounding as mountain climbing and concert-going (say in Facebook’s case). But how quickly does that move on to wish fulfillment (which is where it risks becoming narcotic), where it’s simultaneously mining our hopes, dreams, desires and eventually every measurable detail of our lives?

“It’s about who owns the servers,” says Koster. “The servers that store your metrics. The servers that shout the ads. The servers that transmit your chat. The servers that geofence your every movement.”

And then:

It’s time to wake up to the fact that you’re just another avatar in someone else’s MMO. Worse. From where they stand, all-powerful Big Data analysts that they are, you look an awful lot like a bot.

Paranoia about what companies are doing with your data today may be overstated, in that I’m pretty sure no one cares what I say on the phone or send through email in the here and now. But healthy paranoia, if such a thing exists, involves educated hypothesizing (that is, extrapolating based on historical precedent). There’s certainly precedent for virtual reality, since it’s still going to be constrained by our imaginations. In this 21st century pre-singularity moment, we’re still as human as we’ve ever been. The problems we’ll have to deal with when we strap things on our faces and start to reify what we’re already capable of doing when we close our eyes and dream are going to be the same problems we’ve been dealing with for millennia, however amplified or fetishized or distorted.

Grappling with something as far flung (and yet simultaneously present) as global warming isn’t about solving those problems today, it’s about considering a tomorrow many of us won’t see. It’s about understanding the scale involved with addressing those problems, about thinking longterm instead of excusing inaction based on human ephemeralness. The kinds of things Koster worries about won’t happen overnight, but gradually — so gradually that the shifts can be imperceptible. The dystopian futures that seem so reprehensible in the best speculative fiction don’t arrive like fleets of hostile aliens, galvanizing us to action, and Koster’s future in which we’re an avatar in someone else’s MMO is already partly here. In a 2007 interview about his book Spook Country, William Gibson said “it’s hard to write science fiction anymore when reality is so unbelievable.”

I’m excited about Oculus VR’s tech. I can’t wait for my devkit to arrive this summer. But as Koster puts it, “I’m a lot more worried about whose EULA is going to govern my life.”

Me too.

TIME Innovation

Researchers Create a Disposable Battery That Melts Inside You

Researchers at the University of Illinois at Urbana-Champaign have created a powerful, tiny battery capable of safely being absorbed into the human body, the journal Nature is reporting.

The battery is made of magnesium foil, saline solution, biodegradable polymers and other non-toxic materials. Scientists say it’s designed to power tiny biodegradable electronic sensors that could be implanted deep inside tissue or under bone. These sensors can wirelessly relay data about temperature or mechanical strain for about a day before being safely absorbed into the body.

The University of Illinois researchers were not the first to design a biodegradable battery, but they may be the first to design a truly useful one. That’s because it has an unusually high power density – an incredibly tiny battery with a surface area of 0.25 cm² and a thickness of 1 micrometer could power a wireless sensor for up to a day.
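For a sense of scale, the volume arithmetic on those reported dimensions is simple; everything below is just unit conversion:

```python
# How small is a battery 0.25 cm^2 in area and 1 micrometer thick?
AREA_CM2 = 0.25
THICKNESS_CM = 1e-4  # 1 micrometer = 1e-4 cm

volume_cm3 = AREA_CM2 * THICKNESS_CM
print(f"volume: {volume_cm3:.1e} cm^3 = {volume_cm3 * 1e6:.0f} nanoliters")
# -> 2.5e-05 cm^3, about 25 nanoliters: a nearly invisible film of
# material that can reportedly run a wireless sensor for up to a day.
```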

“This is a really major advance,” noted biomedical engineer Jeffrey Borenstein told Nature. “Until recently, there has not been a lot of progress in this area.”

This article was written by Fox Van Allen and originally appeared on Techlicious.

