TIME technology

Why the Computer Mouse’s Inventor Isn’t the Big Cheese

First Mouse
Rue des Archives / APIC / Getty Images A prototype of the first mouse, from 1968

April 27, 1981: The computer mouse makes its debut

For an innovation meant to make it easier to use a computer, its name was surprisingly unwieldy: “X-Y position indicator for a display system.” The word “mouse” was much catchier, and that’s what the device was eventually called when it debuted as part of a personal computer station, first sold by the Xerox Corporation on this day, April 27, in 1981.

Credit for the invention itself goes to Douglas Engelbart, who first developed the computer mouse in 1963, per TIME. By the time the mouse became commercially available, however, Engelbart’s patent had expired, and he never earned royalties for his work.

The personal computer that introduced the mouse to the world — with a similarly unwieldy name, the Xerox 8010 Star Information System, and the clunky look common to early personal computers, including a keyboard about the size of a toaster — revolutionized computing in other ways, too: It was the first with a graphical user interface, navigated by clicking icons rather than typing commands, and the first to incorporate folders, file servers and email, according to WIRED.

But like Engelbart, Xerox failed to profit significantly from its innovations. Its failure was twofold, according to the lore of the technology world, as reported by the New Yorker: Its executives didn’t realize the scope of what they’d achieved in the Star workstation — and they let Steve Jobs see it.

In exchange for shares of Apple, Xerox granted Jobs access to its innovation arm, Xerox PARC (short for Palo Alto Research Center) while it was working on the Star system in 1979. Jobs returned to Apple headquarters determined to improve upon the project.

Telling an industrial designer how to build a better mouse, he explained, per the New Yorker, “[The Xerox mouse] is a mouse that cost three hundred dollars to build and it breaks within two weeks. Here’s your design spec: Our mouse needs to be manufacturable for less than fifteen bucks. It needs to not fail for a couple of years, and I want to be able to use it on Formica and my bluejeans.”

Xerox — better known for making copies than computers — ultimately dropped the PC from its portfolio, mouse and all. And in the years that followed, its profits languished while Apple’s continued to rise. In 2000, faced with billion-dollar losses, it even implied that it might put the research center up for sale.

Two years later, however, PARC incorporated as an independent subsidiary of Xerox. Its researchers continue to innovate today — motivated by the center’s immense prestige, if not its history of profit.

As TIME put it in 2000: “The PARC has a pretty good track record when it comes to radical new visions, even if its record of holding onto them has been spotty at best. The mouse, the GUI (graphical user interface, like Windows) and arguably the PC itself were all born in this hothouse of Silicon Valley R. and D.; they ended up making a lot of money for Apple and Microsoft.”

Read more about Xerox, here in the TIME archives: Team Xerox

TIME technology

This 50-Year-Old Prediction About Computers Will Make You Sad

Cover Credit: BORIS ARTZYBASHEFF The April 2, 1965, cover of TIME

TIME's 1965 hopes for automation were high

Correction appended: April 2, 2015, 9:45 a.m.

Fifty years ago, when TIME made computers the cover subject for the April 2, 1965, issue, it seemed like the technology had already grown beyond the bounds of human imagination.

A little more than a decade earlier, the magazine reported, the United States had been home to a mere 100 computers. By 1965, there were 22,500 of them. (A 2013 Census report found that 83.8% of U.S. households had a computer, and that’s not even counting businesses or government offices.) The smallest model available weighed a now-whopping 59 lbs. The government was spending a billion dollars a year on its computers — that’s about $7.4 billion today — and 650,000 Americans were employed making or selling computers, as others in many industries lost their jobs to automation.

They had irreversibly changed the speed of life across the country, making the impossible possible. By TIME’s calculations, “To process without computers the flood of checks that will be circulating in the U.S. by 1970, banks would have to hire all the American women between 21 and 45.”

And, some experts told TIME, those changes would only continue:

Men such as IBM Economist Joseph Froomkin feel that automation will eventually bring about a 20-hour work week, perhaps within a century, thus creating a mass leisure class. Some of the more radical prophets foresee the time when as little as 2% of the work force will be employed, warn that the whole concept of people as producers of goods and services will become obsolete as automation advances. Even the most moderate estimates of automation’s progress show that millions of people will have to adjust to leisurely, “nonfunctional” lives, a switch that will entail both an economic wrench and a severe test of the deeply ingrained ethic that work is the good and necessary calling of man.

Though the economy would have to adjust, it wouldn’t be all bad. “Many scientists hope that in time the computer will allow man to return to the Hellenic concept of leisure, in which the Greeks had time to cultivate their minds and improve their environment while slaves did all the labor,” the article continued. “The slaves, in modern Hellenism, would be the computers.”

The full century during which this change was predicted is only half-way done, but at this point the chances that we’ll live to see a life of Hellenic leisure seem pretty dim. In fact, as a whole, Americans are working more than we were before computers came along to help out. (That change takes into account the entry into the workforce of many women; among men only, the average hours worked per week is slightly down, but not by as much as was predicted in 1965.)

Points for accuracy should go to the 1965 story’s dissenters, who argued even then that society has always adjusted to whatever changes technology may bring, eventually creating more work for people to do when the old jobs get displaced. Sorry, 1965 readers who were looking forward to a life of computers doing all the annoying or difficult stuff.

And, even though computers aren’t doing all the work, some of the 1965 story’s predictions did come true. For example, computers, TIME accurately predicted, “will eventually become as close to everyday life as the telephone—a sort of public utility of information.”

Read the full 1965 cover story, here in the TIME Vault: The Cybernated Generation

Correction: The original version of this story misstated the proportion of U.S. households that had a computer in 2013. It was 83.8%.

TIME Innovation

Five Best Ideas of the Day: February 27

The Aspen Institute is an educational and policy studies organization based in Washington, D.C.

1. Hollywood is less diverse than its audiences — and it might be hurting the bottom line.

By Austin Siegemund-Broka in the Hollywood Reporter

2. Facebook’s new suicide prevention tools finally get it right.

By Ashley Feinberg in Gizmodo

3. How will we understand the power of the bacteria in our bodies? Meet the crowdsourced American Gut project.

By American Gut

4. The road to artificial intelligence begins with computers mastering 1980s video games like a human being.

By Rebecca Morelle at BBC News

5. Salting roads and plowing snow is inefficient and costly. A smart algorithm can save cities millions.

By Marcus Woo in Wired


TIME technology

Did Deep Blue Beat Kasparov Because of a System Glitch?

Garry Kasparov
Ted Thai—The LIFE Picture Collection/Getty Images Chess champion Garry Kasparov contemplating a board in 1997, training for his May rematch with a smarter version of Deep Blue

Feb. 17, 1996: Chess champion Garry Kasparov beats the IBM supercomputer “Deep Blue,” winning a six-game chess match

Garry Kasparov was not afraid of a computer. When the world chess champion agreed to play a match against Deep Blue, the IBM supercomputer designed to beat him, he was so confident that, according to TIME, he scoffed at an offer to split the $500,000 purse 60-40 between winner and loser. He preferred all or nothing.

While Kasparov won the match on this day, Feb. 17, in 1996, victory didn’t come as easily as he had predicted. In fact, Deep Blue won the first game they played. It was “a shattering experience” for Kasparov, as his coach told TIME. And he wasn’t the only one reeling. Luddites everywhere were on notice: here was a machine better than humankind’s best at a game that depended as much on gut instinct as sheer calculation. Surely the Cylons were on their way.

But after rallying to beat Deep Blue, winning three games and drawing two after his initial loss, Kasparov wasn’t ready to give up on the human race — or himself. He later explained, in an essay for TIME, that Deep Blue flummoxed him in that first game by making a move with no immediate material advantage: nudging a pawn into a position where it could be easily captured.

“It was a wonderful and extremely human move,” Kasparov noted, and this apparent humanness threw him for a loop. “I had played a lot of computers but had never experienced anything like this. I could feel — I could smell — a new kind of intelligence across the table.”

Later, he discovered the truth: Deep Blue’s calculation speed was so advanced that, unlike other computers Kasparov had battled before, this one could see the material advantage of losing a pawn even if the advantage came many moves later.

Knowing that it was still basically a calculating machine gave Kasparov his edge back. He boasted, “In the end, that may have been my biggest advantage: I could figure out its priorities and adjust my play. It couldn’t do the same to me. So although I think I did see some signs of intelligence, it’s a weird kind, an inefficient, inflexible kind that makes me think I have a few years left.”

He did not, as it turned out. The next year, he played against a new and improved Deep Blue and lost the match. Once again, the psychological toll of facing off against an inscrutable opponent played a key role. Although he easily won the first game, Deep Blue dominated the second. Kasparov, according to NPR, was visibly perturbed — sighing and rubbing his face — before he abruptly stood and walked away, forfeiting the game.

He later said he was again riled by a move the computer made that was so surprising, so un-machine-like, that he was sure the IBM team had cheated. What it may have been, in fact, was a glitch in Deep Blue’s programming: Faced with too many options and no clear preference, the computer chose a move at random. According to Wired, the move that threw Kasparov off his game and changed the momentum of the match was not a feature, but a bug.
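Neither account published Deep Blue’s actual code, but the reported behavior is easy to picture. The Python sketch below is purely illustrative (the function and its parameters are invented for this example): an engine that normally plays its highest-scoring move and, when its evaluation yields no single clear preference, falls back to a random pick.

import random

# Illustrative only: not IBM's code, just the kind of fallback Wired described.
def choose_move(legal_moves, evaluate):
    """Return the highest-scoring legal move, breaking ties at random."""
    if not legal_moves:
        return None
    scored = [(evaluate(move), move) for move in legal_moves]
    best_score = max(score for score, _ in scored)
    candidates = [move for score, move in scored if score == best_score]
    # With several equally ranked options and no clear preference,
    # the fallback amounts to a random choice among them.
    return random.choice(candidates)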

Read TIME’s original analysis of the 1996 face-off, here in the archives: Can Machines Think?

TIME Innovation

Raspberry Pi Unveils Newer, Faster $35 Computer

The credit-card sized computer will also get Windows 10 at no additional cost

Technology and bargain lovers alike are in for some good news: The Raspberry Pi Foundation announced the release of a newer and faster model Monday called the Raspberry Pi 2. And like the original, it only costs $35.

For the uninitiated, the Raspberry Pi is a credit-card sized, single-board computer that can be hooked up to a monitor, keyboard, and mouse.

While the original model helped children learn basic programming skills, CEO Eben Upton told The Register that the Raspberry Pi 2 is “a usable PC now. It was always the case that you could use a Raspberry Pi 1 as a PC but you had to say ‘this is a great PC in so far as it cost me 35 bucks.’ We’ve removed the caveat that you had to be a bit forgiving with it. Now it’s just good.”

The British charity behind the technology says the new model is six times more powerful than the previous version. It has 1GB of RAM, double that of the prior model. Its new 900MHz quad-core processor means it’s faster, too.

But one of the cooler announcements is that Microsoft’s upcoming Windows 10 operating system is coming to the Raspberry Pi 2 at no additional cost. It will be shipped to makers free of charge once the software is available.

TIME Tech

Intel Pledges $300 Million to Increase Workforce Diversity

Inside The 2015 Consumer Electronics Show
Patrick T. Fallon—Bloomberg via Getty Images Brian Krzanich, chief executive officer of Intel Corp., during the 2015 Consumer Electronics Show (CES) in Las Vegas, Nevada, on Tuesday, Jan. 6, 2015

Only 24% of the company's employees in 2013 were female

Intel CEO Brian Krzanich pledged $300 million to increase the company’s workforce diversity during his keynote address at the Consumer Electronics Show (CES) Tuesday.

“It’s time to step up and do more,” Krzanich said, acknowledging that the task of achieving “full representation” of women and minorities by 2020 will be “difficult to achieve.” Seventy-six percent of Intel’s employees were male in 2013. And the company’s diversity filings from the same year showed Intel’s workforce was only 24% female, 8% Hispanic and 4% black, according to the Wall Street Journal.

“It’s not good enough to say we value diversity and then underrepresent women and minorities,” Krzanich said. “Intel wants to lead by example.”

Krzanich did not set any specific quotas, but noted that the money would be used to fund programs that could help get more diverse candidates into jobs at Intel, while attracting talented and diverse job candidates.

Silicon Valley has long been considered a boys’ club, with major tech companies like Twitter and Google revealing demographics that skew toward white, male workers.

TIME technology

Remember Y2K? Here’s How We Prepped for the Non-Disaster

TIME The Jan. 18, 1999, cover of TIME

When a calendar change stoked fears of the end of the world

On Dec. 31, 1999 — exactly 15 years ago — TIME was prepared for the worst.

“As police throughout the world secured emergency bunkers for themselves, the TIME magazine and Time Inc. information-technology staff set up a generator-powered ‘war room’ in the basement of the Time & Life Building, filled with computers and equipment ready to produce the magazine in case of a catastrophic breakdown of electricity and communications,” explained then-assistant managing editor Howard Chua-Eoan, in a note that appeared in the magazine’s commemorative 1/1/00 issue.

Of course, it wasn’t long before it became clear that all the fears associated with the turn of the millennium were for naught. (The day’s most historic moment, Chua-Eoan noted, was actually the resignation of Russian President Boris Yeltsin.) In the same issue, reporting on the folks who had holed up in bunkers in preparation for the new year, Joel Stein found that at least one family was left with 12 cans of Spam they swore to eat, “disaster or not.”

The Y2K fuss began years ahead of the date in question. Back in 1998, the phrase “Y2K” (that’s for “year two thousand,” for those of you who have repressed the memory) first appeared in TIME in a story that explained the problem thusly: “The bug at the center of the Year 2000 mess is fairly simple. In what’s proving to be a ludicrously shortsighted shortcut, many system programmers set aside only two digits to denote the year in dates, as in 06/15/98 rather than 06/15/1998. Trouble is, when the computer’s clock strikes 2000, the math can get screwy. Date-based equations like 98 – 97 = 1 become 00 – 97 = -97. That can prompt some computers to do the wrong thing and stop others from doing anything at all.” That issue also featured an item about John Koskinen, the man responsible for getting the U.S. government ready for the approaching millennium. At that point, though the Pentagon had already begun repairing programs that used two digits to express the year, not much progress was being made.
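The arithmetic TIME described is easy to reproduce. Here is a minimal sketch in Python (the function name and setup are hypothetical, not drawn from any real legacy system) of how a two-digit date calculation goes wrong at the rollover.

# A minimal sketch of the two-digit-year shortcut TIME described.
# The helper name is hypothetical; real legacy systems varied widely.
def years_elapsed(start_yy, end_yy):
    """Subtract two-digit years the way a shortsighted legacy program might."""
    return end_yy - start_yy

# Before the rollover: 1998 minus 1997, stored as 98 and 97.
print(years_elapsed(97, 98))  # prints 1, as expected

# After the rollover: 2000 minus 1997, stored as 00 and 97.
print(years_elapsed(97, 0))   # prints -97, the "screwy" math from the article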

Though most prognosticators cautioned that the necessary programs would get fixed in time, the prospect of fallout was still scary enough for TIME to put the hysteria on its cover in January of 1999 under the headline “The End of the World!?!” (To be fair, the story’s conclusion was that such an end was unlikely to come. In fact, that cover story introduced the very family that, a few months later, would be stuck holding the Spam.) Y2K-problem lawsuits began to be filed. Wilderness-survival bootcamps suddenly got more popular. NBC made a made-for-TV movie about the coming disaster.

So it’s no wonder people were worried. Though we might make fun of them a little, looking back, we can also sympathize — and, even 15 years later, TIME’s suggestion for how to spend New Year’s Eve 1999 sounds pretty good. “After a season of Y2K anxiety and millenarian doomsaying, condensed history and holiday hype,” wrote James Poniewozik, “we should all be so lucky as to have another boring New Year’s.”

Read TIME’s commemorative Jan. 1, 2000, issue, here in the TIME Vault: Welcome to a New Century

TIME technology

It Took Microsoft 3 Tries Before Windows Was Successful

AP Microsoft Windows 1.0

Windows 1.0 wasn't exactly a huge win — even with Microsoft Paint helping out

The first version of Microsoft Windows will be knocking on the door of its fourth decade Thursday when it turns the ripe old age of 29 — well past retirement in software years, given that Microsoft officially put it out to pasture in December of 2001. Still, looking back at Windows 1.0 offers exactly what its name implies: A window into how things used to be, and, in a way, how little has changed.

First announced in 1983, Microsoft Windows 1.0 wouldn’t make it to the consumer market for another two years — making it one of the first pieces of software to be dismissed as “vaporware,” a term actually coined by a Microsoft engineer a year before the Windows announcement, as a disparaging title bestowed upon a product that’s announced but never sees the light of day.

Windows 1.0’s big selling point was its Graphical User Interface (GUI), intended to replace MS-DOS-style command prompts (C:\DOS\RUN) with a computing style that looked much more like the multitasking, mouse-click-based computing most of us use today. It also came with software intended to show off its new graphical computing environment with what we’d now call “apps” like “Calendar,” “Clock,” and yes, of course, “Paint.”

Windows wasn’t the first operating system with a GUI as its primary feature. Microsoft rival Apple, for example, beat Windows to that punch by about a year when its Macintosh hit the market in 1984, and other “desktop”-style graphical interfaces were floating around before that. (Late Apple CEO Steve Jobs is said to have gotten a nudge towards the Apple desktop interface after visiting a Xerox facility in 1979.) But Windows 1.0 was marketed as an upgrade for people already running MS-DOS — and, in fact, it ran on top of MS-DOS, so anybody who wanted Windows had to have MS-DOS installed first.

So did Windows 1.0 fly off the shelves? Not exactly. Early reviews panned the product for running far too slowly — not the last time the tech press has made that particular critique. The New York Times wrote that “running Windows on a PC with 512K of memory is akin to pouring molasses in the Arctic.” Many reviews said the speed slowdown only got worse when users ran more than one application at a time — an ability that had been intended as a primary draw. And that weird mouse thing Microsoft insisted Windows users embrace? Lots of people hated it.

Despite those early hiccups, Microsoft didn’t just give up and close Windows — a smart move, given that computers running Windows operating systems now make up about 90% of the market. But not even Windows 2.0, released in 1987, set Windows on its path to world dominance. That spark didn’t come until Windows 3.0, released in 1990 to critical acclaim and widespread adoption, thanks to a redesigned interface and speed improvements. As TIME put it in the June 4 issue of that year, “Microsoft seems to have got it right this time.”

TIME Innovation

Five Best Ideas of the Day: November 14

The Aspen Institute is an educational and policy studies organization based in Washington, D.C.

1. Superfast quantum computers could drastically change the future, and Microsoft might build the first one.

By Tom Simonite in MIT Technology Review

2. Water-smart urban design can reimagine life in Western cities suffering the worst drought in decades.

By Reed Karaim in JSTOR Daily

3. The new censorship: How intimidation, mass surveillance, and shrinking resources are making the press less free.

By George Packer in the New Yorker

4. A new approach to housing for families at risk that includes intensive, wrap-around services is showing early success.

By Mary Cunningham, Maeve Gearing, Michael Pergamit, Simone Zhang, Marla McDaniel, Brent Howell at the Urban Institute

5. Our best bet in the fight against Boko Haram might be sharing lessons on intelligence gathering.

By Jesse Sloman at Africa in Transition


TIME technology

4 Things You Might Not Have Known About the World Wide Web’s Inventor

Tim Berners-Lee
Carl Court—AFP/Getty Images Tim Berners-Lee at The Royal Society in London on Sept. 28, 2010

Tim Berners-Lee proposed the idea on Nov. 12, 1990

If you’ve ever used a hyperlink — a bit of typically underlined online text that, when clicked, helpfully takes you to another website or document — you should thank Sir Tim Berners-Lee, a Briton who on this date 24 years ago first proposed an idea he called at the time “WorldWideWeb.”

“HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will,” Berners-Lee and Belgian computer scientist Robert Cailliau wrote in the Nov. 12, 1990, proposal for what would become the World Wide Web.

The Web has since become such a dominant means of sharing information over the Internet that many people don’t know there’s a difference between the two. (That difference? The Internet is a network of networks, a way for a handful of computers connected to one another to share data with billions of other such networks worldwide, while the Web is a hypertext-based information-sharing system that runs atop the Internet, literally and figuratively linking websites to one another.)

It took TIME seven years after Berners-Lee first proposed the web to write a profile of him. Here are four fun facts from that May 19, 1997, piece:

1. He credits his status as “inventor of the World Wide Web” to random chance. “I happened to be in the right place at the right time, and I happened to have the right combination of background,” Berners-Lee said of the reasons he wrote the proposal, which he made while working at Switzerland’s CERN nuclear research facility and trying to connect the organization’s resources.

2. Those 404 “Website Not Found” pages are a necessary evil. Earlier hypertext arrangements kept a record of every single link in the system to avoid “dangling links” — links pointing to nothing. But creating the Web at scale meant users would have to be able to delete documents without telling every single other user about the deletion, even if that document was being linked to from elsewhere. Berners-Lee “realized that this dangling-link thing may be a problem, but you have to accept it.”

3. He also hated how hard it once was to write on the Web. “The Web . . . is this thing where you click around to read,” Berners-Lee said, but if you want to write, “you have to go through this procedure.” That’s much less true now in 2014, with services like WordPress, Blogspot and social media making it dead simple to share your writing and other creativity online.

4. He played with the idea of starting a company to make a browser, a move that would’ve set him up to compete with the likes of Mosaic and perhaps make him rich. But he feared sparking a war between incompatible browsers and permanently dividing the web. “The world is full of moments when one might be other things,” Berners-Lee said. “One is the decisions one’s taken.” Meanwhile, Marc Andreessen, coauthor of the Mosaic browser, later cofounded Netscape and has since become a wealthy and outspoken tech investor.
