TIME movies

The History Behind Benedict Cumberbatch’s The Imitation Game

Alan Turing wasn't the only one who suffered

The new movie The Imitation Game is bringing fresh attention to a dark period in the early 20th century, when homosexuals in the U.S. and the U.K. were criminally prosecuted because of their sexuality.

The movie, starring Benedict Cumberbatch, depicts the life of Alan Turing—a mathematician, computer scientist and code breaker known as a key architect of the modern computer and an instrumental figure whose skill at breaking Nazi codes helped the Allies win World War II.

Despite his genius, Turing was prosecuted in England in 1952 for engaging in a sexual relationship with another man. In lieu of prison, he was sentenced to take estrogen treatments to reduce his libido, a practice dubbed “chemical castration.” In 1954, he killed himself by cyanide poisoning at the age of 41.

The film depicts Turing’s unjust prosecution and punishment for homosexuality, though with some inaccuracies (for more information, the Guardian published a helpful analysis of the film’s facts).

What happened to Turing was not uncommon in the United Kingdom and the United States during his lifetime in the 1930s, 40s, and 50s. In the U.S., it was “the worst time to be queer because you are not being ignored, you are actively searched for and persecuted,” said John D’Emilio, a professor of gay and lesbian studies at the University of Illinois at Chicago. “The nice thing about the movie is that it is calling attention to this bit of history that people don’t know anything about.”


In Britain—where America’s own sodomy laws originated—the story begins in 1533, during the reign of Henry VIII. That year, the Buggery Act made sex between men a capital offense in Britain, usually punished by hanging. That remained the law until 1861, when the sentence was changed from death to prison, usually with hard labor. In 1885, the law was broadened to criminalize “gross indecency,” a vague, catch-all term used to prosecute anything considered deviant sexual behavior outside of sodomy, mostly between men. In 1895, the playwright Oscar Wilde was convicted of gross indecency and sentenced to two years of prison and hard labor, an experience about which he penned the poem “The Ballad of Reading Gaol.”

During Alan Turing’s life, public concern that homosexuals serving in the military or aiding in the war effort could be blackmailed by enemies intensified the stigma of homosexuality in Britain. Turing was exposed after he reported a petty theft involving his lover to the police, who discovered the relationship in the course of investigating the crime. He pleaded guilty and opted for hormone treatments, known as chemical castration, instead of prison time, and after his conviction in 1952 the British government took away his security clearance. He killed himself with cyanide in 1954.

The 1950s were the beginning of the end for Britain’s laws against homosexual sex, as the prosecution of prominent people stoked a public backlash against the laws. In 1954, a well-known journalist, Peter Wildeblood, was convicted of homosexual offenses alongside two prominent and wealthy men, Lord Montagu and Michael Pitt-Rivers, in a public trial that resulted in prison time for all three men and in public opposition to laws against homosexual sex. The trial led to the creation of the Wolfenden committee of government representatives, ministers, educators, and psychiatrists, which in 1957 published a report recommending the decriminalization of homosexuality.

The report eventually led to the 1967 Sexual Offences Act, which decriminalized homosexual sex between consenting men over the age of 21 in England and Wales. In 1994, the age was lowered to 18, and in 2003, it was lowered to 16, the same as the age of consent for heterosexual sex.

The U.S. history is slightly different from Britain’s: the fervent prosecution of gay sex didn’t begin in earnest until the very period during which Turing lived. The U.S. had anti-sodomy laws inherited from the English settlers, but it wasn’t until the late 1930s, ’40s and ’50s, a period that coincided with the World Wars and a strong strain of Christian morality, that police in the U.S. made it a priority to enforce laws against homosexuals.

As in England, concerns that homosexuals could be blackmailed by Communist spies—an idea popularized by Senator Joe McCarthy—drove some of the fervor against homosexuals during that period. In the U.S.—more so than in Britain, it seems—the period was marked by increased police enforcement of the laws. Police officers went undercover in public parks where homosexuals met for sexual encounters in order to catch them. It was a period of fear for homosexuals in America unparalleled before or since. “This is the height of what I call the homosexual terror in America,” said William Eskridge, a professor at Yale Law School and the author of Dishonorable Passions: Sodomy Laws in America.


During this same period, Eskridge said, states began to pass laws that allowed courts to commit gay people indefinitely to mental institutions for having “psychotic personalities,” where they were experimented on, lobotomized, and given shock therapy.

As was the case with Turing, the prosecution of gays also denied the U.S. some very bright minds who, but for their homosexuality, might have been allowed to contribute more to society. In the late 1950s, Frank Kameny, an astronomer with a Ph.D. from Harvard, was kicked out of the Army Map Service and barred from serving in the U.S. government because he was a homosexual.

“One of the geniuses of the 20th century, the father of modern computers who helped win World War II, who was a lovely person, was destroyed by the anti-homosexual terror,” Eskridge said of Turing.

TIME movies

The Real-Life Hunger Games: Meet the Ancient Women Who Lived Like Katniss

Hunger Games Mockingjay
Murray Close—Lionsgate

Women may have battled in the Roman arena, too, according to some evidence

Katniss Everdeen returns to the big screen Friday in The Hunger Games: Mockingjay Part 1, and though she left the arena behind at the end of Catching Fire, she’s still a gladiator at heart.

Or rather, a gladiatrix.

It turns out there is some historical evidence that women may indeed have fought in the Roman games—though not necessarily alongside their male peers, as Katniss does in the Hunger Games, and likely not with such high stakes.

Kathleen M. Coleman, Professor of Classics at Harvard University, says there are accounts of the emperors staging gladiatorial spectacles in which women also participated, and that a decree of the Senate from A.D. 19 forbade both male and female descendants of the upper class from participating in such spectacles. “This doesn’t prove that women were fighting as gladiators,” she says, “but it suggests that the society was afraid that they might want to.”

More famously, a marble bas-relief sculpture dating to the first or second century A.D. depicts two gladiatrices in battle, with an inscription saying they fought to a draw. They are named Achillia, the feminine form of Achilles, and Amazon, the name of a group of mythical female fighters. It was common for gladiators to adopt epic stage names after their favorite heroes.

Relief portraying a fight between female gladiators DEA / A. Dagli Orti—De Agostini/Getty Images

Since neither woman died in the fight, the sculpture is clearly not an epitaph, so Coleman says it might have been “something put up in a gladiatorial barracks,” where the fighters lived separately from civilians, “commemorating the sort of greatest hits of that barracks.”

Like Katniss, gladiatrices likely had humble beginnings. While some gladiators did choose of their own volition to take on the profession and thus enter the lowest rung of the social ladder, the majority were slaves. Those who did volunteer were likely in it either for the valor or to escape debts—after all, as Coleman says, “if you can’t own, then you can’t owe.”

Is that really so different from the Girl on Fire, the volunteer from District 12 who sacrifices herself to pay her sister’s debt?

There were many types of gladiators, and each type came with its own weapons, armor and moves. You sometimes might see two styles pitted against each other, Coleman says. “So the one style might be very heavily armed and protected, and will therefore be relatively impregnable—but slow. The opponent might be very scantily armed, and therefore very fast and unencumbered, but vulnerable. These kinds of pairings seem to have interested Romans.”

Katniss might have been at ease in the arena with her weapon of choice: the Sagittarius gladiator was known for using a bow and arrow.

Unlike the young combatants in the Hunger Games, the gladiators didn’t usually fight to the death. Though “occasionally a very poor performance might result in the gladiator losing his life,” Coleman says, losers would often be sent back for more training, and might even have the option to retire.

The odds may not have been ever in their favor, but they sure got a better deal than Rue.

TIME Opinion

Is Obama Overreaching on Immigration? Lincoln and FDR Would Say ‘No’

Barack Obama
President Barack Obama announces executive actions on immigration during a nationally televised address from the White House in Washington, D.C., on Nov. 20, 2014 Jim Bourg—AP

Like Lincoln and Roosevelt before him, Obama occupies the White House in a time of great crisis

Last night, President Obama announced new steps that will allow about five million undocumented immigrants to obtain work permits and live free of the threat of imminent deportation. Given that we now have an estimated 10–11 million such people within our nation and that many of them clearly will never leave, this seems a reasonable first step towards giving them all some kind of legal status. But, because of the anti-immigration stance of the Republican Party, which will entirely control Congress starting on Jan. 3, the President will have to base this step solely on executive power. And even before the President spoke, various Republicans had accused him of acting like an emperor or a monarch and warned of anarchy and violence if he went through with his plans.

There are, in fact, substantial legal and historical precedents, including a recent Supreme Court decision, that suggest that Obama’s planned actions would be neither unprecedented nor illegal. This is of course the President’s own position, that no extraordinary explanation is needed—yet we can also put his plans in the broader context of emergency presidential powers, which in fact have a rich history in times of crisis in the United States. It is not accidental that this issue of Presidential power is arising now, because it will inevitably arise—as the founders anticipated—any time a crisis has made it unusually difficult to govern the United States. Like Abraham Lincoln and Franklin Roosevelt, Obama occupies the White House in a time of great crisis, and therefore finds it necessary to take controversial steps.

The Founding Fathers distrusted executive authority, of course, because they had fought a revolution in the previous decade against the arbitrary authority of King George III. But, on the other hand, they had come to Philadelphia in 1787 because their current government, the early version of the U.S. system established by the Articles of Confederation, was so weak that the new nation was sinking into anarchy. So they created a strong executive and a much more powerful central government than the Articles of Confederation had allowed for—and having lived through a revolution, they also understood that governments simply had to exercise exceptional powers in times of emergency.

They made one explicit reference to an emergency power, authorizing the federal government to suspend the right of habeas corpus—freedom from arbitrary arrest—”in cases of rebellion or invasion [when] the public safety may require it.” Nearly 80 years later, when the southern states had denied the authority of the federal government, Abraham Lincoln used this provision to lock up southern sympathizers in the North, and eventually secured the assent of Congress to this measure. He also used traditional powers of a government at war—including the confiscation of enemy property—to emancipate the slaves within the Confederacy in late 1862. With the help of these measures, the North won the war and the Union survived—apparently exactly what the Founders had intended.

When Franklin Roosevelt took the oath of office in the midst of a virtual economic collapse in March of 1933, he not only declared that the only thing the nation had to fear was “fear itself,” but also made clear that he would take emergency measures on his own if Congress did not go along. That spring, the country was treated to a remarkable movie, Gabriel Over the White House, in which the President did exactly that—but as it turned out, the Congress was more than happy to go along with Roosevelt’s initial measures. It wasn’t until his second term that Congress turned against him; then he, like Obama, used executive authority to find new means of fighting the Depression. In wartime he also claimed and exercised new emergency powers in several ways, including interning Japanese-Americans, this time without a formal suspension of habeas corpus. In retrospect both a majority of Americans and the courts have decided that some of these measures, especially the internment, were unjust and excessive, but the mass of the people accepted them in the midst of a great war as necessary to save the country, preferring to make amends later on. Though opponents continually characterized both Lincoln and FDR as monarchs and dictators trampling on the Constitution, those are judgments which history, for the most part, has not endorsed.

As the late William Strauss and Neil Howe first pointed out about 20 years ago in their remarkable books, Generations and The Fourth Turning, these first three great crises in our national life—the Revolutionary and Constitutional period, the Civil War, and the Depression and the Second World War—came at regular intervals of about 80 years. Sure enough, just as they had predicted, the fourth such great crisis came along in 2001 as a result of 9/11. President Bush immediately secured from Congress the sweeping authority to wage war almost anywhere, and claimed emergency powers to detain suspected terrorists at Guantanamo. (Some of those powers the Supreme Court eventually refused to recognize.) The war against terror was, however, only one aspect of this crisis. The other is the splintering of the nation, once again, into two camps with largely irreconcilable world views, a split that has paralyzed our government to an extent literally never before seen for such a long period. Immigration is only one of several problems—including climate change, inequality and employment—that the government has not been able to address by traditional means because the Republican Party has refused to accept anything President Obama wants to do.

The Founders evidently understood that when the survival of the state is threatened, emergency measures are called for. We are not yet so threatened as we were in the three earlier crises, but our government is effectively paralyzed. Under the circumstances it seems to me that the President has both a right and a duty to use whatever authority he can find to solve pressing national problems. Congressional obstructionism does not relieve him of his own responsibilities to the electorate.

David Kaiser, a historian, has taught at Harvard, Carnegie Mellon, Williams College, and the Naval War College. He is the author of seven books, including, most recently, No End Save Victory: How FDR Led the Nation into War. He lives in Watertown, Mass.

TIME Science

A Sheep, a Duck and a Rooster in a Hot-Air Balloon — No Joke

Illustration of a Jean-François Pilâtre de Rozier flight from 'Histoire des Ballons' by Gaston Tissandier Print Collector / Getty Images

Nov. 21, 1783: Two men take flight over Paris on the world’s first untethered hot-air balloon ride

Before subjecting humans to the unknown dangers of flight in a hot-air balloon, French inventors conducted a trial run, sending a sheep, a duck and a rooster up in the air over Versailles.

Anyone who was anyone in pre-revolution France came out for the September 1783 demonstration in the courtyard of the royal palace. According to Simon Schama, the author of Citizens: A Chronicle of the French Revolution, the spectators included King Louis XVI, his wife Marie Antoinette and 130,000 French citizens who, six years before returning to the palace to riot over the scarcity of bread, were drawn by sheer curiosity over how the animals would fare in the balloon’s basket.

The eight-minute flight, which ended in the woods a few miles from the palace, didn’t seem to do the barnyard trio any harm, Schama writes: “‘It was judged that they had not suffered,’ ran one press comment, ‘but they were, to say the least, much astonished.’”

The public was similarly astonished when, on this day, Nov. 21, two months after the sheep and fowl made their historic trip, two eminent Frenchmen went aloft themselves in the world’s first untethered hot-air balloon ride.

Jean-François Pilâtre de Rozier, a chemistry and physics teacher, and the Marquis d’Arlandes, a military officer, flew nearly six miles, from the center of Paris to the suburbs, in 25 minutes. This time, Benjamin Franklin was among the spectators, according to Space.com. He later marveled in his journal about the experience, writing, “We observed [the balloon] lift off in the most majestic manner. When it reached around 250 feet in altitude, the intrepid voyagers lowered their hats to salute the spectators. We could not help feeling a certain mixture of awe and admiration.”

It was more than a century before the Wright brothers lifted the first powered airplane off the ground in 1903, and more than two centuries before another pair — a Swiss psychiatrist and a British balloon instructor — circumnavigated the globe in a balloon in a record-breaking 20 days. This first balloon, rather delicately constructed of paper and silk, and requiring a large supply of fuel to stoke the fire that kept it aloft (but also threatened to burn it down), likely wouldn’t have made it so far.

There were still a few bugs to work out in this novel form of flight. The inventors themselves didn’t quite grasp the physics that made the balloon rise, believing that they had discovered a new kind of gas that was lighter than air. In fact, the gas was air, just hotter and therefore lighter than the air surrounding it.
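For readers curious about the physics the inventors missed, the effect can be sketched with the ideal gas law. The temperatures below are illustrative assumptions for the sake of the arithmetic, not figures from the 1783 flight:

```latex
% Ideal gas law: at a fixed pressure p, air density falls as temperature rises.
%   \rho = pM / (RT), where M is the molar mass of air and R the gas constant.
% Assumed illustrative temperatures (not from the article):
%   outside air  T_out = 288 K (about 15 C)
%   heated air   T_in  = 373 K (about 100 C)
\[
  \rho = \frac{pM}{RT},
  \qquad
  \frac{\rho_{\text{in}}}{\rho_{\text{out}}}
    = \frac{T_{\text{out}}}{T_{\text{in}}}
    = \frac{288\ \mathrm{K}}{373\ \mathrm{K}}
    \approx 0.77
\]
% The heated air is roughly 23% less dense than the surrounding air, so the
% balloon rises once that buoyant deficit exceeds the weight of the envelope,
% basket and passengers.
```

No exotic new gas is required; heating ordinary air is enough to produce the lift.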

Experimenting with different gases ultimately led to the demise of one of the intrepid voyagers aboard the first balloon flight. Pilâtre de Rozier was killed two years later while attempting to cross the English Channel, when his balloon, powered by a combination of hydrogen and hot air, exploded.

Read about the 1999 balloon trip around the world, here in the TIME Vault: Around the World in a Balloon in 20 Days

TIME movies

How a 1960s Literary Trend Brought Us The Hunger Games

Hunger Games: Mockingjay
Murray Close—Lionsgate

Dystopian fiction used to be for adults

As Katniss & Co. get ready to storm movie theaters this weekend with Mockingjay, the latest installment in The Hunger Games series, it may seem like a foregone conclusion that futuristic teenagers will have to battle an oppressive dystopian regime alongside their crushes.

But it wasn’t always that way. As TIME’s Lev Grossman wrote back in 2012 while exploring the history of the teen romance-dystopia genre in books and movies, until the 1960s — notably, with the release of the Tripod series by Christopher Samuel Youd — dystopia wasn’t for teenagers. Books like 1984 and Brave New World are seen as classics of grown-up literature; during the last 50 years, their analogues have usually been meant for teenagers.

But that doesn’t mean that the genre hasn’t changed further during that half-century:

The Hunger Games is every bit as grim as the Tripod books, but it also tells us a lot about how the future, and the present, has changed since the 1960s. Now we have a great tradition of strong female characters in young-adult fiction thanks to writers like Madeleine L’Engle, Judy Blume and Anne McCaffrey. And along with coed dystopias comes, inevitably, romance: it’s understood now that if you’re fighting to save the human race, you’re going to have to deal with a star-crossed crush at the same time. If the Tripod books were published today (they’ve been reissued with covers that make them look like novelizations of the boy’s-own science-fiction cartoon Ben 10), Will Parker would fall for a tough fellow resistance member with a fetching pageboy haircut over her mind-control cap. Or better yet, a Tripod would crack open and disgorge a nubile, sufficiently humanoid alienne.

Read the full article here: Love Among the Ruins

TIME remembrance

Mike Nichols: A Half-Century of Raves

The June 15, 1970, cover of TIME Cover Credit: SANTI VISALLI (NICHOLS); BOB WILLOUGHBY (ARKIN)

He was 'the sort of director whom most writers and actors only meet when they are asleep and dreaming'

When Mike Nichols, who died on Wednesday at age 83, first gained notice, it was not as a director. In 1958, Nichols, then 26, appeared, along with his comedy partner Elaine May, on NBC’s Omnibus revue; within six months, the two were touted by TIME as “the fastest-sharpening wits in television.” The two had met at the University of Chicago and began their dual career as sketch and improv comics in that city, as part of a group that would eventually feed into Second City.

Though they were an instant hit on TV, it was unclear how their comedy would translate to the censored and scripted world of the screen. They found their footing on Broadway instead with An Evening with Mike Nichols and Elaine May, which debuted in 1960. “An Evening with Mike Nichols and Elaine May,” TIME’s critic declared, “is one of the nicest ways to spend one.” From that point on, the rave reviews just kept coming.

By 1962, Nichols was acting in a play written by May, and in 1963 he directed Barefoot in the Park, earning another rave: “If the theater housing this comedy has an empty seat for the next couple of years, it will simply mean that someone has fallen out of it. Barefoot is detonatingly funny.” He would later tell TIME that it had been a turning point, the moment he realized he was meant to direct:

Nichols remembers: “The first day of rehearsal, I knew, my God, this is it! It is as though you have one eye, and you’re on a road and all of a sudden your eye lights up, and you look down and you know, ‘I’m an engine!’ ”

For his next Broadway foray, Luv, he earned the headline “The Nichols Touch” and was called “the sort of director whom most writers and actors only meet when they are asleep and dreaming.” As for 1965’s The Odd Couple, “[t]he only worry they leave in a playgoer’s head is how to catch his breath between laughs.”

In 1966, when he made his first foray into movies with Who’s Afraid of Virginia Woolf?, the performance he got out of star Elizabeth Taylor was deemed “a sizeable victory.” By 1970, when he made a movie of Catch-22, he landed on the cover of the magazine, with a multi-page feature that praised the maturity of the work:

Fully loaded, the bombers take flight, make their lethal gyres and return empty. Under Nichols’ direction, the camera makes air as palpable as blood. In one long-lensed indelible shot, the sluggish bodies of the B-25s rise impossibly close to one another, great vulnerable chunks of aluminum shaking as they fight for altitude. Could the war truly have been fought in those preposterous crates? It could; it was. And the unused faces of the flyers, Orr, Nately, Aardvark, could they ever have been so young? They were: they are. Catch-22’s insights penetrate the elliptical dialogue to show that wars are too often a children’s crusade, fought by boys not old enough to vote or, sometimes, to think.

Despite his much-acclaimed career — which would continue for decades — not every one of his projects won applause.

In 1967, for example, TIME gave one of his films a rare pan. The picture in question “unfortunately shows his success depleted” because “[m]ost of the film has an alarmingly derivative style, and much of it is secondhand” with “a disappointing touch of TV situation comedy.”

But, as befits a comedian by training, he had the last laugh: that movie was The Graduate.

Read the full 1970 cover story here, in TIME’s archives: Some Are More Yossarian Than Others

TIME

The Princess Diana TV Interview That Made History

Princess Diana and Martin Bashir
The Princess of Wales is interviewed by the BBC's Martin Bashir (R) on Nov. 20, 1995 AFP / Getty Images

Nov. 20, 1995: Princess Diana admits, during a TV interview, that she had an affair with her riding instructor

The infidelity itself was hardly news, but when Princess Diana admitted on British TV that she had been unfaithful to Prince Charles, it “plunged the monarchy into the greatest crisis since the Abdication,” the Daily Mail said, referencing King Edward VIII’s 1936 abandonment of his throne to marry a divorced American socialite.

On this day, Nov. 20, in 1995, the BBC aired Diana’s first solo press interview — a stroke of independence made possible by keeping her meeting with BBC’s Martin Bashir secret from Buckingham Palace. She took the opportunity to address the tensions within her marriage, her struggles with postpartum depression and bulimia, and the fact that she had been intimately involved with a man other than the Prince.

Charles had admitted to his own infidelity on a TV documentary the previous year. His longtime affair with Camilla Parker Bowles was no secret when Diana told Bashir, “There were three of us in this marriage, so it was a bit crowded.”

Rumors of her five-year affair with cavalry officer James Hewitt, whom she’d hired as a riding instructor, had already made news as well, partly because he’d profited from the story by collaborating on a tell-all book, Princess in Love. In the interview, Diana said, “Yes, I adored him. Yes, I was in love with him. But I was very let down.”

In her surprise that Hewitt betrayed her, some saw the same naïveté that a young Diana, barely out of her teens, had worn like a veil on her wedding day. Following Diana’s death in a 1997 car crash, Joyce Carol Oates wrote an essay for TIME in which she linked “the drama in the princess’s life” to “her often desperate search for love” and her utter devastation when her husband failed her. Oates wrote:

Of Diana at the time of the wedding, it was said by a former classmate that she was “one of the few virgins of her age around. She was a complete romantic, and she was saving herself for the love of her life, which she knew would come one day.” There is no evidence that Diana would have behaved other than devotedly to her husband and family if she hadn’t been forced to acknowledge that her husband wasn’t only having a clandestine affair with another man’s wife, but had been having this affair for years.

By the early 1990s, when Charles and Diana were already leading separate lives, a biography of Diana revealed that, just a year after her wedding, she had attempted suicide over her suspicions about Charles’ relationship with Camilla. On a 1985 visit to a London hospice, according to TIME, “she let slip a telling comment. ‘The biggest disease this world suffers from,’ she complained, ‘[is] people feeling unloved.’ ”

That was no excuse for airing dirty laundry, however — at least not to the Royal Family. Within a month of Diana’s interview, her press secretary had resigned and the Queen had sent Charles and Diana a letter urging them to divorce quickly.

Read the 1997 special report on Diana’s death, here in the TIME Vault: Death of a Princess

TIME health

Smoking News to Make You Cringe

Stephen St. John—Getty Images/National Geographic Creative

Read TIME's reports from the era when the medical community thought it was O.K. to smoke

Thursday marks the American Cancer Society’s Great American Smokeout (GASO), a nationwide event encouraging smokers to kick the habit.

We know today that cigarette smoking causes serious diseases throughout the body, including lung cancer, diabetes, colorectal and liver cancer, rheumatoid arthritis, erectile dysfunction, age-related macular degeneration, and more. Tobacco use racks up more than $96 billion a year in medical costs, and it’s estimated that 42.1 million people, or 18.1% of all adults in the U.S., smoke cigarettes.

This year marked the 50th anniversary of the historic 1964 Surgeon General’s report, which concluded that smoking caused lung cancer and should be avoided. Before then, public messaging about smoking was depressingly inaccurate. Despite concerns — initially from a small minority of medical experts — the tobacco industry boomed in the U.S., and even doctors considered the effects of cigarettes to be benign.

Here are some examples of tobacco-related beliefs that appeared through the years in TIME Magazine:

1923: In an article about a recent compilation of smoking-related data, TIME was mostly concerned with whether smoking made people more or less brainy: “The outstanding fact of this survey is that every man in the literary group smokes, and the majority of the literary women. Moreover, most of them consider its effects beneficial, and claim that their literary and imaginative powers are stimulated by it.” And later: “From the laboratory data, the author concludes that it is impossible to say that tobacco smoking will retard the intellectual processes of any one person, but in a large group it may be predicted that the majority will be slightly retarded.”

1928: Some experts tried early on to warn about the effect of nicotine, but were met with resistance. In an article about a researcher presenting data on nicotine and the brain, TIME writes: “Many U. S. doctors have contended and often hoped to prove that smoking does no harm. In Newark, N. J., five children of the Fillimon family have been smoking full-sized cigars since the age of two. The oldest, Frank, 11, now averages five cigars a day. All of these children appear healthy, go to school regularly, get good grades.”

1935: Questions began to be raised about the effects on infants, though uptake was limited: “Physiologists agree that smoking does no more harm to a woman than to a man, if harm there be. According to many investigators, the only circumstances under which a woman should not smoke are while she has anesthetic gas in her lungs (she might explode), and while she produces milk for her baby. Milk drains from the blood of a smoking mother those smoke ingredients which please her, but may not agree with her nursling.”

1938: Even if there might be adverse health events for some smokers, not all physicians agreed it was a universal risk: “In step with a recent upsurge of articles on smoking, in the current issue of Scribner’s, Mr. Furnas offers several anti-smoking aids for what they are worth. Samples: 1) wash out the mouth with a weak solution of silver nitrate which ‘makes a smoke taste as if it had been cured in sour milk’; 2) chew candied ginger, gentian, or camomile; 3) to occupy the hands smoke a prop cigaret. For many a smoker, however, this facetious advice may be unnecessary, since many a doctor has come to the conclusion that, no matter what else it may do to you, smoking does not injure the heart of a healthy person.”

1949: By the late 1940s, smoking had become a contentious debate in the medical community: “Smoking? Possibly a minor cause of cancer of the mouth, said Dr. MacDonald. But smoking, argued New Orleans’ Dr. Alton Ochsner, can be blamed for the increase of cancer of the lung. Surgeon Ochsner, a nonsmoker, was positive. Dr. Charles S. Cameron, A.C.S. medical and scientific director, who does smoke, was not so sure. For every expert who blames tobacco for the increase of cancer of the lung, he said, there is another who says tobacco is not the cause.”

1962: More evidence was linking tobacco to cancer, and some groups were trying to get pregnant women to quit because of potential risks to the child, but still: “Some doctors, though, see no direct connection between smoking and prematurity; they argue that the problem is a matter of temperament, that high-strung women who smoke would have a high proportion of ‘preemies’ anyway.”

1964: In a historic move, the Surgeon General’s report officially stated that cigarette smoking causes cancer, giving authority to anti-smoking campaigns. TIME wrote:

The conclusion was just about what everybody had expected. “On the basis of prolonged study and evaluation,” the 150,000-word report declared, “the committee makes the following judgment: Cigarette smoking is a health hazard of sufficient importance in the U.S. to warrant appropriate remedial action.” More significant than the words was their source: it was the unanimous report of an impartial committee of top experts in several health fields, backed by the full authority of the U.S. Government.

Read TIME’s full 1964 coverage of the Surgeon General’s report, here in the TIME Vault: The Government Report

TIME technology

It Took Microsoft 3 Tries Before Windows Was Successful

Microsoft Windows 1.0 AP

Windows 1.0 wasn't exactly a huge win — even with Microsoft Paint helping out

The first version of Microsoft Windows will be nearing the end of its third decade Thursday when it turns the ripe old age of 29 — well past retirement in software years, given that Microsoft officially put it out to pasture in December of 2001. Still, looking back at Windows 1.0 offers exactly what its name implies: a window into how things used to be, and, in a way, how little has changed.

First announced in 1983, Microsoft Windows 1.0 wouldn’t make it to the consumer market for another two years — making it one of the first pieces of software to be dismissed as “vaporware,” a term actually coined by a Microsoft engineer a year before the Windows announcement, as a disparaging title bestowed upon a product that’s announced but never sees the light of day.

Windows 1.0’s big selling point was its Graphical User Interface (GUI), intended to replace MS-DOS-style command prompts (C:\DOS\RUN) with a computing style that looked much more like the multitasking, mouse-click-based computing most of us use today. It also came with software intended to show off its new graphical computing environment with what we’d now call “apps” like “Calendar,” “Clock,” and yes, of course, “Paint.”

Windows wasn’t the first operating system with a GUI as its primary feature. Microsoft rival Apple, for example, beat Windows to that punch by about a year when its Macintosh hit the market in 1984, and other “desktop”-style graphical interfaces were floating around before that. (Late Apple CEO Steve Jobs is said to have gotten a nudge towards the Apple desktop interface after visiting a Xerox facility in 1979.) But Windows 1.0 was marketed as an upgrade for people already running MS-DOS — and, in fact, it ran on top of MS-DOS, so anybody who wanted Windows had to have MS-DOS installed first.

So did Windows 1.0 fly off the shelves? Not exactly. Early reviews panned the product for running far too slowly — not the last time the tech press has made that particular critique. The New York Times wrote that “running Windows on a PC with 512K of memory is akin to pouring molasses in the Arctic.” Many reviews said the speed slowdown only got worse when users ran more than one application at a time — an ability that had been intended as a primary draw. And that weird mouse thing Microsoft insisted Windows users embrace? Lots of people hated it.

Despite those early hiccups, Microsoft didn’t just give up and close Windows — a smart move, given that computers running Windows operating systems now make up about 90% of the market. But not even Windows 2.0, released in 1987, set Windows on its path to world dominance. That spark didn’t come until Windows 3.0, released in 1990 to critical acclaim and widespread adoption, thanks to a redesigned interface and speed improvements. As TIME put it in the June 4 issue of that year, “Microsoft seems to have got it right this time.”

TIME Business

A Historical Argument Against Uber: Taxi Regulations Are There for a Reason

Taxi Rank
Yellow cabs waiting in line at LaGuardia Airport, New York City, in March of 1974 Michael Brennan—Getty Images

The author of a cultural history of the NYC taxi — a former cabbie himself — explains why he believes oversight is necessary

New York taxis used to have a reputation for smelly cars, ripped seats and eccentric drivers. Today, New York cabs are nearly all clean and well-maintained. Drivers don’t usually say much unprompted. The cabs feel safe. In other words, they’re boring. And maybe that’s a good thing, because they’re vying against a polished new competitor.

Uber, the ride-sharing app, has grown explosively in the five years since its inception, challenging established taxi services, expanding its annual revenue to a projected $10 billion by the end of next year and attracting drivers away from its competitors. Uber drivers keep 80% of each fare, while the company takes the remaining 20%. Uber’s cars are mostly slick, clean and easy to hail via the company’s app.

But a big reason Uber has grown so quickly is that it’s not regulated the same way that traditional taxi services are. Uber proponents say it’s about time for monopolistic, overregulated city cab services to be broken up. Riders deserve options, they say, and better pricing, and more nimble technology. Still, the company is no stranger to controversy, most recently over reports of executives abusing the company’s ability to track riders.

And, says one taxi expert, the larger reason to be concerned about Uber is that history shows those regulations were established for good reason.

Graham Hodges is the author of Taxi! A Cultural History of the New York City Cabdriver and a professor at Colgate University — and a former cabbie himself, who cruised New York’s dangerous streets for fares in the early 1970s. Hodges is suspicious of upstarts like Uber and says that the cab industry needs to be regulated.

Hodges’ argument? Taxis are pretty much a public utility. Like subway and bus systems, the electric grid or the sewage system, taxis provide an invaluable service to cities like New York, and the government should play an important role in regulating them. They shouldn’t be, Hodges argues, fair game for a private corporation like Uber to take over and control, any more than an inner-city bus service should be privatized.

Without getting too much into the nitty-gritty of taxi rules, what do passengers get out of cab regulation? Regular taxi maintenance, says Hodges, which taxi commissions like New York’s require. “You want to know you’re getting in a safe cab that’s been checked recently,” he explains. “They’re taking a pounding every day.” Knowing your fare is fixed to a predictable formula is important, too, says Hodges. (Uber does that, though the company’s surge pricing at peak hours can really up the cost.) And you want to know that your driver has had a background check, which established taxi services usually require, so that you can be less afraid of being attacked with a hammer, abducted or led on a high-speed chase, as has allegedly happened on some Uber trips.

Regulations have been around for a long time, Hodges says: “Taxi regulations developed out of livery and hansom-cab regulations from the 19th century. They’re a necessary part of urban transportation. They’ve been that way since the metropolitization of cities in the 1850s. And those in turn are based on a long-term precedent in Europe and other parts of the world. From hard-earned experience, those regulations ensure fairness and safety.”

In the 1970s, when Hodges drove, those regulations ensured that a driver made a decent living, and could comfortably choose his or her own hours. (“I made $75 the first night I was out,” he says. “I felt fantastic.”) The golden days of cab driving, Hodges continues, were even earlier, in the ’50s and ’60s. Think less the seedy New York of troubled men like Robert De Niro’s cabbie in Taxi Driver (1976), and more the omnipresent, wise-seeming driver of Breakfast at Tiffany’s (1961).

“Back then, drivers stayed on for a long time,” says Hodges. “They were beloved. They were culturally familiar. That’s where you get the classic cabbie and someone who was an encyclopedia of the city. Those are guys who dedicated their lives to the job and owned their taxis. They had a vested interest in a clean, well-managed auto that lasted a long time.”

Today, Uber drivers do enjoy some of those benefits. Though they’re hardly known for an encyclopedic knowledge of the cities they drive in, or as cultural touchstones, they own their own cabs and have a lot at stake in driving. What’s more, they get a large cut of each fare and have a lot of freedom. And regulation doesn’t always work the way it’s supposed to: even after the Taxi and Limousine Commission started more closely regulating taxi drivers in the 1970s, riders were often in for a surprise. Taxis were rusty tin-bins and drivers were erratic.

In 1976, TIME offered a sardonic view of the New York cab ride:

A taxi ride is the chief means by which New York City tests the mettle of its people. A driver, for example, is chosen for his ability to abuse the passenger in extremely colorful language, the absence of any impulse to help little crippled old ladies into the cab, ignorance of any landmark destination, an uncanny facility for shooting headlong into the most heavily trafficked streets in the city, a foot whose weight on the accelerator is exceeded only by its spine-snapping authority in applying the brakes. Extra marks are awarded the driver who traverses the most potholes in any trip; these are charted for him by the New York City Department of Craters, whose job it is to perforate perfectly good roadways into moonscapes.

The taxi machines are selected with equally rigorous care. Most are not acceptable until they have been driven for 200,000 miles in Morocco. After that, dealer preparation calls for denting the body, littering the passenger compartment with refuse, removing the shock absorbers, sliding the front seat back as far as it will go, and installing a claustrophobic bulletproof shield between driver and passenger—whose single aperture is cunningly contrived to pass only money forward and cigar smoke back. All this is designed to induce in the customer a paralytic yoga position: fists clenched into the white-knuckles mode, knees to the chin, eyes glazed or glued shut, bones a-rattle, teeth a-grit. To a lesser extent, the same conditions prevail in other taxi-ridden U.S. communities.

In the end, Hodges says, cabbies and passengers have always wanted the same things — “We don’t want to have hyper competition, we don’t want reckless driving, we don’t want drivers about whom we don’t know very much,” he says — and, whether or not it always works perfectly, he believes that history has shown that regulation is the best way to get there.
