TIME Opinion

What We Can Learn From the Bobbi Kristina Brown Tragedy

And why we thought we could save her

I woke up today with two questions on my mind and here is the first: Could we have saved Bobbi Kristina?

Bobbi Kristina Brown died Sunday night at the age of 22 after six months in a coma and a lifetime under a harsh, unforgiving spotlight. As Oprah Winfrey said on Twitter in response to the heartbreaking news: “Peace, At Last.”

Since Whitney Houston and Bobby Brown’s only child was found submerged in her bathtub on Jan. 31, I, and millions of others, have kept watch. We hoped for the best and at the same time raged at the preventable, predictable circumstances that led to her demise.

Each day, information would trickle online, drop by drop from one unnamed source after the other. And each day, the “news” became more and more difficult to decipher. Is she breathing on her own? Why is the family feuding? She’s getting better? No, she’s getting worse? Were drugs found in her system or near her body? Was she abused? Did her boyfriend have something to do with this? Would she make it? Did she already die? Dr. Phil episodes, leaked photos of her body, speculation about her status in hospice, lawsuits and an open letter from Tyler Perry all confirmed that even with death hovering, absurdity and chaos would always swirl around her.

But in all of the confusion, one clear constant remained: The public’s desire to armchair diagnose every one of the 22 years of her life.

Judging from the tragic headlines, columns and private conversations uttered at dinner tables, we all believe that we, as wise, compassionate individuals, knew exactly what went wrong and what it would have taken to give her a life much easier than the one she lived. Despite actually knowing nothing about the ins and outs of her day-to-day reality or of the people who loved her, the wisdom we thought we had is why we are able to shake our heads and wax poetic about the alchemy of counseling, rehab, boundaries and love that she should have received at various points in her life.

This ‘we know better’ attitude is why we are able to scold the adults who surrounded her for allowing her to appear so clearly troubled and broken on a reality show immediately following her mother’s death. It is why we were able to identify the pain on her face in every public appearance and ask “Who is taking care of her?” It is why we give one another the sad, knowing face and utter horrible, insensitive missives like “With those parents, she never had a chance” or “If someone just could have gotten her away from all of that…”

In other words, we believe we know what could have saved Bobbi Kristina. And we say it every chance we get.

But why? Why do we speak with such presumption and confidence? Our intentions are undoubtedly good and sound like the very essence of empathy and humanity. We do indeed wish we could have saved her, but perhaps we also say these things to validate ourselves; because it makes us feel good to think that if given the chance, in some make believe world where we had influence and impact on her life, we — you and I — could have done the right thing and made all the difference.

It is an assertion of powerlessness (she was too far outside of our reach). And that powerlessness, while painful, is also oddly comforting. It preserves our self-righteousness and allows us to sleep at night watching a child struggle so hard and fall so fast. We hold on tight to the idea that her life was out of our control, with the implication that we certainly would have helped if we could have.

But what if that isn’t true?

The truth is that for the children within our own spheres of influence, the ones languishing in foster care, or being harassed online, or being abused in the home down the street, or struggling with grief or depression or addiction or any of the ills that we assume troubled Bobbi Kristina, most of us do nothing. We walk blindly by, ignoring their realities and talking among ourselves about what should be done rather than actually doing it.

Maybe we are not as empathetic as we pretend to be in times of celebrity tragedy.

Because it is much easier to mourn the ones we “couldn’t” save than it is to acknowledge our crushing failure to save the children we could have but didn’t.

Today, Bobbi Kristina is dead, and yes, I am one of the ones praying that she is resting joyfully in the arms of her mother and her God. But for the children who remain, are we just as confident and sure about what they need? Do we know the magic formula to save the Bobbi Kristinas in our own communities — those not separated from us by a glass case of celebrity but who are so close they are practically daring us to reach out and touch them? Do we know what they should be protected from and given? They too are crying out for love, care and all of the collective support and wisdom that we offered Bobbi Kristina in comment sections, on Twitter and from afar.

So the second question on my mind today is this: Can we save them? Or better yet, will we?

TIME Foreign Relations

We Know Why Obama Changed U.S. Policy Toward Cuba. But Why Did Cuba Change Its Policy Toward the U.S.?

Adalberto Roque—AFP/Getty Images Fidel Castro (L) with his brother Raul Castro on Dec. 20, 1999 in Havana

To understand the change we need to acknowledge that Castro has always followed a policy of “revolutionary pragmatism”

History News Network

This post is in partnership with the History News Network, the website that puts the news into historical perspective. The article below was originally published at HNN.

The restoration of U.S. and Cuban diplomatic ties is quite an event, particularly given the hostility that defined relations between the two countries for so long. President Obama’s decision to re-open an embassy in Havana and Raul Castro’s agreement to do the same in Washington continues the thaw in U.S.-Cuban relations. The steps taken by both countries have generated much publicity over the past few months. Numerous U.S. media outlets have produced stories on the implications for Obama’s legacy and the potential fallout for 2016 presidential candidates. As usual, Washington politicians and pundits have focused their attention on the reasons for the U.S. shift. Yet it is not President Obama’s decision to seek normalization that warrants the most attention, but rather the Castro government’s reasoning behind its determination to chart a new course in U.S.-Cuban relations. Much more can be learned by concentrating on what is behind the Cuban leadership’s thinking.

Havana’s recent decisions are deeply rooted in what can best be termed Cuba’s “revolutionary pragmatism.” Though the Castro government continually speaks the language of revolutionary change, it has also taken a sensible view of foreign policy matters when necessary. Such an approach has guided Cuban engagement with the world from the 1960s to the present.

“Revolutionary pragmatism” traces back to the very beginning of the Castro regime. In the years immediately following the Cuban Revolution, for example, a top issue in U.S.-Cuban relations was Fidel Castro’s support for anti-U.S. guerrilla movements throughout Latin America. Castro repeatedly challenged Latin Americans and others around the world to stand up to the United States. He famously declared in 1962 that it was “the duty of every revolutionary to make the revolution. In America and the world, it is known that the revolution will be victorious, but it is improper revolutionary behavior to sit at one’s doorstep waiting for the corpse of imperialism to pass by.”

Yet, privately, Castro proved willing to develop a foreign policy based on practical considerations. On a recent research trip to Cuba I gained access to the Foreign Ministry Archive in Havana and was surprised at what I found. Many detailed reports from the early 1960s discussed the prospects for revolution in Central and South America, but concluded that conditions in many nations were not ripe for radical change. This reality led leaders in Havana to take a more pragmatic approach to Latin America.

The most documented aid came in the form of guerrilla training for young Latin Americans who traveled to Cuba. As historian Piero Gleijeses’s excellent studies demonstrate, Castro turned his attention to Africa as early as 1964. Havana’s decision to abandon any large-scale support for revolutionary groups in Latin America was not made due to a lack of enthusiasm for challenging Washington’s traditional sphere of influence, but owed instead to practical considerations.

Similarly, in the 1980s, when the Sandinista triumph in Nicaragua offered Havana an ally in Latin America, Castro held to “revolutionary pragmatism.” He counseled Daniel Ortega not to antagonize elite economic interests too much. On a visit to Managua, Castro even declared that allowing some capitalism in the Nicaraguan economy did not violate revolutionary principles. He bluntly told Nicaraguan leaders that they did not have to follow the path taken by Cuba: “Each revolution is different from the others.”

Perhaps the greatest illustration of Cuban flexibility was the Castro regime’s response to the collapse of the Soviet Union. In June 1990, after receiving word that aid from Moscow would no longer flow to Havana, Fidel Castro announced a national emergency. He called his initiative “the Special Period in Peacetime.” Cuba welcomed foreign investment, tourism and the U.S. dollar, and allowed small-scale private businesses. While many prognosticators predicted a complete collapse of the Castro regime, the revolutionary government endured due to its ability to adapt.

Thus, recent developments must be viewed within their proper historical context. As it has in the past, Castro’s regime is pursuing “revolutionary pragmatism.”

The impetus for the change in Cuba’s approach stems from several factors. First, since the death of Hugo Chávez in 2013, Venezuela has become a questionable economic ally. Political instability coupled with a crumbling economy has likely caused Havana to view a key economic patron in Caracas as increasingly unreliable. A complete breakdown of order in Venezuela would deal a heavy blow to the Cuban economy. Thus, a better economic relationship with the United States is one way of protecting the island from a changing relationship with Venezuela.

Other reasons for Cuba’s rapprochement with the United States stem from domestic concerns. Since taking power in 2008, Raul Castro has been open to reforms in an attempt to make socialism work for the twenty-first century. Over the last few years the Cuban government has relaxed controls over certain sectors of the economy, but reforms have been slow and halting. Anyone who has spent time in Havana cannot help but notice the aging infrastructure and inefficient public transportation system. A key to any reform agenda is attracting foreign investment, and the United States stands as an attractive partner.

Furthermore, as Raul is poised to step down from power in 2018, Cuba is starting to prepare for a successful turnover. An improving relationship with Washington may help his likely successor, Miguel Díaz-Canel, better navigate the transfer. In sum, at this point, normalization of U.S.-Cuban relations serves Havana’s best interests.

It remains to be seen just how far the Cuban government will go in changing policy. As far back as 2010, Raul Castro declared during a national address that “we reform, or we sink.” His recent push for renewed relations with the United States will likely bring an influx of U.S. tourists and more capital from American businesses. In turn, this could set Cuba down the path of other communist nations that have embraced elements of capitalism, notably China and Vietnam. Just how far Raul will take his reform agenda is an open question.

Ultimately, a U.S.-Cuban thaw is a positive step. Antagonism between the two countries serves no one, especially the Cuban people. Yet, we should not see the recent shifts as merely Washington changing course. The steps taken by Havana are equally important and should be viewed as part of a long history of shrewd diplomacy. While Cuban foreign policy has traditionally been revolutionary in rhetoric, it has proven once again to be pragmatic in practice.

Matt Jacobs received his PhD in History from Ohio University in 2015. This fall he will be a Visiting Assistant Professor of Intelligence Studies and Global Affairs at Embry-Riddle’s College of Security and Intelligence. He has conducted research at the Cuban National Archive and the Cuban Foreign Ministry Archive, both in Havana.

Read more: Why Did the U.S. and Cuba Sever Diplomatic Ties in the First Place?

TIME Opinion

Not Without My Smartphone: The Case for Somewhat Distracted Parenting

Mother holding telephone and hugging daughter
Getty Images

Here's why being on your phone doesn't make you a bad parent

Somebody once told me I treated my smartphone like Wilson, the volleyball Tom Hanks turns into a friend when he’s stranded on a desert island in that movie “Cast Away.” It’s an apt comparison: parenting a toddler occasionally feels like being marooned, and your phone is your only connection to the rest of the world. Thanks to the Internet, moms like me can now get Amy Schumer videos and Instagram to help us survive the monotony.

But fellow parents, there is trouble on the horizon. A growing army of journalists and experts are calling for an end to using phones in front of our kids. They say it makes kids feel less loved, and teaches the wrong lessons about how to use devices.

To quote my three year old: “No. Noooooo. Noooooooooooooooo.”

That phone in my hand keeps me sane, not to mention employed. If anything, I’m writing a new movie for Lifetime called Not Without My Smartphone. Here are all the reasons I’m rejecting this latest round of parent shaming, and why I’m going to keep on cherishing my screen time, yes, in front of my kid:

Parenting can be boring. Brutally, mind numbingly boring. In a dispatch from her fainting couch, Jane Brody of the New York Times writes, “I often see youngsters in strollers or on foot with a parent or caretaker who is chatting or texting on a cellphone instead of conversing with the children in their charge.” Just asking: When was the last time Brody spent an entire morning pushing a stroller around town? It is like watching paint dry. Hell yes I’m going to be on my phone.

I’m not raising a self-centered brat. My daughter’s name is Estee, not Lady Mary, and I am not her valet, at her beck and call. Study after study has shown that making your child the center of her and everyone else’s world will destroy her competence, autonomy and resilience. That blog post I’m reading while my kid gives the State of the Union to her bath toys? It benefits her as much as me. Let her understand that I am not her raison d’etre (and vice versa), and that the world does not revolve around her. Let her have a moment to herself, to come up with a new song or bathtub game without my lavish praise of her every move – research shows that doesn’t help her, either.

My kid could use some space. Catherine Steiner-Adair, a psychologist with a new bestselling book on parenting and social media, attributes a 20% increase in pediatric ER admissions to a spike in screen-distracted parents. But wait: what about all those books telling me to let my kid fail, scrape her knee, and develop independence?

Okay, now I get it: I’m supposed to nag my kid to get down from that ledge and stop trying to catch a bee for the fiftieth time. Why should my daughter learn anything the hard way if I can protect her from ever having to figure out on her own not to touch bees?

There’s no way to win as a parent right now. If I hover, I turn her into an incompetent basket case. If I let go and check my Gmail, I send her to the ER.

I give up.

I have a job. Steiner-Adair told Brody that “parents should think twice before using a mobile device when with their children.” All this parent-shaming is distracting us from the fact that, like the dishwashers of the 1950s, smartphones are labor-saving devices. In 2015, with The Feminine Mystique in our rearview mirrors and nearly 70% of moms working, my phone lets me work remotely.

These experts seem to be implying that I’m spending all my time with The Fat Jewish on Instagram (and, okay, I’m spending some of my time with him, and loving every minute of it). But I can be with my kid precisely because I can still appear to be at work, using that smartphone to respond to emails and calls.

Experts like Steiner-Adair rightly point out the times to put away your phone, like school pickup and dropoff, and meals (and obviously, while driving). And I’m sure there are parents who need to hear this. But I am growing weary of the parent police. All this finger wagging, well intentioned as it is, implies that parents – code for moms – are merely vessels for their children, and should attend to their every last need and feeling at the expense of all else.

If smartphones had been around for women in the 1950s, The Feminine Mystique might never have been written. The depression and ennui of housewives would have been blunted by Pinterest and Facebook. But this is 2015. Devices aren’t going away, for us or our kids. When parents pretend they don’t exist, kids don’t learn how to use them, either. Instead of telling me everything I’m doing wrong as a mom, it’d be nice if someone cut me a break and told me what I’m doing right. It’s enough to make you want to find a volleyball for company.

Rachel Simmons is co-founder of Girls Leadership Institute and the author of the New York Times bestsellers Odd Girl Out: The Hidden Culture of Aggression in Girls and The Curse of the Good Girl: Raising Authentic Girls with Courage and Confidence. She develops leadership programs for the Wurtele Center for Work and Life at Smith College. Follow her @racheljsimmons.


TIME Opinion

How Go Set a Watchman Speaks to Our Time

Harper Lee's Go Set A Watchman Goes On Sale
Joe Raedle—Getty Images The newly released book authored by Harper Lee, 'Go Set a Watchman', is seen on sale at the Books and Books store on July 14, 2015 in Coral Gables, Fla.

Whatever its literary merits, Harper Lee’s second novel sheds more light on our world than its predecessor did

To Kill a Mockingbird was set in the early 1930s, and Harper Lee portrayed Atticus Finch—obviously based on her own father—as a calm, fearless crusader for justice for all, regardless of race. But readers of her just-released novel, Go Set a Watchman, featuring nearly all the same characters but set, this time, in 1955, when Atticus was 72 and Jean Louise, or Scout, was 26, will find Atticus a determined opponent of integration and a racist who believes Negroes (as he and the rest of polite America then called them) are too inferior to share equally in American society—or even, in many cases, to be allowed to vote. A careful reading of the two books in light of 20th-century southern history, however, shows that the contradiction is far more apparent than real.

Read TIME’s review of Go Set a Watchman here

Atticus’ tragedy, which was more fully revealed in the earlier book, was the tragedy of the white south. Since well before the Civil War, many white southerners—even those who recognized that they exercised a tyranny over black men and women—were too frightened even to think of giving that tyranny up, for fear that their slaves or former slaves would take their revenge and treat their former masters the way they had been treated. Ironically, while Mockingbird appeared just as the civil rights movement was hitting its stride, it is Watchman that explains the enormous resistance that movement faced and how it left an enduring mark on American politics. It may be less inspiring than Mockingbird, but it tells us more about the United States not only in the 1950s, but even today.

In Mockingbird, Atticus directly addresses the race question most clearly after Tom Robinson has been convicted of the rape that obviously never took place. “The one place where a man ought to get a square deal is in a courtroom, be he any color of the rainbow,” he says, “but people have a way of carrying their resentments right into a jury box.” Nothing in Mockingbird suggests that Atticus thinks that Negroes should vote, attend the same schools as white children or even, significantly, have their own attorneys to represent them. He is relatively socially liberal: he seems a bit shocked when he finds that Calpurnia, the black maid who has raised his children since his wife’s death, took them to her own church while he was away, but he stands up for Calpurnia when his sister, Aunt Alexandra, wants him to dismiss her. But while he thinks that white people can and must treat Negroes fairly in the law courts, and while he is willing to put his own life on the line to prevent his client’s lynching before the trial, nothing suggests that his views on the broader race question are out of the ordinary for Alabama in the 1930s.

By 1955, the era in which Watchman is set, a bombshell has been thrown into the South: Brown v. Board of Education, the unanimous 1954 Supreme Court decision mandating integrated schools. Change was coming, and Maycomb wasn’t happy about it. Jean Louise is shocked to find her father taking a leading part in a meeting, in the same courthouse where he won a rape case for a black defendant years earlier, of the local White Citizens’ Council—the more respectable alternative to, but often close ally of, the Ku Klux Klan. And later, in the climactic confrontation between father and daughter, Atticus explains that Alabama’s black population simply cannot be allowed to vote, because there are too many of them and they would elect a black government. He also insists that he and his young law associate Henry take the case of Calpurnia’s grandson, a young black man who has run over and killed a local white drunk in his car. But Atticus doesn’t take the case to seek an acquittal: he simply wants to make sure that the NAACP doesn’t hear about it, send in an out-of-state lawyer, and turn it into a cause.

Evidence suggests that it was the white South’s terror over the coming of the Civil Rights Movement that revived the spirit of the Civil War in the 1950s, and led the South Carolina legislature to start flying the Confederate flag over the state capitol in 1962. The terror of federal intervention on behalf of African Americans was sufficient to wipe out a whole species of southern politician, the New Dealer who was liberal on everything but race. The federal government under Lyndon Johnson did desegregate public accommodations and assure black voting rights, but in response, the white South became—and in the deep South, remains—solidly Republican. And horrible though it is, the old spirit was strong enough to inspire the killing of nine black people in a Charleston church, for which Dylann Roof will be tried next year. He is reported to have said, “You rape our women and are taking over our country.” That spirit was not strong enough, however, to keep the Confederate flag flying, and that is a real sign of progress.

Just a few years before a New York editor rejected Go Set a Watchman, William Faulkner, from the neighboring state of Mississippi, published Intruder in the Dust, another tale of a black man unjustly accused of murder who is saved by a couple of courageous whites who discover the real killer. Musing about the situation in the context of mid-century America, Faulkner later estimated that out of 1,000 random “Southerners”—by which he meant white southerners—there might be only one who would actually commit a lynching, but all of them would unite against any outsiders trying to stop one. That, sadly, was not far from an accurate prediction of the white South’s response to the civil rights movement—a response that has reshaped American politics ever since. That is what young Harper Lee documented in Watchman. While she and her editors did the nation and the world a great service by publishing Mockingbird instead, the earlier manuscript gave a much better sense of what the country was up against in the late 1950s—and how we got to where we are today. Now that South Carolina State Senator Paul Thurmond, son of staunch segregationist J. Strom Thurmond, has announced that the Confederate flag does not represent a heritage he is proud of, it seems that Atticus Finch, too, would have no trouble changing his mind with the times.

The Long View: Historians explain how the past informs the present

David Kaiser, a historian, has taught at Harvard, Carnegie Mellon, Williams College, and the Naval War College. He is the author of seven books, including, most recently, No End Save Victory: How FDR Led the Nation into War. He lives in Watertown, Mass.

MONEY Opinion

Millennials Will Outgrow the Sharing Economy

Rep. Kyrsten Sinema
Bill Clark—CQ-Roll Call,Inc. Rep. Kyrsten Sinema, D-Ariz., catches a ride in a ZipCar at the House steps following a vote on Thursday, Sept. 18, 2014.

Young adults aren't buying things like homes and cars because they have lower-paying jobs and heavy student debt, not because they don't value possessions.

What does the Millennial generation’s culture of sharing mean for the future of the U.S. economy?

Millennials use Airbnb to share their homes. They share cars, thanks to Uber Pool and hourly car rental services like Zipcar. They go to Rent the Runway rather than buy an expensive outfit for a big event.

And because many Millennials took longer than expected to get on their feet, they share residences, whether it be with a gaggle of roommates or with their parents.

But now Millennials are growing up financially. More of them are getting jobs. In fact, a recent Pew Research Center analysis showed that as of the first quarter of 2015, Millennials were the largest generation in the American workforce.

With money in their pockets, more of this generation that reached young adulthood around the year 2000 are moving into their own homes. And despite previous assertions that this generation would want to squeeze families into urban apartments, there is now data showing Millennials want to do what so many of their parents did: move to the suburbs.

“The lack of home-buying activity from Millennials thus far is decidedly not because this generation isn’t interested in homeownership, but instead because younger Americans have been delaying getting married and having children, two key drivers in the decision to buy that first home,” says Zillow’s Chief Economist Stan Humphries. “As this generation matures, they will become a home-buying force to be reckoned with.”

And so it follows logically: as Millennials settle into their suburban homes, it might make sense to buy a car – since it’s not realistic to rent Zipcars every day for the carpool to baseball practice. Busy parents will probably not have time to lease clothing for a special event. At a certain point, it just might make more sense to buy the dress and not worry about returning it, especially after little Madison spit up all over it!

Which begs the question: Is this a generation that has permanently adopted a culture of sharing and will continue to do so? Or, as they grow up and evolve into financially stable adults, will they act more like their parents and buy into an ethos of owning a lot of stuff? (Or at least get a little more possessive?)

When Millennials share, it’s because they enjoy having common experiences with friends, not because they don’t want to own the better things in life, according to Morley Winograd and Michael Hais, co-authors of “Millennial Majority”.

As soon as they can afford it, Millennials will become big consumers, according to Winograd and Hais. The key difference will be in how they spend: acquiring what they truly value – from clothes to a car or home – instead of merely visiting a thrift shop for a good deal or renting a dress for a special occasion.

A recent Fannie Mae survey of 18- to 34-year-olds shows 70 percent of Millennials prefer to own their own home, rather than rent, because of the protection from a rent increase, the authors note. In addition, participants in the survey say owning a home is a better investment in the long run.

The reason more Millennials don’t own homes is because they have lower-paying jobs and heavy student debt, not because of any attitudes about personal possessions, according to Winograd and Hais.

And if that theory proves correct, the Millennial culture of shopping more as their resources grow could be great news for the economy.

TIME Opinion

Ellen Pao Was One More ‘Difficult’ Female Executive

Ellen Pao
Eric Risberg—AP Ellen Pao, the interim chief of Reddit, has alleged she faced gender discrimination from her former employer, Silicon Valley venture-capital firm Kleiner Perkins Caufield & Byers

She may not have been the right person to lead Reddit. But that doesn’t mean the deck wasn’t stacked from the start

Take a woman in the middle of an intensely polarizing Silicon Valley gender-discrimination lawsuit and put her in charge of cleaning up a tech company known for its mostly male, highly vocal and often controversial user base. What could go wrong?

You could say it’s no surprise that Ellen Pao is stepping down as interim CEO of the message-board site Reddit. Her short and brutal tenure began last fall and slammed into a wall in May when she announced that the site would begin enforcing antiharassment policies that some of the site’s 164 million, mainly anonymous users believe to be antithetical to the community’s free-speech ideals. (Though a for-profit enterprise, Reddit has grown into a powerhouse because it is largely self-governed.)

The company’s decision in early June to ban five of the site’s notoriously virulent and abusive forums, many of which have been condemned by civil rights watchdog organizations like the Southern Poverty Law Center and various women’s groups for glorifying everything from racism to rape, was not Pao’s alone. The site’s executives, board and high-profile investors realize that the company has to modernize, i.e. become more commercial. Doing that means shining light on the darker corners of the site so the socially enriching part can thrive.

But Pao became the face of change. The controversial, “difficult” female face of unwelcome, unholy change. The resulting clash of an anonymous online army and a perceived lady enforcer is worthy of an HBO epic series.

The announcement about the renewed antiharassment rules, designed to protect individuals from attack, came just a few months after Pao lost her high-profile suit against venture-capital firm Kleiner Perkins Caufield & Byers. In the suit — she is currently appealing the ruling against her — she alleged the company retaliated against her for calling executives out on endemic corporate sexism. The firm, in turn, alleged that she was not promoted because she was “difficult” and not a “team player.”

Sure, Kleiner Perkins didn’t come out looking particularly good either, especially when partner John Doerr was quoted as saying that the most successful tech entrepreneurs are “white, male nerds.” But Pao’s reputation took the biggest hit. So when she told Reddit’s users that they were going to have to shut down five threads accused of fat-shaming individuals, among other nefarious deeds, she might as well have been wielding a flamethrower. Even if Reddit management was united about the rules, it sure looked like mom was coming in to make everyone behave. That did not go over well.

A Change.org petition sprang up in June accusing Pao of ushering in an age of “censorship” and calling her “manipulative.” The document — and the flood of anti-Pao threads on Reddit — argued she had attempted to “sue her way to the top.” Never mind that she has better on-paper credentials than most executives. (She is a Princeton-educated engineer with a Harvard law degree and an MBA.) Nor was she the most controversial, abrasive or difficult boss in an industry known for CEOs who sometimes lack, to put it gently, interpersonal skills.

But the rules are so often different for women at the top. Personality matters, and the margin for error is tiny. Be very good at your job. And also, play nice. When Jill Abramson was fired as executive editor of the New York Times she was described with many of the same adjectives used to vilify Pao at trial. Abramson made a fuss over gender inequities, she was “difficult,” she “challenged the top brass.”

By July 2, when Pao made the mistake of firing a popular female staffer who served as an intermediary with the volunteer moderators, the site’s users were already primed to grab their virtual pitchforks. The petition to get rid of her racked up thousands more signatures, and moderators started shutting down pieces of the site and writing editorials in the New York Times. Pao apologized, not just for the abrupt firing, but also for a general lack of communication with volunteer-forum moderators, a problem that even many of her critics admit predated her tenure.

Then on July 10 she announced she would be stepping down and that co-founder Steve Huffman would return as permanent CEO. She is planning to stay on as an adviser, though in an interview with TIME, the company’s chairman Alexis Ohanian did not clearly define what that actually means. However, in his statement, board member Sam Altman did acknowledge some of the toxic abuse aimed at Pao, saying: “It was sickening to see some of the things Redditors wrote about Ellen. The reduction in compassion that happens when we’re all behind computer screens is not good for the world. People are still people even if there is Internet between you.”

Finding a way to curb those baser impulses without crushing the vibrancy and goodness that exists on the 10-year-old site will now be Huffman’s challenge. It won’t be easy. In reality, the censorship that some users were so furious about barely nicked at the not-so-subtle undercurrents of hate and misogyny. Sure, the repulsive “creepshots” thread is no more, but “CoonTown,” Reddit’s 10,000-subscriber racist community, rife with the N-word, is still there. And at a moment when Southern Republicans are calling for the removal of Confederate flags, fighting to preserve those kinds of forums looks as outdated as it does insensitive.

TIME Opinion

Museums Are Changing. Thank Goodness

Confederate Flag Removed From South Carolina Statehouse
John Moore—Getty Images An honor guard ties up the Confederate flag after lowering it from the South Carolina Statehouse grounds for the last time on July 10, 2015 in Columbia, S.C.

As the Confederate flag debate shows, museums are not just attic storage, but battlefields of history

History News Network

This post is in partnership with the History News Network, the website that puts the news into historical perspective. A version of the article below was originally published at HNN.

2015 is shaping up to be an important year in American identity. We are already well into the 2016 presidential campaign, and every candidate is seeking to shape a unique national narrative that can resonate with Americans, or enough likely registered voters in Iowa and New Hampshire, or at least one billionaire. Even without the election, changing notions of identity are roiling the cultural waters. The Supreme Court stands on the brink of reaffirming or denying (or something in between) recent crucial changes in American life.

Overlooked in this changing notion of American identity is the role of museums. Museums have long occupied a privileged yet marginal space in American culture, somewhere between elite bastions defining high culture and “America’s attic” (a nickname for the Smithsonian Institution). They are objects of civic pride and economic anchors. They are the destination of innumerable field trips and millions of tourists, domestic and foreign. But they also function as historians, as interpreters of the American past. The permanent collections and the special exhibits, the blockbusters and the dusty corners, all of these are statements about what is important about our past.

These museums often produce predictable highlights of the American past, affirmations of an American narrative of founding, growth, and dominance, filtered through the lenses of locality, region, subject, and time. Sometimes these museums touch a nerve, most notably the Smithsonian’s cancelled Enola Gay exhibition of 1995, when curators ran into protests that questioned the exhibit’s planned focus on the use of the atomic bomb rather than Japan’s role in World War II. Sometimes these museums redefine how history is learned and even experienced, such as the United States Holocaust Memorial Museum in Washington, D.C.

And these historical interpretations are not just limited to museums that are explicitly about history. Museums of natural history often reaffirm 19th century American notions of race just as museums of science often reaffirm a 20th century American faith in technology. Art museums, many of them founded by Gilded Age millionaires seeking immortality, often reflect the Eurocentric tastes of the late 19th century. The question of what is worthy of a museum collection is a form of history, one that is less about narrative and more about values.

But of course the world of museums is not static, and new and reconfigured museums of all shapes and sizes shape the historical discourse. The Whitney Museum, long one of New York’s premier yet second tier artistic spaces (it is hard to claim top billing with the Metropolitan Museum of Art and the Museum of Modern Art as neighbors), is seeking to redefine American art and identity with its new home and reconfigured collection. The relocation of the Whitney Museum of American Art not only reflects a changing New York—what was once the meatpacking district and rail yards has been reinvented in the last generation as one of the most desirable neighborhoods in the world, a transformation that the Whitney’s location affirms rather than drives—but offers a bold new vision in defining what is both “American” and “art.” Instead of being physically and metaphorically trapped between the classical and modernist visions of its former Upper East Side neighbors, the Whitney is seeking to redefine American art, a debate that has been unsettled for decades. Paintings, sculpture, and photographs are part of the collection, but so are films, videos, and installations. American-born and foreign-born artists are both represented in the museum, and the subject matter ranges from landscapes to politics. Works recognized as classics will draw visitors, but they will also see and hear commentary on recent events like 9/11 and the 2008 financial crisis. In the process, the Whitney is providing its own take on American history and identity.

The Whitney will be a success. Its significance in shaping the cultural agenda will clearly rise. But recent events in South Carolina demonstrate that years of planning and hundreds of millions of dollars are not the only way to occupy an outsized role in questions of American identity. In the months and years to come, a simple piece of fabric will be central to the meaning of museums and to American identity and history in the South and beyond. The Confederate battle flag is revered by some as sacred, hated by some as profane, and awkwardly embraced as “heritage” by those trying to split the difference. Dylann Roof, his terrible actions, and his website have made an enormous impact on the American psyche. Although there are strong disagreements about the prevalence of racism in contemporary American life, many Americans have suddenly recognized that the Confederate battle flag is a problematic symbol. Politicians hesitated, but suddenly a consensus seems to be forming that the flags should be removed from active civil life, a shocking turn of events considering how deeply the Confederate battle flag has been rooted in the former Confederate states since the Civil Rights Movement.

But if anyone thinks this will be a simple transition, they need to recognize that a crucial part of the consensus is that the removed flags—and in some cases the literal flags that are removed—will be placed in museums. How will this be done? Will new museums be created? How will curators respond? How will the public respond? In 2016, the National Museum of African American History and Culture will open. How will it deal with the Confederate battle flag? Americans will be reminded that museums are not just attic storage, but battlefields of history.

John Baick is a professor of history at Western New England University in Springfield, Mass.

MONEY Opinion

Socially-Responsible Mutual Funds Don’t Add Up

Alain Le Bot—Getty Images Vanguard's FTSE Social Index Fund invests in McDonald's, which is ground zero for paying poverty wages and selling some of the unhealthiest food money can buy.

Mutual funds that invest in socially-responsible companies now have $2 trillion in assets.

Harvard psychologist Steven Pinker has a fascinating idea: People get nicer over time.

Throughout history most societies have become more peaceful and compassionate, more cooperative and tolerant. War, violent crime, discrimination, and tyranny have all plunged over time. It’s “the most important thing that has ever happened in human history,” Pinker writes.

You can see the trend hitting financial markets with the growth of socially responsible investing.

Mutual funds investing in socially responsible companies – those passing a screen of environmental, social, and governance tests – have exploded from $641 billion in assets in 2012 to nearly $2 trillion in 2014.

It’s one of the biggest trends in investing. Investors don’t just want a high return. They want to feel good about investing in socially responsible companies.

But there’s a problem: Everyone has a different definition of what’s socially responsible.

Take a look at some of the companies in Vanguard’s FTSE Social Index fund. This is supposed to be a professionally curated list of America’s most responsible companies:

Bank of America, which has paid $74.58 billion in fines – more than any other company in history – for its role in blowing up the financial system.

McDonald’s, ground zero for paying poverty wages and selling some of the unhealthiest food money can buy.

Pepsi, purveyor of sugar water when one-third of adults are obese. As one study recently found, “sugary drinks kill as many as 184,000 adults each year.”

JPMorgan Chase, which has paid $27 billion in fines for systematically screwing homeowners and rigging currency markets.

Herbalife, which has spent the last two years trying to convince investors and regulators that it’s not a pyramid scheme.

Ameriprise Financial, whose Wikipedia page has an entire section on “critics & controversy” outlining the number of times it’s been fined for conflicts of interest.

Moody’s, which gave perfect ratings to subprime mortgage securities that smashed the global economy.

Tyson Foods, which was caught bribing meat inspectors and has been the target of countless animal cruelty investigations.

UnitedHealth Group, which was called before Congress to explain the practice of rescinding insurance after customers get sick.

Citigroup, famous for inept management and requiring one of the largest government bailouts of all time, plus your standard multi-billion-dollar settlement for destroying the housing market.

Mondelez, which just a few months ago was sued for manipulating the wheat market.

The Forum for Sustainable and Responsible Investment says “there is no single term to describe” what socially responsible investing is. But excluding a company that has paid more fines for its social indiscretions than any other business in history seems like a good start.

Of course, no company is perfect. All of these businesses are run by decent people trying to do well despite an occasional slipup.

But that’s also true for big oil companies, which are portrayed as the antithesis of a socially responsible company. BP caused a spill that sent millions of barrels of oil into the Gulf of Mexico. That’s terrible. But it also supplies cheap oil that everyone reading this article relies on to keep society running smoothly, which is great. Fracking isn’t good for the environment. That’s a social cost. But it also creates tens of thousands of high-paying jobs in economically impoverished areas. That’s a social benefit.

No company is purely good or purely evil, and every investor filters with their own definition of what’s socially responsible. That’s fine – I do it too. If it helps you sleep at night, it’s a win.

But realize that everyone’s own definition is biased and selective. Nothing’s black and white, and even measuring shades of grey is less objective than it looks.


MONEY Opinion

4 Agenda Items Missing From Monday’s White House Conference on Aging

Getty Images

The agenda for the July 13th conference overlooks some of the most pressing issues facing seniors today.

When presidents call Americans together to talk about aging, major change is possible. The first White House Conference on Aging in 1961 played a midwife’s role in the birth of Medicare; the 1971 conference led to creation of the automatic cost-of-living adjustment for Social Security, which has been in place since 1975.

This year’s conference, set for Monday, July 13, could have similar impact in a country facing the challenges of a rapidly aging population.

Unfortunately, I’m not optimistic that this year’s summit will be as productive as past ones have been. While I’d love to be proven wrong, the agenda overlooks too many important issues: rapid diversification of our older population, retirement inequality and assigning a bigger role to Social Security, and finding a way to protect pensions and Medicare.

Also, a failure by Congress to fund the event forced a sharp downsizing, limiting the number of voices that will be heard.

All in all, it’s shaping up as a missed opportunity at a time when aging in America is a growing challenge. In 2050, the 65-and-older population will be 83.7 million, almost double what it was in 2012, according to the U.S. Census Bureau.

Four broad topics will be considered: retirement security, healthy aging, preventing elder financial exploitation and abuse, and long-term services and supports. All are important, but much of the agenda reads like a rehash of ideas the Obama administration has been promoting for years, especially in the area of retirement security.

“The White House can always get a bunch of people together to talk about its own initiatives, but that isn’t the idea behind the conference on aging,” said Paul Kleyman, a longtime observer of trends in aging who was a delegate to the 1995 aging conference hosted by President Bill Clinton. “They’re using a talking points format to say ‘Here’s what we think and want to do,’ without really taking in and assessing what an aging nation is saying needs to be done.”

On the plus side, the agenda highlights the need to eliminate conflicted financial advice, and includes questions about how to better promote healthy aging.

Also up for discussion is how to help people age in place. A recent report from the National Association of Area Agencies on Aging (n4a) found that the biggest challenges seniors face concern inadequate transportation, living independently and finding affordable housing.

“The most frequent calls for help that we hear concern aging at home and staying in the community,” said Sandy Markwood, n4a’s chief executive officer. “That is the goal of most individuals. Rarely do we hear anyone saying, ‘I just can’t wait to go into an institutional setting.’ ”

But so much is missing. For starters, the rising importance of ethnic, non-white and LGBT elders. Kleyman, who directs coverage of ethnic elders at New America Media, noted that the percentage of ethnic and non-white elderly in the 65-plus population will double by 2050, to 42 percent. LGBT seniors, while smaller in total numbers, face discrimination in housing and healthcare.

Longevity Inequality

Another omitted topic: the pressing moral issue of inequality in longevity. White men with 16 or more years of schooling live an average of 14 years longer than black men with fewer than 12 years of education, according to the Centers for Disease Control.

Racial and gender disparities also are evident in wealth and retirement income, another issue that gets short shrift. Instead, we get a rehash of ideas the Obama Administration has been hawking for years now: auto-IRAs at the federal and state levels, better access to workplace saving plan enrollment and simplified required minimum distribution rules.

The discussion of Social Security looks like it will be especially disappointing. The policy brief embraces generalities about “strengthening Social Security” without mentioning the boldest, smartest idea being advanced by the left flank of the President’s own party: expansion of benefits focused on low- and middle-class households. Finding ways to protect traditional pensions? Preserving Medicare as a defined benefit, and defending it against voucherization? Those are nowhere to be found.

The conference should be talking about the upside of aging, along with ways to encourage trends such as encore careers by fighting age discrimination in hiring, getting more employers to support phased retirement and re-thinking how higher education can serve older adults.

Plenty of advocates would like to raise these issues, but most won’t be present due to the funding constraints. Actual delegates will be replaced by an audience of hand-picked dignitaries; everyone else will be relegated to watch parties and submitting questions via social media.

So, let’s get the party started: @whitehouse. Take a wider, more inclusive view of aging in America.

TIME Opinion

How the Declaration of Independence Can Still Change the World

Declaration of Independence, 1776, signed by John Hancock, President of the Congress
Universal Images Group / Getty Images The Declaration of Independence

The key is that its language is inclusive

Three weeks ago Britain observed the 800th anniversary of the Magna Carta, the charter of liberties King John was forced to issue to his barons in 1215. Most contemporary commentaries took the opportunity to point out how far short that document fell of modern principles of justice. It benefited only the great nobles, not the common people; it was not, in any case, fully put into effect for a long time; and it contained some provisions, such as those relating to Jews, reflecting medieval prejudices. As the Fourth of July rolls around once again, some commentators will undoubtedly make similar points about the Declaration of Independence. Yes, the Declaration declared that “all men are created equal,” but it thereby left the female half of humanity out of account. It said nothing about slavery, which then existed in every colony and obviously contradicted its principles. It referred to “merciless Indian savages” whom the King had incited against the colonists. In short, the authors and signatories of the Declaration did not use the language that is fashionable in the 21st century, and thus it is a relic from another time that is irrelevant to our world today.

That view misses two very important points. Earlier generations have revered both Magna Carta and the Declaration because they were critical milestones in the development of modern ideas of liberty and government—milestones that can only be understood in the context of their own times, not according to 21st-century views. More importantly, the authors of the Declaration used universal language which has inevitably led to the extension of the rights and freedoms they championed to more and more of humanity. That language is why the Declaration of Independence still has the power to inspire progress.

Because we have taken the principles of the declaration for granted for so long, we must remind ourselves of how revolutionary they were in 1776. It was “necessary,” Thomas Jefferson and the others wrote, “to dissolve the political bonds” which had connected the Americans and the British, because the royal government no longer met the standards for just and effective government that they themselves were defining. The colonists were acting, they wrote, in the face of “a long train of abuses and usurpations,” acts by the King that in their opinion violated the long-standing principles of British law that had developed over the centuries, and especially since the Glorious Revolution of 1688, during which parliamentary control over the Throne was solidified. The King had refused to allow colonial governments to function properly. He had sent troops to the colonies to enforce his will, and quartered those troops among the population. He had tried to deprive large numbers of people of the right to elect legislators, and much more. But his government—like all governments—did not exercise power by divine right, only insofar as it respected established principles and traditions of liberty. That idea was shortly to set not only the colonies, but much of the western world, aflame.

In its most famous passage, the declaration asserted the ultimate authority of human reason. “We hold these truths to be self-evident,” it said: “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.–That to secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed, –That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it, and to institute new Government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their Safety and Happiness.” Yes, it is true that the colonies had not, and for many years would not, extend all those rights to poorer men, or indentured servants, or slaves—but their language made no exception for any of those categories. Thus, the Declaration established a contradiction between their principles and existing conditions in the 18th-century world. That contradiction was bound to lead to further political struggles. So, although the Founding Fathers referred to “all men”—the Constitution, written 11 years later, generally referred more broadly to “persons”—it was equally inevitable that women would claim their rights as well, and that the logic of the founders’ language would allow that progress, too.

No one understood this better than Jefferson himself. Fifty years later, in the spring of 1826, he was invited, along with the few other surviving signatories, to attend a celebration of the signing in Washington. He began his reply by regretting that illness would not permit him to attend. (Indeed, his remaining ambition was simply to survive until July 4, which is exactly what he and his fellow signatory John Adams managed to do.) Yet he proclaimed the enduring significance of the declaration he had drafted:

“May it [the declaration] be to the world, what I believe it will be, (to some parts sooner, to others later, but finally to all), the signal of arousing men to burst the chains under which monkish ignorance and superstition had persuaded them to bind themselves, and to assume the blessings and security of self-government. That form which we have substituted, restores the free right to the unbounded exercise of reason and freedom of opinion. All eyes are opened, or opening, to the rights of man. The general spread of the light of science has already laid open to every view the palpable truth, that the mass of mankind has not been born with saddles on their backs, nor a favored few booted and spurred, ready to ride them legitimately, by the grace of God.”

And so it was, through most of the rest of the 19th and 20th centuries, on every continent.

The struggle for these principles, however, has proven to be an enduring one. In much of the world reason is once again fighting with superstition, and finds itself in retreat. In our own nation, inequality threatens to create a new aristocracy that will ride upon the backs of the masses. The principles and language of the declaration remain by far the best defense against oppression and superstition. Most importantly of all, it is only upon the basis of impartial principles that new coalitions for justice can form. The Declaration of Independence remains a precious part of our heritage—one which we simply cannot do without.

The Long View: Historians explain how the past informs the present

David Kaiser, a historian, has taught at Harvard, Carnegie Mellon, Williams College, and the Naval War College. He is the author of seven books, including, most recently, No End Save Victory: How FDR Led the Nation into War. He lives in Watertown, Mass.

 
