TIME world affairs

The Day I Discovered My Grandparents Survived a Genocide

Zocalo Public Square is a not-for-profit Ideas Exchange that blends live events and humanities journalism.

A simple school assignment unlocked the horrifying truth behind my family’s moves from Armenia to Lebanon to the United States

I remember when I first learned about the bad thing that happened long ago. It was in the mid-1980s in Tulsa, Oklahoma, where my parents had moved from Beirut shortly before I was born. Like the children of most immigrants, I had become adept at navigating the different worlds I inhabited and at sidestepping the humiliations that often came with being one of the very few students in school whose roots weren’t in Oklahoma (let alone the United States). Sometimes a slight setback would occur (like that terrible day my father put a whole cucumber in my lunch; oh, the pathos I evoked after that ordeal), but for the most part I was staying ahead, being as American as a girl who talked kind of funny and had a weird last name could, deftly incorporating y’alls into my accented speech like a third-generation Oklahoman.

At home, Lebanon permeated the air. I became used to waking up, coming home, and falling asleep to the sounds of CNN, our only lifeline to a Beirut ravaged by civil war. My parents’ worries and sadness were palpable. I would fall into an uneasy sleep, dreading being woken up by the phone. Those awful nights when it would ring, a chill would run down my spine as I tried to make out my parents’ voices, all the while praying that our relatives were okay, that Lebanon was okay.

This was my normal. My feet were so firmly planted in the present that it hardly ever occurred to me that there was something else–something that happened somewhere else, long ago–that had touched my family in ways I could hardly begin to understand. I had my hands full as a six-year-old processing our exile – little did I know we were exiled from our exile.

I learned about it almost by accident. We had received an assignment in school to fill out a family tree. I came home, a bit baffled by the assignment (fill in some names? that’s it?), and became more baffled still when, after asking my parents for help, it turned out that most of those branches on the family tree were going to have to remain blank. I implored my parents to try to remember. I became desperate, begging them to just make up some names. (I was about to receive a lesson in ethics and family history all at once.)

As delicately as they could, my parents told me my mother’s parents were orphaned when they were young. That my mom’s aunt, who helped raise her, was not actually her aunt, but a member of the makeshift family that formed in the Beirut orphanage where my grandparents met and grew up. I remember asking what happened and being told that there had been fighting in a country called Turkey, where my grandparents were born (yet another revelation: they weren’t even from Beirut!). That bad things had happened and many people died but my grandparents survived. That they were little when they were found and rescued and taken to Beirut. I thought about my grandpa. My always smiling, cuddly dede, who only had one eye and whom I loved more than anything. Who wore a beret, snuck me candy bars, and sang funny songs to me while the bombs fell that time we visited Beirut.

It all suddenly became too much. I just wanted to finish my assignment. I asked for just enough information to include in a note for my teacher. And so, I scrawled on the bottom of that half-empty family tree, “I couldn’t fill in all the names because of the Armenian genocide. One million people died but my grandparents survived. You can ask my parents.”

Somehow, along with the war in Lebanon, the genocide was folded into my consciousness, yet another part of my normal. Something bad had happened to people I loved long ago, and that was it. Absent in my conception of the war and of the genocide were those markers so often used to differentiate and to categorize: there was no Christian, there was no Muslim; there was no Turk, there was no Arab. There was only dede and yaya and mez mama and all those others I would never know. I knew I was Armenian, but I couldn’t have told you what that meant. It was something I only understood through the prism of the life I knew at home: the mixture of Armenian, Turkish, and Arabic we spoke; the shish kebab, hummus, and manti we ate; and the Armenian, Turkish, Lebanese, and American pop that we danced and laughed to when my parents finally had enough of CNN. All my happiness and all my sadness existed here, in this unquestioned plurality.

Of course, I couldn’t stay there forever. Eventually, I began to see myself through other people’s eyes and realized the strangeness of our normal. I became painfully aware of the incongruity of all the different parts of myself and of the way my ethnicity, nationality, past, and present appeared to others. I saw a Lebanon of terrorists and belligerent Armenians who couldn’t let go of the past. I became ashamed of my father’s dark skin—that most obvious sign of our “Otherness”—and of my mother’s insistence on asking my teachers why they didn’t teach the Armenian genocide in school. And soon I saw the question mark that follows this genocide wherever it goes and that shows itself in the linguistic acrobatics of politicians and journalists avoiding the g-word.

I first saw it at 13, when I volunteered with my mother at an international fair where the small Armenian community in Tulsa had a booth. Included in the brochure that we were handing out was a small paragraph on the genocide. As I returned to the booth after a short break, I saw a stranger berating my mother, telling her she was lying about the genocide. When she calmly engaged him, I should have been proud. But then, I could only see the humiliation of it. The utter humiliation of having to fight to be believed, of having to fight to be heard. Of knowing your narrative must be silenced in order to keep another’s intact. As I watched her with tears in my eyes, an image of my now deceased dede—whom I missed more than anything—flashed before me and I wanted to hug him, to protect him like he protected me. And it all suddenly became too much. I wanted her to stop—just please, stop. I didn’t want this anymore. I didn’t want to be Armenian, I didn’t want to be Lebanese. I wanted to be something that could just be.

It’s a yearning too many of us know, this desire to be something stripped of implications, of politics, and of history. It’s something you can’t think about too much because it can feel overwhelming, suffocating. In those moments, I close my eyes and think about when I don’t have to be anything other than myself: eating Turkish food, dreaming in English, gossiping in Armenian, cursing in Arabic, singing with Johnny Cash, and going to the football games on Fridays. I grieve for the family I’ll never know, for those empty branches on the family tree. My stomach turns as I think about the horrors they suffered and recoil at the taunting question mark that mocks our pain. And I cry—I cry because I’m sad, because I’m angry, because I still don’t understand.

And then, when it becomes too much, I think about my dede. I can feel his hand in mine. He sings to me that funny song and we laugh.

Sylvia Alajaji is an associate professor of music at Franklin & Marshall College in Lancaster, Pennsylvania. She is the author of the forthcoming book, Music and the Armenian Diaspora: Searching for Home in Exile. She wrote this for Zocalo Public Square.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Education

Why Student Athletes Continue To Fail



The problem’s not the NCAA. It’s players’ expectations of their peers

Seventy-four college underclassmen have been declared eligible for the NFL’s upcoming draft, but Ohio State’s quarterback Cardale Jones won’t be among them. A few days after winning the national championship game in January, Jones shocked fans and football analysts by saying he wasn’t ready to go pro, that it was important for him to graduate from college first. What made the announcement all the more surprising, beyond the fact that Jones may never again be as desirable an NFL prospect as he is the year he won a national championship, was that his previous claim to fame was a notorious tweet posted two years ago in which he complained about the “college” part of being a college football player. He wrote that he’d gone to Ohio State to play football, not “to play school,” and that classes were pointless.

Jones now regrets and disavows that tweet. Earlier this month, he was tweeting that nothing is more important than education, under the hashtag “StudentBeforeAthlete.” It’s hard to know how sincere his attitude adjustment has been, or how sincere his initial dismissal of academics was. What is clear is that Jones and his conversion represent a messaging coup for his university and for the NCAA, which has maintained for decades that its primary goal is to help scholar-athletes receive an education that would prepare them for life beyond sports.

Despite the NCAA’s insistence that it is concerned about student athletes’ academic growth, it often feels as though “student” plays second fiddle to “athlete.” Indeed, on a typical day, a visitor to the NCAA homepage will be overwhelmed by the articles (and videos) about athletics but will not find a single article (or video) about the academic achievements of the athletes.

This also seems to hold true for many of the NCAA’s member schools. The University of North Carolina and Syracuse are just two of the most recent universities to come under the spotlight for academic scandals involving student athletes. UNC offered “no show” classes for student athletes (phantom classes for which students received grades without ever attending), and Syracuse allowed academically ineligible athletes to compete. And while these cases are the ones currently grabbing headlines, they are hardly unique; The Chronicle of Higher Education reports that 20 additional schools are being investigated for academic fraud.

And what about the student athletes themselves? Student athletes tend to take easier classes and get lower grades than non-athletes. This is not only true at schools from power conferences in big-money sports; it has also been observed at Division III liberal arts colleges and Ivy League schools, neither of which even offers athletic scholarships.

It’s tempting to believe that student athletes care only about their sport and not about their schoolwork, as many popular commentators have suggested, and as Ohio State’s Jones once tweeted. But in the dozen years that I’ve been teaching in university settings, that hasn’t been my experience at all. I’ve taught hundreds of Division I student athletes at several different schools, and they have been among the hardest working students I’ve encountered. The student athletes I’ve worked with have viewed their sport as a complement to, not a replacement for, their studies.

My observations were hardly unique. One of my students, Josh Levine, ran a youth hockey clinic and was upset by the widespread perception that the students he worked with did not care about school. After several conversations about the issue, we decided that the only way to find out the truth was to run a study. And so we did, surveying 147 student athletes (including some still in high school) involved in various team sports, from football and basketball to lacrosse and golf, about how much both they and their teammates cared about sports and academics.

Here’s what we found: When student athletes were asked how much they care about athletics, they rated their interest a healthy 8.5 on average, on a scale of 1 to 10. But when they were asked how much they value academics, the average was higher than 9. If anything, the average student athlete cares more about his studies than his sport. #StudentBeforeAthlete indeed.

So why do they underperform in their classes?

One possible and intriguing reason suggested by our study is that student athletes don’t think their teammates take academics as seriously as they do. When asked to assess how much their teammates cared about athletics, the athletes were close, guessing 8.8. However, when asked to evaluate how much their teammates cared about academics, those same athletes guessed only 7.8 – far below the 9+ average.

Why is this important? Because when an athlete thinks that the rest of the team doesn’t care about academics, that athlete tries to fit in by pretending not to care either. In a perverse form of peer pressure, Cardale Jones’s tweet about classes being worthless may be what student athletes tell each other in an effort to fit in, based on the mistaken belief that if they care about academics, they are in an uncool minority.

All of this creates a distressing and self-perpetuating cycle. Tight-knit student athletes will seek ways of fitting into a culture that they perceive as neglecting academics (by defaulting into majors of dubious merit and spending less time doing homework), knowing that their habits are observed by teammates. When their teammates observe those habits, it reaffirms the (false) conviction that caring about academics is an unfortunate aberration, best suppressed.

One of my co-authors on this project, Sara Etchison, has described this process particularly well: “There are student athletes who want to excel in the classroom, but think their teammates would judge them for it, so they study a little less, or take an easier major. And it turns out, that’s how virtually everyone on the team feels, but there’s never an opportunity to realize, ‘Oh wait, all of us really care about what’s happening on the academic side.’”

This is a phenomenon that psychologists call “pluralistic ignorance” – when private preferences differ from perceptions of group norms. It leads people to engage in public behaviors that align more with the perceived norms than with their true preferences. The tragedy is that the norms are false – in reality, everybody would be happier if they just behaved in line with their true preferences.

Pluralistic ignorance has also been shown to underlie the phenomenon of binge drinking on campuses. A study conducted at Princeton University revealed that a majority of students who drank excessively did so not because they wanted to, but because they felt that was what their friends wanted to do. Once they all had a more accurate assessment of what the group norm was, the amount of alcohol consumed declined.

This suggests that helping student athletes do better in the classroom may be as simple as letting them know that their teammates care as much about academics as they do. Many of them care deeply about the education they are receiving, and should care, because financial success in professional sports will elude the vast majority of them.

As the NCAA and the media focus more attention on athletes’ academic performance, one of the best ways to improve the education of student athletes is to give them license to pursue their academic goals by making it clear that their teammates, and society as a whole, support them in their academic endeavors. For this to happen, we will need many more stars like Cardale Jones speaking out about the importance of education, instead of tweeting about the pointlessness of going to class.

Daniel Oppenheimer is a professor of psychology and marketing at the Anderson School of Management at UCLA. He is the author of over 30 peer-reviewed journal articles and several books, including Democracy Despite Itself: Why a System that Shouldn’t Work at All Works So Well. In addition to numerous awards for his teaching and research, he won the 2006 Ig Nobel Prize. He wrote this for Zocalo Public Square.


TIME society

My Lawn Is Worse Than Yours

Damian Dovarganes—AP Gardeners remove grass plants trimmed ahead of planned watering reductions in Beverly Hills, Calif., on April 8, 2015.


What's a Southern California homeowner to do with high water bills, a historic drought, and no consensus on what to plant instead?

Forgive me for bragging, but my front lawn looks a lot worse than yours.

As the drought deepens and the state water board revises its plans for mandatory restrictions this week, California’s lawn culture has flipped, dirt-side up. With outdoor watering being called a society-threatening scourge, your local community pillars, once celebrated for lawns and gardens even greener than their money, run the risk of becoming social outcasts.

On the other side of this flip is your columnist, who is allergic to lawn watering and pretty much all other forms of lawn maintenance. Now, at the dawn of this new and drier California era, I find that I have become—quite unexpectedly and unintentionally—fashionable. Not to mention an accidental avatar of civic virtue. It used to be that if you didn’t keep your lawn a pristine green, you didn’t care. Now, you don’t care if you do.

“More and more people want to move away from having to spend weekends mowing lawns,” Sierra Club California director Kathryn Phillips told KQED recently, thus heralding my own allergy to lawn care as socially progressive. She also said: “It’s sort of a learning moment for all of us.”

And not just because we hate thinking of water as finite, but also because of the fervent devotion so many Californians have to beauty and design. I hope my own story can serve as beacon, parable, and perhaps comfort to those who may be wondering whether life can go on when their green grass turns to dust.

When my wife and I bought our home in South Pasadena nearly four years ago, schools for our little kids—not lawns or drought—were on our minds. The house itself was, and remains, a mess. But we also inherited several lovely fruit-giving trees and an unpretentious Bermuda grass front yard served by an automatic sprinkler system. For our part, we put in grass behind the house where a collapsing garden shed had stood.

Then came the water bills—they were shockingly high, nearly $200 monthly. We cut back on watering to twice a week. We installed low-flow toilets and a new washing machine. But the bills stayed high. The problem, as it turned out, was our small city, which had neglected to update its aging water infrastructure for decades. To replace that failing infrastructure, the city has increased rates more than 170 percent over the past seven years.

So at about this time last year, I stopped watering altogether.

Money was the biggest motivator. Lack of time was another—with three kids and a demanding job, lawn care was never going to be a priority. The drought provided a justification for a shut-off. And my own travels through this water-stressed state, particularly in the Delta and the San Joaquin Valley, reinforced my determination to avoid watering my Southern California lawn.

As a descendant of Okies, I was prepared for the outside to go full Dust Bowl, but that didn’t happen. In back, the new lawn has survived just fine, with some bare patches. (To keep trees alive, I’ve given them bath water). In front, the changes have been dramatic. On the south side of the lawn, the grass still grows, still green, protected by shade from a neighbor’s trees and a magnolia on the street. But the sunbaked north half slowly turned yellow, before giving way to dirt patches. Weeds—some carrying beautiful yellow flowers, some with nasty stickers that hurt my hands when I pull them—have gotten a foothold. Relatives and neighbors agree: My lawn looks awful.

At first, I felt guilty. But that didn’t last. Two people across the street sold homes for well over their asking prices, so clearly my lawn wasn’t hurting property values. My 6-year-old, who has deeply absorbed all the water conservation messages in the California media, began taking note of all the homeowners with sprinklers pouring water onto sidewalks and streets on his short walk to kindergarten; I didn’t want to turn the water back on and risk his wrath. And my bills have come down, though they still remain high by the standards of many Californians—about $70 a month.

Now, with the full force of the State Water Resources Control Board and Gov. Brown’s mandatory 25 percent reduction behind me, I feel pride when I look at what’s outside my front door. When the state disclosed that my city had some of the highest water use rates in the state, and would be required to cut down by 35 percent, my pride swelled into moral superiority. Some of us need an intervention, but not in my household.

Yes, I can hear the horrified screams of the gardeners and the horticulturists and the homeowners associations and the good neighbors across our state: Not watering at all is not an answer! You can’t just let your lawn become an eyesore! I know. I know. The change in lawn culture will require more from me.

But what exactly is required? And how on earth am I supposed to balance my responsibilities to my neighbors, the state water supply, the environment, and the family pocketbook?

After a couple of months of investigating the possibilities, I have no clear answers to those questions.

Official and expert opinions contradict one another. Many water agencies want to pay Californians to take out their turf and replace it with drought-resistant landscaping, which sounds good. Except that the reimbursement rates cover only a fraction of the cost. And if you do what’s most responsible and aesthetically pleasing, it could run $20,000 for even a small lawn like mine, which is about $19,500 more than I can afford to spend on this.

There are some very cheap options, but those typically replace your lawn with unsightly landscaping and hard surfaces that can add to the “heat island” effect of cities. To confuse things further, some experts argue that the right kind of grass, maintained with very low levels of water, can be better for the environment than some drought-resistant landscaping.

Reading the fervent and contradictory advice, one can see that the arguments during this shift in lawn culture will be as much about ideals of beauty and neighborhood as about water. That’s fine, but for the legions of us who don’t care about looks and don’t have time, the water worthies need to get their stories straight and give clear guidance. How do I—cheaply—keep the front of my house presentable and water-wise?

If no answer is forthcoming, I’m perfectly happy to keep the water off. Let others bemoan the eyesore I’ve created. I’ll be celebrating my civic-mindedness.

Joe Mathews wrote this Connecting California column for Thinking L.A., a project of UCLA and Zócalo Public Square.



TIME society

Why I Quit High School Football



We need to realize the game’s dangerous potential and make changes to an ultra-competitive culture

I wore football pads, real pads, like the pros wear, for the first time when I was 12 years old, walking out onto the practice field at the start of my seventh grade season, moms and dads cheering the entire team on before we ran our first wind sprint. I felt so much anticipation before heading out on to that field. I made sure the foam guards fit snug around my knees and thighs and that the contraption wrapped around my shoulders and chest wouldn’t give. I saw the world through a facemask when I put my helmet on, something I’d only experienced through toy helmets up to that point in my life.

All I could think about was how I had waited my whole life for this. I could feel the glory of doing something big, making some catch or tackle in a varsity game in the years to come. I had no idea that I’d be done with football only four years later, that I’d leave the game by my own choice before I finished high school. I quit organized ball my sophomore year without giving any reasons to my coaches or ever really coming up with one for myself.

It wasn’t until the weeks leading up to my graduation from college in 2012 that I found my reason, something I think I knew all along. One of my favorite players, one of the most energetic and enthusiastic linebackers of all time, Junior Seau, was found dead after taking his own life. It sparked a national conversation about the physical toll the game takes on players, especially on professional players who dedicate years to what is potentially killing them. For me, it was a realization of how much I hated the pain everyone experiences playing. Of course, Seau experienced a lot more pain than I ever did. Tests revealed he suffered from CTE, a type of chronic brain damage that was later found in many other players, most likely caused by repetitive hits to the head. But I was finally honest with myself. I quit playing football all those years before because I was afraid of getting hurt. I couldn’t tell anyone at the time, including myself, because of the stigma against quitters, against those who couldn’t take the pain required to play, especially in the conservative part of rural Minnesota where I grew up. There’s so much ritual, so much rite of passage associated with playing junior high and high school football, that quitting is almost akin to leaving a cult.

Given how important football is to my family, I remained a loyal fan of both my college and professional teams, but I never played another down. In a family steeped in football culture, that is especially difficult. My father loves to tell stories about the game, from watching his favorite team, the Browns, as a kid growing up in Ohio to attending some of the greatest games at the Orange Bowl after he moved to South Florida as an adult. Even more, he loves telling stories from his own playing days.

My older brother, who also quit before he finished high school, likes to talk about the time he broke the leg of the coach’s son in a tackling drill during a summer practice his final season. Though he’s in his thirties now, I can see whenever he tells it that the story still carries a sense of achievement for him.

These two men taught me the game: my father, who threw me routes in our backyard and told me to always come down with the ball whenever it was thrown too high, even if it meant a terrifying hit from a defender; and my brother, who chased me all through the house, picked me up, and threw me down wherever he could, always trying to hit me hardest so I’d be ready for the real thing someday.

Yet I have no football stories to tell, at least no stories of personal glory, no big games that I helped win or hard hits I put on other players. The only memories I really have of the game are times I felt terrible pain, times I really questioned why I even played at all.

The first time was eighth grade, in a game we lost by some large margin, which was how most of our games turned out. I ran with the ball towards the sideline when an opposing player dove and caught my shoulder pads at the back just below the neck, what’s known as a “horse collar” tackle. My full sprint stopped immediately, shoulder pads pulled up to my throat, the full weight of the player cutting off the air in my windpipe.

When I popped up in front of my coach, gasping for air, eyes already watering from the sudden pain, he knew immediately that I wanted to sit out the next few plays.

Frustrated by the score, maybe misreading how hurt I actually was, he yelled, “Fine! Get on the bench!”

I walked over, gasping, trying to breathe through thick mucus that had suddenly formed in my throat, what I thought was blood. I started to cry. To this day, I can’t remember if I played again in that game. I do remember an odd wheeze in my throat for a week or so after.

Still, I pressed on for two more years, that dream of playing in a big game taking me back to the first practice each summer, until another incident when I was a player on the junior varsity. It happened during tackling drills we ran as a team with the older juniors and seniors. In one line was the offense that, one by one, picked up a ball and sprinted over three tackling dummies before trying to avoid a single defender on the other end. The defense, in the other line, took turns popping up off their backs after the whistle, trying to tackle the guy coming at them with the ball.

Somehow I paired up with one of the biggest defensive seniors on the team. He was, I remember, always a nice enough guy outside of football, but turned agitated and mean with the excitement of the drill. After the coach blew the whistle, I only made it to the last dummy. Trying to take the last step over, I caught this senior’s shoulder pad in midair just above the chest. My helmet popped half off, the chin guard caught on my throat. The world went black for a second and, once I staggered to my feet, I “saw stars” for the first time in my life. Head ringing, I stumbled back to the end of the line. For what it’s worth, the coach complimented my bravery in getting back up.

A year later, I tried to forget the whole thing as I sat at home alone while another round of summer practices started a new season. I never wanted to feel pain like that again.

Reflecting on this aspect of my past after Seau’s death and the national discussion on head trauma that followed made me realize that I am part of the last generation to play football before its serious injuries, especially head injuries, were ever a consideration. Between 2010 and 2012, years that coincide with what many consider to be the “concussion crisis” in the NFL, participation in the youth football program Pop Warner dropped 9.5 percent, according to ESPN. I graduated high school in 2007. As someone who really does love the game, it’s difficult to see the number of young players entering high school drop across the country. It’s even more difficult to know I was one of them.

But if the game is going to survive, it’s important to realize that people like me do have stories to tell, just not the ones of glory and triumph. The point is not to make accommodations for weaker players, but to recognize the game’s dangerous potential and make changes to an ultra-competitive culture that desperately needs them.

I don’t think you can ever make football entirely safe. I don’t think you ever need to. But a game isn’t enjoyable when it becomes something more than a game, especially something that leaves so much destruction and defeat behind.

Rian Bosse is a graduate student at Arizona State University’s Walter Cronkite School of Journalism. He no longer plays the game, but he enjoys watching and rooting for the Minnesota Vikings. He wrote this for Zocalo Public Square.


TIME politics

Why Ted Cruz’s Campaign Will Break Barriers

Richard Ellis—Getty Images Senator and GOP presidential candidate Ted Cruz answers questions from local media following a town hall meeting on April 3, 2015, in Spartanburg, South Carolina.


Cruz was born in Canada

Go, Ted Cruz!

I am very excited that the senator from Texas is running for president, so that we can rid this country of one of its most pervasive myths: that you need to be born on U.S. soil to be a real American.

Admittedly, that is not why most of Cruz’s fervent backers are excited he’s in the race. Or why donors have already sent his campaign tens of millions. The reasons most of them are excited about Cruz’s candidacy — his aversion to compromise in politics, the centrality of God in his political platform, and his disdain for any sensible immigration reform — are precisely the reasons why I would be horrified to see him actually win the race I am so glad he is running. If Ted Cruz ever became president, I’d be tempted to flee to Canada.

Which brings me back to the one thing I love about Ted Cruz: The man was born in Canada!

If his candidacy is taken seriously, and his qualifications aren’t challenged in any of the primary states he contests, Cruz will be joining Barack Obama and Hillary Clinton in the list of presidential candidates whose campaigns broke barriers for minorities in the political process — in Cruz’s case, for Americans born outside the country.

I am one such “natural-born” American born elsewhere—in Mexico—and it’s been one of my lifelong frustrations to have people question my Americanness and remain utterly ignorant of the fact that you can indeed be born a U.S. citizen outside the country, if born to an American parent. I have nothing but the utmost respect for naturalized Americans who opt to become citizens later in life, but I am not one of them – I was born clenching my blue passport.

Who cares, you might ask, if the only difference between “natural-born” and naturalized Americans — in terms of their rights — is the right to be president? That awkward phrase “natural born” is in the Constitution, listed among the other qualifications for the highest office. Listed, but not defined, which is one of the reasons for all the confusion.

The qualification made its way into the Constitution because the Founding Fathers wanted to prevent their young republic from ever being hijacked by scheming European monarchs. It’s clear from both the prevailing English common law and from the first major law passed by Congress on matters of citizenship in 1790 that “natural-born” citizens included Americans born to an American father in another country. (American mothers, thankfully for me and Sen. Cruz, gained the equal right to transmit U.S. citizenship to their kids by a law passed in 1934.) Federal statutes over time have further defined what it means to be a natural-born American, often requiring a certain period of residency within the United States before an American parent could be entitled to pass on US citizenship to a child born outside the country.

So go on, Senator Cruz (but not too far!), and make everyone understand that you are as American as anyone, qualified (at least on this count) to be our leader. And don’t feel ashamed of your background — tell folks who come to your website where you were born, as opposed to just telling them, as your site currently does, where your mom was born.

Now that I have made clear that I belong in the “natural-born” club, I should add that it is an absurd club. All American citizens should share the same privileges, including the right to lead the nation. It’s shameful that countries like Germany and France are more open to the possibility of a naturalized immigrant becoming their head of state than we are. Can’t we just trust the voters to determine whether presidential candidates are sufficiently American for them?

Andrés Martinez is the editorial director of Zócalo Public Square and a professor at the Walter Cronkite School of Journalism at Arizona State University.


TIME Family

Why Have Kids?



In the midst of rapidly changing family structures, why does childlessness still carry a stigma for women?

It used to be that the Cleavers — dad working an office job, mom raising two boys full-time — were the model American family. But the past several decades have seen dramatic changes — recent studies find that only about half of American adults are married today, compared to around 70 percent in 1960. The share of interracial marriages has doubled since 1980. Thirty-seven states and the District of Columbia now recognize same-sex marriage. More men than ever are becoming single fathers. More mothers are becoming family breadwinners. More children are being born outside of marriage.

A Pew Research Center study from 2010 found that 20 percent of American women now end their childbearing years without having borne a child, compared to 10 percent in the 1970s. During that time, the public has become more accepting of these women, but 38 percent of Americans surveyed for that study felt this trend was bad for society. When it comes to some other changes to the American family — such as marrying someone of a different race or women working outside the home — the public has said in greater numbers that those trends were good for or at least didn’t harm society.

In advance of the Zócalo event, “Why Have Kids?”, we asked a panel of experts: If Americans have come to accept a range of non-traditional family structures, why does a woman’s choice not to have children still elicit skepticism and judgment?

 

Bella DePaulo — We want other people to share the worldviews we care about most

“As long as women bounce around kidding themselves that life is full when alone, they are putting their hedonistic, selfish desires ahead of what’s best for children and society.” That was one reader’s response to a 2002 cover story in Time about women who were choosing to stay single and not have kids. At the time, I was just starting to research my first book on single people and I was perplexed. The reader had no relationship to the women in the story — they were strangers. If these women didn’t have qualms about their life choices, why should this guy get so angry about them?

I hadn’t yet recognized the power of people’s views of the world. Worldviews help us make sense of the world. They can boost our self-esteem, enhance our good feelings, and keep our bad ones at bay. We want other people to share the worldviews we care about the most. When it comes to marriage and family, one of the strongest worldviews is that women are supposed to get married and have kids. And if they do, they will be happier and healthier than everyone else — and morally superior, too.

The “problem,” then, with women who do not follow the culturally valued life course of marrying and having children, is that they are threatening beliefs that people hold dear.

What’s more, it is even worse if they choose not to marry or have kids. For example, research has shown that single people who want to be single are judged more harshly than those who want to find a partner. They are seen as lonelier, colder, less sociable, and more miserable. Even more tellingly, other people express more anger toward them. That irate reader of the Time story was not only irked because he thought the women were stupid, but also because they were happy. How dare they claim that life without marriage or kids is a good and happy life — a life that someone would actually choose!

Bella DePaulo, who has a doctorate in psychology from Harvard University, is the author of Singled Out: How Singles Are Stereotyped, Stigmatized, and Ignored, and Still Live Happily Ever After and the forthcoming How We Live Now: Redefining Home and Family in the 21st Century. Visit her website at www.BellaDePaulo.com.

 

Elaine Tyler May — Women have opted out of motherhood throughout history

Womanhood equals motherhood has long been accepted as the norm for women’s lives. But in fact, throughout history, women have often opted out of motherhood. In the 19th century, for example, the average number of births per woman declined by half—from eight in 1800 to four in 1900. Many women chose not to marry, and even some of those who married chose not to have children. The rate of childlessness was at an all-time high at the dawn of the 20th century, and then dropped to an all-time low after World War II in the midst of the Baby Boom.

Today, more and more women are choosing not to have children for a wide variety of reasons. Women without children are not scorned or pitied to the extent they once were, but a stigma still attaches to women who choose not to procreate. It is way past time for that stigma to lift. American women today lead rich and varied lives, with or without partners, with or without children. It is time to celebrate all the choices women have, and protect their ability to make the choice to have children—or not. Besides, there are many ways to have children in one’s life without giving birth to them or raising them. Just ask any devoted aunt, teacher, doctor, childcare worker, or anyone with children in their lives. As one teacher said proudly, “I’m not childless! I have 400 children!”

Elaine Tyler May is Regents professor of American studies and history at the University of Minnesota. She is the author of several books on women and the American family, including Barren in the Promised Land: Childless Americans and the Pursuit of Happiness.

 

Laura S. Scott — People are ignoring studies that point to happy, regret-free seniors who didn’t have children

Behind all the media attention around baby bumps, intentional single moms, egg freezing parties, and celebrity surrogacy is a belief that the only path to a purposeful and fulfilling life is parenthood, particularly motherhood. If you value the experience of motherhood over all other experiences, you will tend to judge someone who values a different experience.

There is also the persistent belief that, if you don’t have kids, you will regret it and die alone or in a home with 30 starving cats. Everyone chooses to ignore the multitude of studies that point to happy, socially connected, regret-free childfree seniors who are living their dreams and contributing in many creative ways. The lingering stigma is puzzling unless you factor in the judgment, unspoken regrets, and dare I say, envy, from parents who say, “I didn’t think I had the choice!”

We now have the means and opportunity to remain childfree, but we have to have the intent and will to resist the prenatal messaging, peer and family pressure, and be true to ourselves. We also have to have reliable birth control and doctors who believe us when we say, “I don’t want kids, ever! And I will not change my mind and sue you if you perform this tubal!”

We also need to be able to wrap our brains around this question: “If everyone is invited to decide for themselves if they want to be a parent, how does our thinking and our world have to change to allow for that?”

Laura S. Scott is an executive and reproductive decision-making coach, author of Two is Enough: A Couples Guide to Living Childless by Choice, and director of the Childless by Choice Project.

 

Bill McKibben — There’s also prejudice toward people who choose to have just one kid

There’s another choice that yields almost as much skepticism: the decision just to have one child. Surveys show that the biggest reason for having a second kid is so the first won’t be an only child. There may be plenty of good reasons for having a big family, but it turns out that isn’t one of them: all the data show that only kids grow up to be indistinguishable from their peers with siblings. Not spoiled, not crazy. Just fine.

In fact, it’s a perfect example of how easily we’re led astray by prejudice. The “study” that convinced everyone that only children were odd was conducted in the late 1800s, and the definition of “odd” included “very pretty,” “very ugly,” and “very strong.” (It also found that immigrant children were odd; go figure.) The subjects in the study included not just actual only children, but only children in works of fiction.

Happily science has marched on, and so should the rest of us. It’s time that we learned to accept that people, and families, come in many different shapes and sizes; that they face different circumstances and want different things. It’s time, that is, to stop with the judging.

Bill McKibben is a Vermont-based writer whose books include Maybe One: An Argument for Smaller Families.

 

Melanie Notkin — Choose happiness

We have “Mom-opia” in America—the myopic view of motherhood as womanhood. And yet, the latest U.S. Census Report on Fertility shows that 46 percent of women of childbearing age are childless.

This all-women-as-mother view generates “black and white” assumptions about why women make their choices, ignoring nuances and shades of gray. To better understand this cohort of modern women, I worked closely with DeVries Global PR on a 2014 national demographic study entitled “Shades of Otherhood,” inspired by my book Otherhood: Modern Women Finding a New Kind of Happiness. Of the 19 million childless American women ages 20 to 44, over one-third (36 percent) are childless by choice. Some never felt motherhood was for them. Some don’t feel financially secure enough for parenthood. Some enjoy the freedom to live life to what they envision as its full potential. And 18 percent of all childless women are on the fence, having not yet made a choice on motherhood either way.

And then nearly half (46 percent) are involuntarily childless, some by biology, and more often, among the cohort I explore more widely in Otherhood, by circumstance.

The women of the Otherhood are often single, often not by choice, and they choose to wait for love before motherhood.

Still, whatever the reason for childlessness, 80 percent of women in our study said they can live a happy life without children of their own. Moreover, even among those who are childfree by choice, 80 percent are “childfull” — they play an active role in the lives of other people’s children.

Whatever the choices or circumstances of childlessness, the only way to live a meaningful and happy life is to live an authentic life—making the right choice for oneself, not by the measure of what society believes is the “right” choice. And the only one who can make that authentic choice is the woman who chooses. She chooses happiness.

Melanie Notkin is the founder and author of Savvy Auntie and author of Otherhood: Modern Women Finding a New Kind of Happiness. Connect with her at Otherhood.co and @SavvyAuntie.

This article was written for Zocalo Public Square.


TIME Culture

The Serious Business of Pulp Fiction



How paperbacks helped forge our modern ideas about sex, race, and war

Cheap paperback books are like sex: They claim attention, elicit memories good and bad, and get talked about endlessly. The mid-20th century was the era of pulp, which landed in America in 1939.

You could pick up these paper-bound books at the corner drugstore or bus station for a quarter. They had juicy covers featuring original (and sometimes provocative) art, blurring the lines between canonical literature (Emily Brontë and Honoré de Balzac) and the low genres of crime, romance, and Westerns. Even fairly tame cover images grabbed attention. The Unexpected!, Bennett Cerf’s 1948 collection of “high tension stories” for Bantam Books, featured a cover by artist Ed Grant of a woman and a man—both in proper suits, though hers is flaming red—standing horrified in front of an open trapdoor. It hints at the thrills within.

Paperbacks went beyond lurid fiction. They brought high culture and scholarly nonfiction to readers, covering every conceivable taste and topic, from anti-Semitism (James Parkes’ 1946 An Enemy of the People) to works by diplomats, philosophers, physicists, anthropologists, even Sigmund Freud and Stendhal.

At 25 to 35 cents a pop—wages for a night of babysitting or the cost of a pack of cigarettes— paperbacks could be had by anyone, and so enabled teenagers and poor and working people to enter fully into the cultural landscape. Irving Shulman’s The Amboy Dukes was passed among Brooklyn street kids who emulated the gangs rampaging through the novel; bored housewives whiled away hours in suburbia minding house and children by reading paperbacks they picked up at the grocery store. In the 1950s, these books created secret communities of readers who fashioned identities through ownership; women in rural America might come to recognize themselves as lesbians after finding Women’s Barracks by Tereska Torrès on a candy store rack; would-be intellectuals could glimpse William Gaddis’ postmodernism in the New American Library’s New World Writing collection.

Unlike other forms of mass media that could be consumed at home—radio and later, television—paperbacks offered more than ephemeral content. Compact enough to carry anywhere, they were total packages, providing content not only through the text in their pages and illustrations on their covers, but also as objects. The books’ shape, smell, feel, even the sound of their cracking bindings helped to create a rare sense of connection among readers, booksellers, publishers, and authors. E.L. Doctorow, who was an editor at New American Library before becoming a published author, remembers his first deep reading experience plowing through the first 10 Pocket Books published in 1939 while he languished in a hospital bed recovering from a near-fatal disease.

Paperbacks could be serious business. In His Eye Is on the Sparrow (1952), Ethel Waters told her story of fighting against racism, poverty, and abuse to become a breakthrough singer and actress on Broadway and in Hollywood. Declaring her birth, “October 31, 1900, was the date, Chester, Pennsylvania, the place, which makes me, I trust, an American citizen,” she connects her life to that of the century and the nation, making clear that by 1951, even before Brown v. Board of Education, a poor, black woman claimed full citizenship in this country. Daphne Rooke’s Mittee (1953) unveiled South African apartheid through an interracial love triangle involving a white man, a white woman, and her mixed race female servant. The book included a glossary of Afrikaans terminology for U.S. readers to enlarge their vocabulary and open their eyes to distant parts of the world.

Since the publication of my book American Pulp: How Paperbacks Brought Modernism to Main Street, I have had the moving (and somewhat bewildering) experience of hearing very personal stories that confirm my argument about how paperbacks permeated daily life. Dozens of people—total strangers, long-ago boyfriends, friends of friends, collectors, bibliographers—have written me emails and letters, and sent me packages containing pulp pulled from their shelves.

I recently received an email from one reader, who wrote about her relative, Jack Nemec, an immigrant who wrote a dozen kinky titles, including Sin Caravan, The Spy Who Came to Bed, and The Darkest Urge. She learned about these books after his death; some relatives are a “smidge embarrassed that we have a ‘porn’ writer in the family,” she wrote—in fact, they threw out all his books after he died. Nemec’s books are hard to come by these days; she has read only one, Is She a Dyke? Her email concluded that she is a “bit proud that we’re a part of this little part of Americana.”

Her pride zeroes in on the crux of what pulp meant in the middle of the 20th century—and how we might understand it now. These books are a little piece of Americana; they linked the midway to the bedroom. The plot of Nemec’s Easy Sue—there’s one copy available on eBay—features an enormous woman who finds fulfillment working in a carnival where she meets the human giant; he’s well endowed. And yet the book’s design—its sleazy cover and handy shape—resembled the cover of William Faulkner’s novel The Unvanquished. Appealing to every taste, paperbacks were (and still are) trafficked—bartered, bought, exchanged, sequestered, hidden, destroyed … and loved. Because they carry traces of both the illicit and aspirational, they figure as distillations of America’s various dreams.

Pulp elevated working people into writers and collectors and was a quirky means for immigrants to assimilate to American culture. A rural Midwesterner moved to New York City and found paperbacks were the vehicle for his social mobility. The man’s daughter, who lives in the Netherlands, sent me a copy of his memoir. Karl Zimmer became “a Johnny Appleseed from the Heartland [who] spread American books across three continents.” In the 1950s, he drummed Ian Ballantine’s paperbacks to booksellers up and down Manhattan while working “a night job operating a machine that extruded insulation onto electric cable at a factory in Brooklyn.”

Hawking paperbacks was an uncommon route out of the Midwest and into pulp; most entered as consumers, picking something off a rack. A New York artist described in an email how she remembered her parents’ bedroom side tables filled with paperbacks. They bought the books at Pete Bianchi’s soda fountain and tobacco store in Ohio, “where our father used to take us kids for comic books and treats, and where he could peruse the rotating shelves of pulp mysteries.” Those drugstore visits to find books that were unavailable at the local library still inhabit readers’ memories. Encountering pulp was a family affair, undertaken together, in plain view, not hidden in a plain brown wrapper as Nemec’s novels surely would have been. Those books were a part of the landscape of Smalltown, U.S.A. They made us who we are: a pocket-book nation.

Cheap paperbacks helped to forge modern ideas about race, sex, war, science, and much more. During their heyday from the late 1930s to the late 1950s, pulp spread the practice of everyday reading, bringing to a mass audience of avid readers everything from smut to theology, from whodunits to Macbeth. Paperbacks ’R’ Us.

Paula Rabinowitz is professor of English at the University of Minnesota and author of American Pulp: How Paperbacks Brought Modernism to Main Street. She wrote this for What It Means to Be American, a national conversation hosted by the Smithsonian and Zocalo Public Square.


TIME Family

How Do You Talk to Kids About God?



For secular parents, explaining sex is a cinch, but tackling religion can be terrifying

Talking openly with children about sensitive subjects is hard. It always has been. In my parents’ generation, the three-letter taboo was S-E-X. My older sister was 13 when my dad gave one of his kids “The Talk” for the first time. It was the ’80s, and my dad dodged it like any educated man of his time. He tossed her a sex-education book and said, “Read this, but don’t do it.”

Discussing sex isn’t quite so scary today. Many modern fathers don’t flinch when their daughters ask about anatomy or start inquiring about how babies are made. But progressive thinking has a way of replacing certain taboos with others. And today, for a great many parents, there is a new three-letter word: G-O-D.

With two of Western religion’s most important holidays—Easter and Passover—in the air, I find myself thinking back to the first time I had the “God Talk” with my own daughter. Maxine was barely five years old when she piped up from the backseat on the way home from her Los Alamitos preschool one day.

“Mommy,” she said, “you know what? God made us!”

I felt like a cartoon character being hit in the back of the head with a frying pan. My heart raced. I’m quite sure I began to sputter. Visions of Darwin and the evolving ape-man raced through my mind, followed closely by my childhood image of the big guy upstairs in his flowing white robes. I couldn’t speak.

And, in the awkward silence that followed, I was forced to confront the truth: The idea of talking to my kid about God—and, more specifically, about religion—scared the bejesus out of me.

I swallowed hard and forced myself to speak. “Well,” I said, “who is God?”

Now, I don’t remember if Maxine actually said “duh,” or whether she simply bounced a “duh” look off the rearview mirror. But I can tell you that the “duh” message came across loud and clear.

“He’s the one who made us,” she said, her eyebrows knitted.

“Okay… well, what is God doing now?” I tried for casual.

Again with the nonverbal “duh.”

“God is busy making people and babies,” she answered.

This information could not have been delivered with more certainty. My little girl, who had never heard an utterance of the word “God” in our house, aside from decidedly ungodly uses of the word, now had it all figured out thanks to a Jewish classmate who also happened to be her very first boyfriend. I was beaten to the punch by a cute preschool boy.

I let the subject drop, but my chest constricted all the way home. It stayed that way for hours. Why hadn’t I been prepared for this? What was I supposed to say now that she was getting her information from this boy at school?

As a science-minded non-believer with a generally non-confrontational personality, I was stumped by how to handle the situation. I wanted to be truthful about what I believed, but I didn’t want to indoctrinate her into my worldview. And I certainly didn’t want others indoctrinating her into theirs. So where did that leave me? Was I to sit Maxine down and tell her that evolution, not God, was responsible for her existence? Was I to impose my own beliefs on her, the way other parents seemed to be doing? Or should I leave her alone to explore on her own timetable? What was the difference between guidance and pressure anyway? What was I willing to “let” her believe, and what wasn’t I?

Luckily for me, I have a husband who is cool under pressure. Later that day, after I’d rather breathlessly presented him with all the facts of the disastrous car ride, I asked him, “What if she believes in God?” His answer, my wakeup call, has become a mantra I repeat often. He said, “It’s not what Maxine believes, but what she does in life that matters.”

What I took from this was: Relax… it’s just God.

So I set aside my own irrational concerns and began to talk with my kid about God—lots of gods, actually. We talked about Brahman and Buddha, Jesus and Muhammad. My husband bought her a Children’s Bible, and I brought home lots of picture books highlighting aspects of various religious cultures.

To my delight, Maxine became genuinely interested in religion—as long as it came in bite-size pieces, rather than overly long oratories. She became engaged in the stories we told, and good at deciphering the “moral” of each tale for herself. In her hands, the Bible wasn’t a tool of indoctrination, but a tool of religious literacy—even critical thinking. Once, for example, when she was reading the Ten Commandments, she got to the tenth and read aloud: “Never want what belongs to others.” Then she stopped and corrected Moses. “Well, you can WANT what belongs to others,” she said. “You just can’t HAVE it. You can buy one for yourself.”

In the four years that have passed since Maxine first told me about God, we have discussed the subject countless times. I have learned that compassion and an open mind are more important than being right. I’ve also learned that the best way to combat intolerance is with knowledge, and that the best way to combat indoctrination is with critical thinking. No longer is there awkwardness around the subject. We talk about lots of different beliefs, encourage her to learn about what motivates the faith of others, and make clear that there is no shame in choosing an unpopular path. After all, her own parents are happy, well-adjusted, and (I like to think) good-hearted people.

Today, Maxine is 9 and believes in God “two days a week — on Sundays and Wednesdays.” Is that logical or rational? No. But who cares? It works for her, and that’s what’s important.

I haven’t always done everything right. I have stumbled sloppily through more than a few conversations along my own journey and regretted my word choices now and again. (Our unique biases have a way of filtering through from time to time, despite our best efforts.) But, because the conversations keep coming, I’ve almost always had a chance to right my wrongs, to clarify my position, to bring a new perspective to each situation. The point here is not to be perfect—as my daughter says, “That would be boring”—but to give us something to aim for.

Exposing kids to various brands of spirituality and religion (not to mention non-religious philosophies) is not only fascinating and surprisingly fun; it also has the potential to improve our children’s—and our own—awareness of, and compassion for, the many kinds of people in the world. Like the “sex talk,” discussions about God may come up sooner (and differently) than you had pictured. But it’s our obligation to embrace them. After all, if we’re not prepared to explore ideas of God, religion, and faith with our curious children, someone else will do it for us.

Someone cute.

Wendy Thomas Russell is an award-winning journalist and author of Relax, It’s Just God: How and Why to Talk to Your Kids About Religion When You’re Not Religious. Russell hosts a blog called Natural Wonderers at Patheos.com and writes an online column for the PBS NewsHour. She wrote this for Thinking L.A., a partnership of UCLA and Zocalo Public Square.


TIME Education

What Are Universities For?



What will it take to redesign and reinvigorate American higher education?

Pretty much anyone you talk to in America today has an opinion about what’s wrong with our universities. Parents think they’re too expensive. Recent graduates fear being crushed by debt and ending up untrained for the current job market. Professors worry that entering students have not been adequately prepared by their high schools. Economists and sociologists point to troubling studies about a lack of diversity—in both income and race—on American campuses. In Silicon Valley, they talk about MOOCs and STEM, flipped classrooms and gamification. And in Washington, D.C., they talk about federal aid and compliance, Title IX and irresponsible lending.

It’s safe to say that American universities are under fire—for everything from perpetuating inequality to failing to adapt to our digital age. In advance of the Zócalo event “What Are Universities For?” we asked scholars: Does the contemporary university need to be redesigned to address these problems—and if so, how?

Reinvigorate instead of redesigning to chase trends — Jeffrey Schnapp

The contemporary university needs to be reinvigorated, not redesigned simply to promote social justice or “adapt” to the digital age. It already suffers far too much from narrow vocationalism and presentism under pressure to reflect our era’s “needs” or social aims. Reinvigoration means something different: stripping away many of the layers of administration and student services that have porked up university budgets over the past decades; creating flexible structures of governance and new disciplinary taxonomies that free up researcher-teachers to think, explore, experiment, and take risks; replacing the consumerist-managerial ethos that has invaded campuses nationwide with an ethos of in-depth inquiry and serious intellectual play. Most of all, it means collective investment in education, teaching, and research as public goods. That renewed collective investment will be repaid by the sort of porous, centrifugal model of university life that is already emerging around the edges, expressed in everything from MOOCs (when they are well-designed and well-delivered) to a renewed interest in civic or public humanities to efforts to re-conjoin thinking and making (design education, training through problem-solving initiatives, and the like).

Jeffrey Schnapp is faculty co-director of the Berkman Center for Internet and Society, and the founder of metaLAB (at) Harvard, a digital arts and humanities research center dedicated to exploring and expanding the frontiers of networked culture in the arts and humanities.

Change starts on Capitol Hill — Allison Deegan

Reimagining today’s university system necessitates three visionary moves by the federal government:

First, pressure public universities to return to a state funding scheme rather than making students bear the majority of the cost of attendance. Universities lobby effectively for all sorts of programs and privileges. They need to lobby vigorously and deliberately for their students and shift tuition and other costs back to the state, sharing them among all taxpayers. If universities refuse, ban them from participating in federally funded programs. Today, a four-year degree at the University of California costs over $100,000—if you can even get a seat. Most middle-class families don’t qualify for any financial aid. Students and their parents have to borrow that money, risking both the student’s financial future and the parents’ retirement possibilities. This cannot be how public colleges are funded anymore.

Second, revoke the nonprofit status of all private universities that enroll fewer than 10 percent low-income students. I’ve advised student applicants for the past 15 years—low-income students are out there, everywhere. Many private colleges claim they don’t apply. Then go to them! We should require private colleges funded by taxpayers to better reflect our society, and to truly provide opportunity for all hardworking students and their families.

Third, reset all outstanding student loans, up to and including voiding the liability (and any incidental tax consequences) for anyone who has borrowed under misleading or misunderstood conditions—which probably fits most student loan debtors. We would not let teenagers with no income or assets borrow six figures in any other circumstance, because that would be crazy. It’s crazy for student loans too, and profiting from it is disgusting. We need these young people working, contributing to our economy, and raising families. We gain nothing from them sinking into lifelong debt.

Dr. Allison Deegan provides college admissions advising through her consulting company, College App ASAP (www.collegeappasap.com). She also volunteers as leader of the college admissions program for WriteGirl (www.WriteGirl.org), a creative writing and mentoring program for teen girls.

Renew the democratic purpose of higher education — Harry Boyte

The scale and nature of the challenges facing higher education require that we reinvigorate its democratic purpose, a far deeper aim than education as simply a path for individual success. This also requires a much bigger view of democracy itself, now usually seen as simply elections.

It is worth recalling universities that embodied a democratic purpose. James Angell, president of the University of Michigan at the turn of the last century, believed that UM needed to help shape the dynamics of America’s changing democracy with a “democratic atmosphere” full of debate, discussion, experimentation, exchange of views, and wide engagement with society. William James at Harvard, founder of experimental psychology, urged the university to take scientific approaches to enacting democratic values and practices, including cooperative inquiry, free exchange of ideas, and testing of ideas. A democratic view of higher education’s purpose was central to President Harry Truman’s Commission on Higher Education in 1947.

This view of education’s purpose understands democracy as a way of life. Septima Clark, an architect of the civil rights movement’s grassroots citizenship schools, said their aim was “to broaden the scope of democracy to include everyone and deepen the concept to include every relationship.”

Renewing the democratic purpose of higher education highlights the importance of colleges and universities as public spaces for diverse interests and views to find common ground in a sharply divided society. There are many obstacles—Ivory Tower cultures, disciplinary silos, bitter ideological divides. But stories of such democracy-building work exist, sometimes on a substantial scale and led by students, even if those stories are now overshadowed by news about campus racism or sexual abuse. Stories of democratic and empowering change are crucial for the whole society to hear.

Harry Boyte, senior scholar in public work philosophy at Augsburg College, is editor of the recent collection Democracy’s Education: Public Work, Citizenship, and the Future of Colleges and Universities (Vanderbilt, 2015).

Design race-conscious institutions — Estela Mara Bensimon

When the Supreme Court ruled in 2003 that race could be used as a factor in college admissions, the justices supported their decision by arguing that universities are responsible for preparing leaders “with legitimacy in the eyes of the citizenry”; therefore it is “necessary that the path to leadership be visibly open to talented and qualified individuals of every race and ethnicity.”

Yet the data on who goes to college and, in particular, which colleges they go to, make clear that higher education is not behaving as an engine of social mobility. The path to a college degree is bifurcated, with whites disproportionately going to the most elite colleges, and blacks and Latinos disproportionately going to the most underfunded, least selective, and least successful public colleges.

Why is it that the role of higher education in creating and perpetuating racial inequality has not aroused a commitment to an agenda of social justice action? Why is it that liberal leaders in higher education, government, and philanthropy do not embrace racial equity as a goal as heartily as they embrace innovations in technology, online education, and the international expansion of campuses?

Surprisingly, there seems to be more reluctance to talk about race and racism in 2015 than in 1965. Increased racial diversity in the student body, particularly in the less selective and least well funded colleges, along with the growth of diversity-related programs and positions, creates the impression that race is no longer an issue. But although universities are obsessed with numbers—the average SAT scores of incoming freshmen, the money faculty members bring in through grants—most institutions lack equity metrics to monitor their success with recruiting and retaining minoritized students.

To break the cycle of inequality in higher education, college and university leaders have to be race-conscious, and continuously submit their institutions’ practices to the questions: What racial and ethnic groups benefit from our policies? Who is disadvantaged? They should also take notice of the racial composition of their professional networks. Unless leaders hold themselves accountable for racial equity within their institutions, universities will continue to reproduce white racial privilege.

Estela Mara Bensimon is professor and co-director of the Center for Urban Education in the Rossier School of Education at the University of Southern California.

Universities are already changing and adapting — Julie Schell

The popular mantra that American universities are not adapting to our digital age perpetuates dangerous misconceptions about our institutions of higher and postsecondary education. There have been enormous gains in the adoption of learning technologies on American campuses that have facilitated unprecedented student access and transformed the nature of teaching and learning. For example, online Learning Management Systems (LMSs) provide a basic infrastructure that opens up wider access to continuous learning experiences for diverse groups of students—before, during, and after class. LMSs serve as central, online hubs that drive student-content, student-student, and student-faculty interactions. Popular examples include Canvas, Blackboard, and Moodle. Sure, LMSs have limitations, and faculty members do not always leverage them to their full extent. But with the near-universal adoption of LMSs, it is no longer the case that higher education accommodates only 18-to-22-year-olds who are available for classes from 8 a.m. to 3 p.m. and for in-person study sessions in their dorm rooms.

The modern disparagement of American universities as stubbornly resistant to change in the digital age overgeneralizes complex issues with facilities and infrastructure, sometimes on campuses over 300 years old. These issues can make it difficult to engage students in more innovative experiences in brick-and-mortar classrooms. But on every campus you can find clusters of bright spots—faculty and administrators who are incorporating cutting-edge technologies, pedagogies, and research tools to catalyze widespread change in the 21st century. Until we begin to pay serious attention to these bright spots, we will stay stuck in a roundabout of critiques of American higher education that have been recycled nearly verbatim for centuries.

Julie Schell is a clinical assistant professor and director at The University of Texas at Austin. She is a former fellow in the Mazur Group at Harvard University and a specialist in pedagogical innovation. Visit her website at www.julieschell.com and follow her on Twitter @julieschell.

Build colleges that prepare students for the economy — Lara Couturier

A widening wealth gap and declining economic mobility are among the most damaging trends facing the U.S. Over the last three decades, the top 0.1 percent’s share of the nation’s wealth has tripled—from 7 percent to 22 percent. Between 2010 and 2013, the net worth of the typical African-American family declined by one-third. As beneficiaries of significant taxpayer support, colleges and universities must help tackle this crisis.

Colleges and universities wield tremendous power: They determine which students learn skills needed for success in today’s economy. As such, they are uniquely positioned to ensure that low-income students, students of color, displaced workers, and underprepared students learn effectively, earn credentials, and connect to jobs that offer financial and career advancement. But college completion rates are unacceptably low. To meet the challenge, all but a few need to redesign almost everything they do.

Erase equity gaps: Work with high schools to ensure all students are prepared. Analyze student outcomes by race and income and respond: For example, every college should know when and why young men of color walk out its doors.

Ensure student success at scale: Create a visionary design for cross-institutional reform that integrates multiple interventions known to improve student outcomes: For example, create pathways for students that provide the clarity, structure, and guidance they need to complete their degrees.

Link students to careers: Collaborate with employers and community partners to ensure students are ready for work and programs lead to real jobs.

Nothing short of full transformation will enable colleges to help end the downward spiral of American social mobility, and we have seen that it is possible.

Lara Couturier is director of postsecondary state policy at Jobs for the Future (JFF). Prior to joining JFF, she served as interim director at the Futures Project: Policy for Higher Education in a Changing World. She holds a Ph.D. in history from Brown University.

This article was written for Zocalo Public Square.


TIME Science

You’re Not Fooling Anyone With Your Pretend Laughter



A fake laugh is produced with a slightly different set of vocal muscles controlled by a different part of our brain

Why do we laugh? The obvious answer is because something is funny. But if we look closer at when and how laughter occurs in ordinary social situations, we see that it’s not so simple. Like most aspects of human behavior, laughter is complicated.

Scientists are learning about not only the ways in which people hear and categorize laughs, but also how human laughter relates to similar vocal behaviors across the animal kingdom. We have now uncovered many clues about the origins of this fascinating and ubiquitous behavior: While laughter might seem on the surface to be about jokes and humor, it turns out that it’s really about communicating affiliation and trust.

And then it gets tricky.

In one line of research in my Vocal Communication Lab at UCLA, we have been playing recorded laughs to listeners and asking them: is this laugh “real” or “fake”? Our recorded laughs were taken from real conversations between friends in a laboratory setting, or produced on command, also in the lab. Listeners were able to tell the “real” laughs from the “fake” laughs about 70 percent of the time. That means a substantial share of fake laughs still passed as genuine. Why are people falling for the fake laughs? Before we can answer that, we need to address where laughter came from in the first place.

Laughter in humans likely evolved from play vocalizations in our primate ancestors. We see related vocal behaviors in many primate species today, as well as in rats and dogs. Scientists have described these play vocalizations as having evolved from labored breathing during play. If one animal bites another, the bite could be taken as an attack—but if the biter signals through its panting that it is just playing, the play can continue without being interrupted by an unnecessary real fight.

So how does this relate to human laughter? By comparing traits in different species, and then incorporating what is known about the evolutionary connections between those species, we can estimate how old a trait is and how it has changed over evolutionary time. Compared with the breathy panting of our primate relatives, our laughs have become longer, and the sound has more of a tone, including the stereotyped vowel sounds we all know: the human “hahaha.” Of course, human laughter can be composed of many sounds, including snorts, grunts, and hisses. But when we produce the classic “hahaha” sound, we are revealing the action of an ancient emotional vocal system shared with many species, and that has important consequences for how people appraise a laugh, and the laugher, at that moment.

Laughter triggers the release of brain endorphins that make us feel good, and it reduces stress. There is even evidence that we experience a temporary slight muscle weakness called cataplexy when we laugh, so we could be communicating that we are unlikely (or relatively unable) to attack. But laughter is not always made in fun, and can be quite hurtful (e.g., teasing). Laughter is a powerful signal with huge communicative flexibility.

We were laughing before we were talking, just as we were crying, screaming, gesturing, and making other nonverbal signals. But we did eventually learn to talk, developing fine control over our breathing to regulate it for speech and better motor control over our larynx, lips, and tongue. These innovations afford us the ability to be vocal mimics. As this skill developed for speech production, the ability to imitate other non-speech sounds came rather quickly. Suddenly, the fake laugh was born.

A fake laugh is produced with a slightly different set of vocal muscles, controlled by a different part of our brain. The result is that fake laughs carry subtle features of speech, and recent evidence suggests people are unconsciously quite sensitive to them. For example, if you slow down a “real” laugh by about two and a half times, the result is strangely animal-like. But when you slow down human speech, or a “fake” laugh, it just sounds like human speech slowed down.

The ability to be a good faker has its advantages, so there has likely been evolutionary pressure to fake it well, with subsequent pressure on listeners to be good “faker detectors.” This “arms race” dynamic, as it’s called in evolutionary biology, results in good fakers, and good fake detectors, as evidenced by many recent studies, including my own.

The reasons we laugh are as complicated as our social lives, and relate closely to our personal relationships and communicative strategies. One focus of researchers now is trying to decipher the relationship between specific sound features of our laughs—from loud belly laughs to quiet snickering—and what listeners perceive those features to mean. For someone studying the evolution of human communication, there are few better subjects. And it’s no joke.

Greg Bryant received his Ph.D. in psychology from UC Santa Cruz in 2004 and is now an associate professor in the Department of Communication at UCLA. He is interested broadly in the evolution of communication and cognition, and has published on a variety of topics exploring the role of the voice in social interaction. He wrote this for Thinking L.A., a partnership of UCLA and Zócalo Public Square.

