TIME privacy

International Hacking Ring Charged With Theft of Xbox Software and Data

Hackers also allegedly stole software used by the U.S. Army to train military helicopter pilots

Four members of an international hacking ring were charged with the theft of over $100 million worth of software and data related to the Xbox One console, the Xbox Live online gaming service and other technologies, the Department of Justice announced Tuesday.

The hackers were also charged with stealing data from the unreleased video games Call of Duty: Modern Warfare 3 and Gears of War 3, as well as the U.S. Army’s proprietary software used to train military helicopter pilots, the statement said.

Between Jan. 2011 and March 2014, the four men allegedly hacked into the computer systems of video game makers Microsoft, Epic Games and Valve Corporation, according to court documents. They also allegedly stole software from the U.S. Army and Zombie Studios, which produced helicopter simulation software for the Army.

Two of the charged members, whose ages range from 18 to 28, have already pleaded guilty to charges of copyright infringement and conspiracy to commit computer fraud.

“As the indictment charges, the members of this international hacking ring stole trade secret data used in high-tech American products, ranging from software that trains U.S. soldiers to fly Apache helicopters to Xbox games that entertain millions around the world,” said Assistant Attorney General Caldwell.

Three of the hackers are Americans, while one is Canadian, the Department of Justice said. Officials believe the Canadian’s guilty plea marks the first time a foreign individual has been convicted of hacking into U.S. firms to steal information.

“The American economy is driven by innovation. But American innovation is only valuable when it can be protected,” Caldwell said. “Today’s guilty pleas show that we will protect America’s intellectual property from hackers, whether they hack from here or from abroad.”

TIME Security

Londoners Unwittingly Exchange First Born Children For Free Wi-Fi

Users signed an agreement that included a "Herod Clause," in an experiment designed to show the dangers of unguarded Wi-Fi hotspots

Not reading the small print could mean big problems, as a handful of Londoners who accidentally signed away their first born children in exchange for access to free Wi-Fi recently found out.

An experiment organized by the Cyber Security Research Institute was conducted in some of the busiest neighborhoods in London and intended to highlight the major risks associated with public Wi-Fi networks.

In June, researchers set up a Wi-Fi hotspot that promised network access to users who agreed to a set of terms and conditions. These included a “Herod Clause” offering free Wi-Fi if the user agreed to hand over their eldest child “for the duration of eternity.” The page was disabled after six people signed up.

Finnish security firm F-Secure, which sponsored the research, said it had decided not to enforce the clause. “As this is an experiment, we will be returning the children to their parents,” wrote the Finnish company in its report. “While terms and conditions are legally binding, it is contrary to public policy to sell children in return for free services, so the clause would not be enforceable in a court of law.”

The company urged people to take Wi-Fi security more seriously. Sean Sullivan, security advisor at F-Secure, told The Guardian: “People are thinking of Wi-Fi as a place as opposed to an activity…You don’t do unprotected Wi-Fi at home, why are you doing it in public?”

[The Guardian]

TIME privacy

The FBI and NSA Hate Apple’s Plan to Keep Your iPhone Data Secret

Apple Inc. Launches iPhone 6 And iPhone 6 Plus Smartphones In Madrid
A man shows his new iPhone outside Puerta del Sol Apple Store as Apple launches iPhone 6 and iPhone 6 Plus on September 26, 2014 in Madrid, Spain. Pablo Blazquez Dominguez—Getty Images

Apple made the iPhone 6 pretty difficult to crack. Law enforcement isn't happy about that

Apple released the iPhone 6 with a new, powerful encryption setting that should make it much harder for law enforcement and surveillance agencies like the FBI and the NSA to access users’ emails, photos and contacts. After the Edward Snowden revelations last year, privacy-minded users may be happy about the new feature, but the law enforcement community is decidedly not.

Speaking at a news conference Thursday, FBI Director James Comey criticized Apple’s encryption, which scrambles information on the new iPhone 6 so thoroughly that it could take “more than five-and-a-half years to try all combinations of a six-character alphanumeric passcode with lowercase letters and numbers,” as Comey said.
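Comey’s figure is easy to sanity-check. A six-character passcode of lowercase letters and digits allows 36 choices per position, and iOS’s key derivation is designed to impose a delay of roughly 80 milliseconds per guess (the per-guess delay is an assumption drawn from Apple’s security documentation, not from this article). A rough sketch:

```python
# Back-of-the-envelope check of the "five-and-a-half years" figure.
# Assumptions (not from the article): 26 lowercase letters + 10 digits
# per position, and ~80 ms of enforced key-derivation delay per guess.
combinations = 36 ** 6                      # 2,176,782,336 possible passcodes
seconds_per_guess = 0.08                    # ~80 ms per attempt
worst_case_seconds = combinations * seconds_per_guess
years = worst_case_seconds / (365.25 * 24 * 3600)
print(f"~{years:.1f} years to try every combination")
```

The arithmetic lands at roughly 5.5 years in the worst case, matching the figure Comey quoted.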

Comey accused Apple of creating a means for criminals to evade the law, the New York Times reports. “What concerns me about this is companies marketing something expressly to allow people to hold themselves beyond the law,” he said.

In kidnapping cases, when seizing content on a phone could lead to finding a victim, Comey said there would be times when victims’ parents would come to him “with tears in their eyes, look at me and say, ‘What do you mean you can’t'” decode the contents of a phone, the Times reports.

A senior official told the Times that terrorists could use the iPhone 6 to store their data and evade law enforcement. “Terrorists will figure this out,” along with savvy criminals and paranoid dictators, one senior official predicted. Another said, “It’s like taking out an ad that says, ‘Here’s how to avoid surveillance — even legal surveillance.'”

However, major U.S. tech companies like Apple and Google argue that they can’t do business if customers believe their data isn’t secure, particularly in foreign markets like China and Europe, where consumers fear American tech products might come pre-loaded with ways for American surveillance agencies to access their data. On top of that, a security expert told the Times that law enforcement complaints about Apple’s encryption were likely exaggerated, since access to call logs, email logs, iCloud, Gmail logs, geolocation information from phone carriers like AT&T and Verizon Wireless, and other data remains relatively unfettered, particularly if police get a warrant.

[NYT]

TIME privacy

Here’s How Much Access Facebook Employees Have to Your Account

Facebook Homepage
Nicholas Kamm—AFP/Getty Images

Answer by Joe Sullivan, Chief Security Officer at Facebook, on Quora.

I’m Facebook’s Chief Security Officer and I oversee data security at the company. Thanks for the question. We take our role as stewards of people’s information very seriously and have invested heavily in protecting the data entrusted to us.

There is no “skeleton key.” In fact, we have advanced internal tools that restrict access to information to only those employees who need it to do their jobs (e.g., investigating user reports).

There is a cross-functional group of employees who work on these safeguards and on ensuring appropriate access to these tools.

Most employees do not have access, and those who do must sign an agreement and complete a training program before using our internal tools. Finally, we track the actions performed through internal tools.

Each use is logged and requires the employee to explain the purpose of his or her use, and we audit all of this regularly.

Neither Mark nor any other senior executive at the company has been granted tool access, because their roles in the company do not require it.

This question originally appeared on Quora: Does Mark Zuckerberg or Facebook employees have a skeleton key granting them access to every member’s Facebook profile page and information?

TIME Security

Experts Say ‘Bash’ Bug Is a Major Vulnerability But Not a Major Threat

computer virus
Getty Images

Cybersecurity experts explain why the Bash bug might actually not be as risky as the Heartbleed bug discovered earlier this year

When the Heartbleed software bug was disclosed in April, there was no shortage of publicity about its risks and the defensive measures against it, and for good reason. And the Bash bug, discovered Wednesday, is prompting similar widespread fear. The security flaw is named after a vulnerable piece of software, Bash, that’s built into several ubiquitous operating systems, including Apple’s Mac OS X.

“People were taking Heartbleed very seriously,” said Jim Reavis, CEO of cybersecurity firm Cloud Security Alliance. “If people don’t take Bash seriously, it’ll become a self-fulfilling prophecy.”

Cybersecurity experts like Reavis don’t doubt that the Bash bug is dangerous: it is, and it needs urgent attention. The afflicted Bash software, first released in 1989, is open-source software that was built into the Linux and Mac OS operating systems and then widely integrated into many corporate and personal computer programs, experts said. Preliminary estimates say it could impact up to 50 percent of Internet-connected servers, according to Darien Kindlund, director of threat research at FireEye, a network security company.

“Bash is yet another type of open source software that has been reused, repurposed,” Kindlund said.

But the threat posed by the Bash bug—it could theoretically remotely command computers and extract private information—is overblown, cybersecurity experts told TIME. Average computer users aren’t likely to be directly targeted by hackers, experts said. And for the vulnerability to be triggered, the attacker would need to deliver content to the user, and then get the user to execute Bash with that content, according to Kindlund. Normal web browsing, emailing or other common activities do not involve calling Bash. What average users should be worried about are more traditional hacking techniques, like phishing emails and links to malicious websites, said John Gunn of VASCO Data Security.
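The delivery requirement Kindlund describes can be illustrated with the widely circulated probe for the bug (CVE-2014-6271): an attacker-controlled environment variable that a vulnerable Bash mistakenly imports as a function definition and then executes. The sketch below is illustrative and not from the article; it only prints “vulnerable” when run against an unpatched Bash.

```python
import subprocess

# Classic Shellshock probe: a function-style environment variable with
# trailing commands. A patched Bash ignores the trailing
# "echo vulnerable"; an unpatched Bash executes it on startup.
env = {
    "PATH": "/usr/bin:/bin",
    "x": "() { :;}; echo vulnerable",
}
result = subprocess.run(
    ["bash", "-c", "echo test"],
    env=env, capture_output=True, text=True,
)
if "vulnerable" in result.stdout:
    print("this bash is vulnerable to CVE-2014-6271")
else:
    print("this bash appears patched")
```

Note that the attacker still has to get such a variable into Bash’s environment, for example through a CGI web server that copies request headers into environment variables, which is why ordinary browsing and email do not trigger the bug.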

“There are so many other methods that have a high degree of success that would take priority over [Bash as a hacking tool],” Gunn said. “The vulnerability really exists for large organizations that may have servers running Linux.”

Companies whose web servers aren’t updated internally on a frequent basis may be most at risk because they continue to use old technology, according to Kindlund. Some companies that still store private data on Internet-facing servers (an outdated practice, as it makes sensitive information more vulnerable) or do not have strong security may be vulnerable as well, but they can take precautions by inspecting each of their Linux-based servers, said Tanuj Gulati, CTO of Securonix, a security intelligence firm.

“The Apples or the Amazons or the Googles of the world aren’t the ones I’m worried about the most,” Reavis said. “But it could be some big companies that use this technology, but simply don’t have an awareness budget, or aren’t taking this seriously.”

Still, many companies already have protection mechanisms in place that would prevent Bash from inflicting significant harm. Most servers can detect anomalous traffic and behavior, and many already take precautionary efforts by keeping records offline where they are inaccessible, Gunn said.

“What this Bash vulnerability depends on is a lot of other failures,” Gunn added. “This isn’t a single point of failure, whereas in Heartbleed, it was.”

Numerous patches for the Bash bug have already flooded the market. While security researchers have claimed the patches are incomplete, experts agree that fully fixing the vulnerability would take years. Additionally, the fact that there have not been any known major breaches using Bash has boosted security experts’ confidence that the bug may not pose a widespread threat.

“Most vulnerabilities of value are either shared or sold in the hacking community,” Gunn said. “If this had been a viable hacking method, it would’ve been exchanged in the hacking community, and it has not.”

But the fact that Bash may not pose a major threat to individuals or companies doesn’t mean its danger should be understated, experts agreed.

“You saw a lot of worry about [Heartbleed], and there really wasn’t much that happened. The economy didn’t grind to a halt. Cities didn’t black out,” said James Lewis, director of the Strategic Technologies Program at the Center for Strategic and International Studies. “It’s a vulnerability. A flaw.”


TIME Data

Will We Have Any Privacy After the Big Data Revolution?

Operations Inside The Facebook Data Center
Operations inside the Facebook data center Bloomberg/Getty Images

Zocalo Public Square is a not-for-profit Ideas Exchange that blends live events and humanities journalism.

Corporations know more about their customers’ lives than ever before. But the information economy doesn't have to leave us exposed

Does the rise of big data mean the downfall of privacy? Mobile technologies now allow companies to map our every physical move, while our online activity is tracked click by click. Throughout 2014, BuzzFeed’s quizzes convinced millions of users to divulge seemingly private responses to a host of deeply personal questions. Although BuzzFeed claimed to mine only the larger trends of aggregate data, identifiable, personalized information could still be passed on to data brokers for a profit.

But the big data revolution also benefits individuals who give up some of their privacy. In January of this year, President Obama formed a Big Data and Privacy Working Group that concluded big data was saving lives and saving taxpayer dollars, while also recommending new policies to govern big data practices. How much privacy do we really need? In advance of the Zócalo event “Does Corporate America Know Too Much About You?”, we asked experts the following question: How can we best balance the corporate desire for big data and the need for individual privacy?

Corporations need to protect vulnerable data

Last week, the government of Singapore announced an increase in the cost of a toll at Bangunan Sultan Iskandar, the customs point for travelers passing between Singapore and Malaysia. Motorists, who will have to pay over five times more than they previously paid, are furious. In protest, a group of hackers known simply as “The Knowns” decided to use their skills to hack into and release corporate data on customers. The group released the mobile numbers, identification, and addresses of more than 317,000 customers of Singapore-based karaoke company K Box.

In an era of “hacktivism,” data is necessarily vulnerable. So how do we negotiate between companies’ increasing needs to collect and store our personal digital data, individuals’ privacy and ethical needs, and governments that are often slow to gain an understanding of these needs and how to address changes in this area?

If we borrow from recent work by psychologists and ethicists, we can agree upon a few preliminary guidelines: 1) Before collecting private and personal data, companies should inform consumers of what data they intend to collect, how it will be stored and used, and what precautions are being taken to protect it from data attacks. 2) Consumers should be given the ability to consent and to opt out of collection of personal data. 3) Companies that collect and store personal data should periodically remind their customers of their data-storage policies.

Although companies should have the freedom to be innovative in their business models (such as by collecting new types of consumer data), these methods should not compromise the individuals on whom companies ultimately depend.

Sean D. Young is the Director of the UCLA Center for Digital Behavior and a medical school professor in the Department of Family Medicine. He writes and teaches about topics at the intersection of psychology, technologies, medicine, and business, at seanyoungphd.com.

Big data isn’t magic

A big data society seems to be inevitable, and promises much, but privacy (properly understood) must be an important part of any such society. To have both privacy and the benefits of big data, we need to keep four principles in mind:

First, we need to think broadly about privacy as more than just the keeping of a secret, but as the rules that must govern personal information. Privacy rules are information rules. We have rules now protecting trade secrets, financial and medical data, library records, and computer security. We have to accept the inevitability that more rules (legal, social, and technological) will be needed to govern the creation of large data sets and the use of big data analytics.

Second, we need to realize that information does not lose legal protection just because it is held by another person. Most information has always existed in intermediate states. If I tell you (or my lawyer) a secret, it is still a secret; in fact, that’s the definition of a secret, or as we lawyers call it, a confidence. We must ensure that big data sets are held confidentially and in trust for the benefit of the people whose data is contained in them. Confidentiality rules will be essential in any big data future.

Third, we need to realize that big data isn’t magic, and it will not inevitably make our society better. We must insist that any solutions to social problems based on big data actually work. We must also insist that they will produce outputs and outcomes that support human values like privacy, freedom of speech, our right to define our own identities, and political, social, economic, and other forms of equality. In other words, we need to develop some big data ethics as a society.

Finally, it’s important to recognize that privacy and big data aren’t always in tension. Judicious privacy rules can promote social trust and make big data predictions better and fairer for all.

Neil Richards (@neilmrichards) is a Professor of Law at Washington University in St. Louis and an internationally-recognized expert in privacy and information law. His book, Intellectual Privacy, will be published in January 2015 by Oxford University Press.

Corporate research is always an unequal exchange

When asking “how can we best balance” the desires of corporations and the needs of individuals, we need to recognize that there are different “we”s involved here. Executives at Google and Facebook are interested in learning from big data, but they are, naturally, more concerned about their own individual privacy than the privacy of their users.

As a political scientist, I’m interested in what I can learn from moderately sized data such as opinion polls and big data such as voter files. And I naively act as if privacy is not a concern, since I’m not personally snooping through anyone’s particular data.

Survey organizations also profit from individuals’ data: They typically do not pay respondents, but rather rely on people’s goodwill and public-spiritedness to motivate them to participate voluntarily in helping researchers and answering surveys. In that sense, the issue of privacy is just part of the traditional one-way approach to research in which researchers, corporate and otherwise, profit from uncompensated contributions of the public. It is not clear how to balance this unequal exchange.

Andrew Gelman is a professor of statistics and political science at Columbia University. His books include Bayesian Data Analysis and Red State, Blue State, Rich State, Poor State: Why Americans Vote the Way They Do.

This discussion originally appeared on Zócalo Public Square.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME privacy

What We Know About the Latest Nude Celebrity Photo Hack

Kim Kardashian Attends The Kardashian Kollection Spring Launch At Westfield Parramatta
Kim Kardashian attends the Kardashian Kollection Spring Launch at Westfield Parramatta on September 13, 2014 in Sydney. Lisa Maree Williams—Getty Images

And what we don't

More explicit photos were posted on the website 4chan Saturday, this time purportedly showing Kim Kardashian, Vanessa Hudgens, Mary-Kate Olsen, Hayden Panettiere, Kaley Cuoco, Hope Solo and an underage Disney star, among other female celebrities.

Previously unseen photos purportedly showing Jennifer Lawrence, who became the face of the last major celebrity photo hack, were posted, too. The photos quickly spread from 4chan to Reddit, following the same pattern as the previous hack, which leaked private photos of Lawrence, Kate Upton, Ariana Grande and almost 100 other female celebrities.

Here’s what we do and don’t know about the latest nude celebrity photo hack:

Are the photos real?

At least two of the hack’s victims have confirmed their leaked photos are, in fact, real.

Actress Gabrielle Union told TMZ on Saturday that her photos were intended for only her husband’s eyes, and slammed the hackers’ insensitivity. “It has come to our attention that our private moments, that were shared and deleted solely between my husband and myself, have been leaked by some vultures,” Union said.

On Sunday, actress Meagan Good released a statement on Instagram, saying “I’m definitely in shock… Saddened for everyone who is experiencing this… But I ‘choose’ not to give the persons responsible my power.. At the end of the day—We all know these pictures were for my husband.”

In the last celebrity hack, many victims confirmed that the photos were indeed authentic. Cuoco, whose photos were also released in the previous hack, said Thursday on Jimmy Kimmel Live! that she was disturbed to realize the photos were real, but ended up making a “joke about it,” because “you have to make fun of yourself.” Other reactions were less lighthearted: Lawrence’s rep called it a “flagrant violation of privacy.”

What about the other celebrities?

Most have not released statements, or have declined to speak. A rep for Kardashian has declined to comment about the leaked photos to multiple publications. There’s also no word from Panettiere, Olsen, Solo or Hudgens.

But many are wondering about Hudgens, and what approach she’ll take now that she’s not the young Disney starlet of the High School Musical franchise. In 2007, after being shamed for a leaked nude photo, the 18-year-old actress apologized to fans, while Disney followed up and told People that “We hope she’s learned a valuable lesson.”

How did it happen?

No one knows yet, but experts told TIME they believe it’s similar to the last celebrity photo leak, when Apple confirmed that it was a “very targeted attack on user names, passwords and security questions,” and not a system-wide breach of iCloud or Find My iPhone, as was first widely believed. (TIME has reached out to Apple for comment regarding the most recent hack.)

Bob Stasio, Vice President of Threat Intelligence at CyberIQ Services, said the most probable cause is that hackers obtained access to photos by answering security questions to recover or reset passwords—a common tactic and the one apparently used last time. Last year, Michelle Obama’s and other celebrities’ financial records were accessed by hackers who knew enough personal identifying information to impersonate them, according to CNBC.

“The problem with celebrities is that a lot of their information is publicly available,” Stasio said.

Once the passwords have been reset, the hackers can access the celebrities’ e-mail accounts to obtain the passwords to enter iCloud. The hackers will have previously gained access to the stars’ computers, and thus their e-mail, either physically or remotely through backdoors planted in their systems, Stasio said. These backdoors may have been planted through targeted emails that tempt the users to click on a link or download an attachment.

“That’s really how hacking works,” Stasio said. “It’s all very iterative. You get to one spot, and you have to get to the next spot.”

Can the hackers be found?

They haven’t been found yet, and security experts believe it will be difficult, but not impossible, to track down the hackers. If iCloud accounts were accessed, then Apple can use a record of logins to determine the IP address, Stasio said. But hackers would likely hide their IP address by routing through a different one in another country, which complicates the process. Another method would be to track who had originally posted the pictures on 4chan.

In fact, experts say photo-leaking culprits are often caught, and the fact that both Apple and law enforcement are already involved make the investigation even more likely to turn up results. In 2011, for example, a hacker used the “forgot my password” function to access and leak nude photos and other personal information of Scarlett Johansson, Mila Kunis and Christina Aguilera. An FBI investigation resulted months later in a Florida man being sentenced to 10 years in federal prison, according to CNN.

“The success rate is very high. People doing this are very foolish, thinking they’re going to get away with it,” said Phil Lieberman, President of Lieberman Software Corporation. “For a period of time, they’re the hero. Once they’re caught, they’ll become the zero.”

So why haven’t we found the hackers yet?

In short, it takes time.

“If someone’s life is in danger, law enforcement moves very quickly,” Lieberman said. “But pictures of celebrities don’t rise to the level of kidnapping, murders or serious violent crimes. They’re seen more as economic crimes or invasions of privacy, which are serious, but go on a little slower track.”

Moreover, the fact that Apple’s weak iCloud security was patched only recently means that several intruders may have been in the system for quite a while, experts said, which would add additional layers to the investigation.

Will it happen again?

Experts say yes: This is the second major celebrity photo hack in one month, and it’s part of a rising trend. When Target was hacked last year, Stasio said, a group of hackers sent e-mails to other companies saying they’d detected a similar vulnerability, offering help through a clickable link, which, if opened, would’ve infected the company’s system.

“Not only have the trends of the actual hacks spread, but people use the awareness of the hack itself to try to use it as an infection,” Stasio said.

And there are likely more photos that have been accessed but not yet shared. Lieberman said that for hacks in the commercial world, the average time intruders spend in a system is 200 days. This suggests the intruders could’ve had months to amass a large collection of explicit photos.

“This may not even be different than the first one,” Lieberman said. “This may in fact be the same group of people with the same set of data, just simply taking another bite of the apple.”

TIME Security

Here’s How Home Depot Could Have Combated Hacking

Experts say retailers should invest in detection rather than prevention

As Home Depot continues to assess the damage caused by a security breach that gave hackers access to 56 million credit and debit cards, tech experts say large retailers should turn their attention to addressing breaches quickly instead of trying to prevent all of them.

“Are we spending most of our money on trying to keep the bad guys out or trying to detect as soon as possible when the bad guys get in?” asked cyber crime expert Brian Krebs, framing the issue rhetorically. “The best you can do is stop the bleeding as soon as possible when they do get in.”

At Home Depot, where hackers used malware to collect customer data at cash registers, it reportedly took nine months for the breach to be identified and stopped, allowing the damage to spread to millions of customers.

Companies face myriad and evolving ways their data can be breached, making protecting data akin to a game of whac-a-mole. Once one potential threat is identified, hackers have already begun trying to get through another way. Instead of devoting all their resources to chasing the threats, companies should focus on minimizing the time it takes to identify those breaches, said Brian Foster, chief technology officer at cyber security firm Damballa.

“There are two types of companies: those that have been breached and those that don’t know they’ve been breached yet,” he said. “The attackers only have to find one door in, whereas Home Depot has to secure all their doors, and before they do that they need to know where all the doors are at.”

But even if retailers like Home Depot switch focus to detection from protection, experts say they need to do a better job securing data. And, for retailers, the first place to look is the “point of sale system” where the transaction occurred (the cash register for traditional retailers).

“Some enhancement of that logical access in the point of sale would have been able to harden the system significantly,” said Guy Levy, senior vice president at technology security firm Usher. “This is part of what any big retailer that employs pos systems should be doing now. They should all be scrutinizing their systems very, very hard.”

Despite the recommendations of security experts, many companies remain reluctant to devote the funding to change. But dealing with massive security breaches almost always costs more in the long-term than instituting preventive measures would have cost. Home Depot said the breach at the company will cost at least $62 million.

“It takes a while to update your technology, to understand the threat,” said Anup Ghosh, founder and CEO of technology security firm Invincea. “But the most expensive dollar spent in security is spent after a breach.”

TIME Security

Apple: We Can’t Give Your iPhone Data to the Government

Apple Unveils iPhone 6
Apple CEO Tim Cook shows off the new iPhone 6 and the Apple Watch during an Apple special event at the Flint Center for the Performing Arts on September 9, 2014 in Cupertino, California. Justin Sullivan—Getty Images

"We don’t build a profile based on your email content or web browsing habits to sell to advertisers."

After dozens of celebrities had their most intimate photos stored on Apple’s iCloud service stolen by hackers and released online, the company used Wednesday’s iOS 8 update launch to defend its concern for privacy and introduce new security measures.

In an open letter posted on Apple’s website, CEO Tim Cook stressed the company’s efforts to keep consumers’ information private and sought to distinguish Apple from how its competitors use personal data.

“A few years ago, users of Internet services began to realize that when an online service is free, you’re not the customer. You’re the product,” Cook wrote, referring to how major websites, such as Google and Facebook, use personal information and personal activity online to tailor advertisements to their users. “But at Apple, we believe a great customer experience shouldn’t come at the expense of your privacy.”

The statement is particularly pertinent after the Sept. 8 announcement of a smartwatch and new apps on the upcoming iPhone 6 and iPhone 6 Plus that represent Apple’s most significant foray into health tracking and mobile payments.

“Our business model is very straightforward: We sell great products. We don’t build a profile based on your email content or web browsing habits to sell to advertisers.”

Apple does have a service that tailors ads based on some of what Apple knows about users, but Cook wrote that the service doesn’t pull data from products like Apple’s health apps or the Mail app.

Cook also addressed allegations that the U.S. government has collaborated with major Internet firms to gather data on users, saying Apple has not allowed access to its servers and has “never worked with any government agency from any country” to allow exclusive access to personal information retained by Apple.

Apple also said that iOS 8, the newest iPhone operating system, would automatically encrypt data stored on iPhones and protected by your passcode, making it impossible for even Apple to share that information with the government or law enforcement. That encryption rule, however, doesn’t apply to data stored on Apple’s iCloud storage service.

“Unlike our competitors, Apple cannot bypass your passcode and therefore cannot access this data,” said Apple. “So it’s not technically feasible for us to respond to government warrants for the extraction of this data from devices in their possession running iOS 8.”


TIME How-To

How to Double Check Your Google Account Security Settings

Google Account Settings
Google's account settings page shows which sites, services and devices have access to your account Google

The unofficial Google Operating System site writes about a little gem found under the security section of everyone’s Google account settings page.

Head over to your account’s security section, and click the “Get started” button located under the “Secure your Account” heading.

It’ll step you through the various lock-downs available for your Google account, including setting a recovery phone number, a recovery email address and the ability to revoke access for apps, websites and gadgets you no longer use. You’ll also be able to check out your recent activity to make sure nobody’s been using your account without your knowledge.

It’s a good idea to run through a security audit such as this every once in a while, especially after a high-profile data breach.

[Google Operating System]
