MONEY Opinion

Amazon Is Right to Give Up on the Fire Phone

The Fire Phone was too late to market and didn't have any compelling features to set it apart from entrenched competitors.

Amazon’s foray into smartphones was always destined to fail. The e-commerce giant was simply way too late to the market. The Fire Phone didn’t have any compelling differentiating features (Dynamic Perspective was little more than a novelty gimmick) while it stuck with conventional pricing, putting it in direct competition with entrenched rivals.

It’s tempting to pin the blame on Jeff Bezos since he was reportedly “obsessed” with the Dynamic Perspective feature, which required incredible development resources and delayed the device for years, according to a former executive. It was hardly a surprise when Amazon took a $170 million inventory charge mere months later because the Fire Phone just wasn’t selling.

The Wall Street Journal is now reporting that Amazon is giving up on the Fire Phone. Even though the device launched only about a year ago, it’s about time.

Fire Phone crashes and burns
Amazon has reportedly laid off dozens of engineers at its hardware division, Lab126. The company has also restructured Lab126, consolidating two hardware development departments into one. A large-screen tablet may be shelved as well, along with a few other odd devices like an image projector. Amazon is still hard at work on other hardware projects, though, like a computer that can take orders via voice commands or a different spin on a 3D interface meant for a tablet.

The layoffs run counter to a Reuters report last year that Amazon was actually planning to dramatically expand its Lab126 head count over the next five years, even after it took the Fire Phone writedown and realized the product was a flop. For once in its life, Amazon seems averse to pouring an endless amount of money into a new initiative. Cost cutting is largely how Amazon crushed analyst estimates last quarter, posting a $92 million profit and sending shares soaring.

What about tablets?
Once upon a time, the Kindle Fire tablet was the best-selling Android tablet. Amazon was one of the first companies to launch a smaller tablet, but once it enjoyed demonstrable demand, the traditional players all jumped in. These days, Amazon’s position in the tablet market has weakened significantly. IDC estimated that Amazon’s unit volumes in Q4 2014 fell by a whopping 70% year over year, to 1.7 million. Amazon disputed those figures, but naturally declined to provide any hard data to substantiate its claims. Amazon is not included among the top five vendors in IDC’s Q2 2015 figures. Technically, Huawei and LG tied for fourth and fifth with 1.6 million units each.

The WSJ also says that Amazon’s product mix is heavily skewed toward the lower-end versions of its e-readers and tablets, which also makes plenty of sense. But competition at the low end is particularly intense, while the iPad has a 76% share of the premium U.S. tablet market (tablets priced at $200 or above). Amazon will likely shift development resources toward these lower-end tablets, while focusing on new product categories like Echo.

Why that’s the right call
Strategically, Amazon’s hardware has always served as a form of shopping portal, a gateway into Amazon Prime, if you will. For the longest time, Amazon’s strategy was to sell hardware at cost and profit later when people purchased digital content or physical products. That’s why the Fire Phone’s pricing was so un-Amazon: the company was hoping to profit up front (and later on).

If the value in Amazon’s hardware lies in its ability to sell more stuff, then first-party smartphones and tablets are decidedly not the best use of developmental resources. People already have smartphones and tablets with Amazon’s app loaded on them, so third-party devices are already shopping portals. Instead, new categories and form factors are where the real opportunity lies, such as the $5 Dash buttons or Echo or any other type of centralized order-taking machine.

These types of hardware products are true differentiators that also support the core e-commerce business, and we all know how much Bezos hates “me-too” devices.


MONEY Opinion

Self-Driving Cars Won’t Arrive Anytime Soon

Josh Edelson—AFP/Getty Images: Brazil's President Dilma Rousseff takes a ride in a self-driving car at Google headquarters in Mountain View, Calif., on July 1, 2015.

To reap the benefits of a driverless future, most cars on the road will need self-driving capabilities and be able to communicate with each other.

There’s no question that the idea of an autonomous or “self-driving” car has a great deal of appeal. There’s also no question that a world in which most of the vehicles on the roads are automated will be safer and more efficient than today’s jammed highways.

But when is that world coming? Some tech enthusiasts would have us believe that a self-driving future is just around the corner. But investors hoping to ride this trend should consider the possibility that it will be many years before the idealized self-driving future will be a reality.

The future vision is compelling
When people talk about the promise of autonomous or “self-driving” cars, they generally mean vehicles that both drive themselves and communicate with the other vehicles and infrastructure around them.

Once the roads are flooded with these vehicles, the argument goes, accidents and traffic jams will be greatly reduced — and travel by car will be safer, swifter, and more pleasant.

That all sounds true to me. Companies (and regulators) are already working hard to bring about that future. But there’s a catch: To get all of the great benefits, most of the cars on the road have to have self-driving (and intercommunication) capabilities.

That’s why I think that a fully self-driving future is probably still a long way off, even though self-driving cars are already heading to market.

Self-driving cars are already emerging …
It’s possible to argue that the first self-driving car has already arrived — but only for an extremely limited definition of “self-driving.”

Daimler’s Mercedes-Benz already has an extremely limited self-driving feature available on a couple of models: It can take the wheel in stop-and-go highway traffic. But it only works up to 37 miles per hour, and it doesn’t work when you’re not bumper-to-bumper on a clearly marked highway.

General Motors, Tesla Motors, and a few other automakers are expected to release similar systems over the next 18 months or so. In fact, Tesla has promised that a software update for existing Model S sedans will enable some limited self-driving abilities in the near future, possibly before the end of 2015.

For now, these first systems will mostly be gadgets in expensive luxury cars. Think of them as increasingly smart versions of cruise control rather than as robots that can drive your car.

But the expectation is that these systems will get more sophisticated over time, and automakers will gradually add them to mainstream models as costs come down. Meanwhile, the U.S. government is already working on standards for vehicle-to-vehicle and vehicle-to-infrastructure communications, and a few automakers — again, starting with luxury brands — are rolling out some very limited capabilities.

… but a self-driving future is still many years away
I’ve talked to several auto-industry executives about the likely timeline for self-driving cars. All agree that fully self-driving vehicles won’t be available for a while yet. That’s partly because the government is still figuring out the rules for such vehicles, and partly because the technology still has to overcome some big technical challenges: For instance, rain can be very confusing to a self-driving car’s sensors.

These executives say that while many manufacturers have promised self-driving cars by 2020, it’s likely to be several years after that before true, fully self-driving vehicles are available to the mass market.

But even if those cars were to hit the market tomorrow, there’s another reason it’ll take a long time before that utopian self-driving future emerges: It’s what the auto industry calls “replacement rate.”

Here’s the key figure: The average vehicle on U.S. roads today is over 11 years old. Vehicles built today are much more durable and reliable than the vehicles of 20 or 30 years ago. People (and businesses, and governments) are keeping them longer.

It’s possible that most every new car on the U.S. market will have self-driving capabilities within a decade. But even if that happens, it’s likely that it will take another decade before most of the cars on U.S. roads have that ability.
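To put rough numbers on that replacement-rate logic, here is a minimal, hypothetical sketch; the fleet size and annual sales figures are illustrative assumptions, not official statistics, and the model charitably assumes every new car sold is self-driving from day one.

```python
# Illustrative replacement-rate sketch: if 100% of new cars sold had
# self-driving capability starting today, how long until most of the
# U.S. fleet does? All figures are rough assumptions, not official data.

fleet_size = 250_000_000    # light vehicles on U.S. roads (approx.)
annual_sales = 17_000_000   # new vehicles sold per year (approx.)

equipped = 0
years = 0
while equipped < fleet_size / 2:
    years += 1
    # Assume each sale replaces one older, non-equipped vehicle.
    equipped = min(fleet_size, equipped + annual_sales)

print(f"Majority of the fleet is self-driving after ~{years} years")
# Even in this best case, crossing 50% takes roughly 8 years; with
# realistic adoption ramps, a decade or more is easy to believe.
```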

That is, unless some sort of big disruption happens.

Even the Apple Car won’t change that
The alternative view is that the emergence of self-driving technology leads to a sudden move away from the idea of private car ownership. Instead of dealing with the hassles of owning (and driving, and parking, and fixing, and insuring) cars, people will opt to subscribe to an automated car service that can take them wherever they need to go.

I think it’s likely that Apple (among others) is looking to create such a service. There has been a lot of speculation about an Apple car, but I don’t think Apple wants to enter the auto business. I think the company is exploring the idea of creating a premium car service using automated electric cars.

Uber is also believed to be working on such a service, and it’s a safe bet that other companies are as well. But even if that alternative vision is the one that prevails, it’ll almost certainly be many years before it spreads beyond the city environments where Uber is succeeding with human drivers now. And in the meantime, people outside those cities will still be buying (and probably driving, at least part of the time) familiar-looking cars and trucks from the established automakers.

Either way, that self-driving future is more than just a few years away.

John Rosevear owns shares of Apple and General Motors.


MONEY Opinion

Stop Pressuring Millennials to Unplug


Blurred lines aren't always so bad.

In a scene from the recent summer beach read “Love and Miss Communication,” Evie, a workaholic millennial lawyer on the verge of making partner, gets fired from her job for sending too many personal e-mails while at work.

Then she discovers, on Facebook, that the man of her dreams is very much in love with … someone else. And so she swears off the Internet for a year.

Needless to say, good things happen. But there’s a catch: problems come with the things that don’t happen. For example, Evie misses a close friend’s party because she didn’t check Facebook. She grows increasingly isolated, and her friends feel insulted.

For the first generation to come of age in the new millennium, the urge to stay connected is strong.

According to Boston Consulting Group, 37% of younger millennials (ages 18 to 24) said that they feel as if they are “missing something” if they are not on Facebook or Twitter every day, compared with 23% of non-millennials.

Those concerns are valid and should not be ignored. Unlike previous generations, millennials make the vast majority of their social plans online. Ditto for the humble brags. Friends cheer each other on, which can be a great motivator.

The reality is that millennials are digital natives. They take their devices to bed, text in almost any situation, and constantly check social networking sites to stay in the loop.

Rather than putting a negative spin on their social media-centered culture, it may make more sense to encourage them to leverage it.

Bob Pearson, president of digital marketing and communications company W2O Group, says there is simply no reason to pressure millennials to unplug, even at work.

“That is a myth that companies don’t expect you to be online and do some personal social stuff,” says Pearson, who blogs about millennials with his 19-year-old daughter. “We just have to embrace that when (millennials) have downtime, that is what they do.”

For example, Pearson’s daughter Brittany sends her co-workers Snapchat messages as they are working, even if they are right near each other.

Crowdsourcing Feedback

Allowing younger workers to be in touch at work can be advantageous. Their real-time connections provide crowdsourced feedback along with constant information that can be useful on projects.

Even though millennials may be connected to work more, they also are connected to their social world, which can make work more enjoyable, keeping them more productive. Blurred lines aren’t always so bad.

But some worry about the health effects of all this connectivity.

“If employers want to improve the health of their employees, including millennials, they should consider encouraging them to unplug when they leave work,” says Terri L. Rhodes, chief executive officer of Disability Management Employer Coalition, an employer-based non-profit organization.

“We still do not know the health effects of texting, the constant use of thumbs and the effect on arthritic thumb joints. There is also growing medical concern about neck problems from always looking down at devices.”

As for Brittany, she does feel a pang of jealousy toward older, unplugged folks. She talks wistfully about leaving her phone at home, at least when she goes to the beach. But she does not. In fact, the urge to unplug seems to come and go in an instant – almost as fast as the constant buzz calling her back to her devices.

MONEY Opinion

Why Are People Obsessed With Financial Advisers’ Pay?


We don't have these conversations about doctors, lawyers, and accountants, do we?

Let’s cut to the chase: There are good, honest, hardworking financial advisers of all business and compensation models.

Unfortunately, there are people in the profession—and outside it—who demonize advisers for not following one particular model. This is unfair to tens of thousands of advisers and hinders the profession from becoming what it can be.

Let’s be honest: Each compensation model—whether it be fee-only, commission and fee, or commission-only—comes with its advantages and disadvantages.

Focusing on compensation does the profession of financial planning a major disservice. No other respected profession focuses on compensation as a determining factor of whom you should and should not work with.

You don’t have this same compensation debate in medicine, law or accounting. So why does this focus exist in financial planning? While compensation is an important aspect of the client/adviser relationship, it is far from being the ultimate determining factor as to whether or not a consumer engages an adviser.

The debate over adviser compensation reminds me of the California Milk Processor Board’s long-running “Got Milk?” campaign. The ads didn’t differentiate between the different types of milk available to consumers, whether it was whole milk, low-fat milk, skim milk, or even chocolate milk. They just asked if you got milk. Consumers of financial planning advice should be able to seek out an adviser whose model is best for them, not because they are being told which model is right.

The Financial Planning Association, of which I serve as the volunteer president, is compensation-neutral, which means that we don’t adhere to the notion that a compensation model determines whether a professional is operating in the client’s best interest. FPA’s more than 24,000 members—including more than 17,000 Certified Financial Planner professionals—are diverse in their compensation and business models. FPA believes that how an adviser charges for services is not in itself an indicator of competency or ethical standing. What consumers really need to seek out are advisers who have earned the right to call themselves CFP professionals.

CFP professionals are required by their certifying body, the Certified Financial Planner Board of Standards, to act in a fiduciary capacity at all times during the financial planning engagement. That means that no matter how a CFP professional is paid, he or she is required to act in the client’s best interest.

This is why FPA believes in building a recognized financial planning profession around a single designation that requires high professional standards and requirements. You don’t have ongoing debates as to whether compensation determines competency among doctors, lawyers, and accountants, do you? No. That’s because putting the best interest of those they serve first is embedded in their professions.

Let’s not focus on compensation models, but on the need for those in the profession to practice full and fair disclosure and operate in the client’s best interest all of the time. Any adviser should be willing to fully disclose all material facts that may, or may not, have an impact on the client/adviser relationship. That goes beyond compensation and should include services provided, investment philosophy, experience, education, past disciplinary actions, and more.

At the end of the day, each consumer of financial services needs to be comfortable in his or her relationship with an adviser. For some people, paying their adviser a fee will make them comfortable, while for others commissions will make more sense. It is really up to the individual and the circumstances, and the more we denigrate one form of compensation over another the more harm we do to the profession and the public.

An educated consumer is going to be a better consumer of financial services. That is what we should be focused on.


Edward W. Gjertsen II, CFP, is vice president of Mack Investment Securities in Glenview, Ill., and is the 2015 president of the Financial Planning Association. The opinions expressed in this commentary are solely his.

TIME Opinion

Another Reason for Obama to Worry About the Iran Deal: The Treaty of Versailles

Keystone/Getty Images: Wilson, Clemenceau, Lord Balfour and Orlando in 1919.

If the Iran deal fails, there will be only one historical moment to which the defeat can be compared

Chuck Schumer is just one senator, but his announcement that he will oppose the international nuclear agreement with Iran is being hailed as a potential game-changer. House and Senate Republicans are unanimous in their determination to defeat the President’s signature diplomatic achievement, and Democrats face strong pressure from the powerful lobby AIPAC to reject it as well. If enough other Democrats in the House and Senate follow suit, Congress could override a presidential veto, defeat the agreement and leave the United States and Israel totally isolated within the world community. While foreign policy has often divided Congress and the executive branch, cases in which Congress actually stopped a major foreign policy initiative are extremely rare. If that happens, there will be only one historical moment to which the deal’s defeat can genuinely be compared: the Senate’s refusal, in 1919 and 1920, to ratify the Treaty of Versailles. There are noteworthy parallels between the two situations, and the consequences of another presidential defeat could be almost as serious as the end of Woodrow Wilson’s dreams of peace.

Confronted in 1914 by the First World War, President Wilson had spent almost two and a half years heroically trying to bring it to a peaceful resolution. In 1915, after the sinking of the Lusitania by a German submarine, he had resisted Theodore Roosevelt’s demands for immediate intervention. But in the spring of 1917, when Germany answered his last call for a “peace without victory” with a renewal of unrestricted submarine warfare, Wilson saw no option but to intervene. Yet he remained politically aloof from his allied partners, and in January of 1918, in his Fourteen Points, he held out his vision of a new world based on law, equality and what became the League of Nations. But, when the war ended in November 1918 with the collapse of Germany, the French and British—despite Wilson’s presence in Paris and his enormous international popularity—insisted on relatively harsh and unequal terms for Germany. The treaty Wilson brought home for ratification struck many liberals as a betrayal of his ideals.

Wilson, like Barack Obama, also had domestic political problems. Twice he had been elected without winning a majority of the votes cast. In the Congressional elections of 1918 he appealed to the people to elect a Democratic Congress to strengthen his hand in negotiations, but the Republicans won instead. Like Barack Obama, Wilson was a somewhat aloof leader who had little patience with his intellectual inferiors. He made no attempt to involve Republicans or Democrats from Congress in his negotiations. He also refused to accept relatively harmless Congressional reservations to the treaty. The Senate eventually voted down the treaty, which failed to achieve a two-thirds majority.

The consequences were immense. When the treaty failed, France lost a promised Anglo-American guarantee of its border with Germany, and the United States washed its hands of European problems, with the exception of the war debts it still hoped to collect. The feeling that Washington could not be trusted to follow through on its noble declarations prevented the U.S. from regaining world leadership until 1941, when the Second World War was well underway. Rather than entering a new era of law and diplomacy, the world sank into anarchy.

Today, Barack Obama has even fewer cards to play than Wilson. There is not the slightest chance of his getting a single Republican vote in Congress, and his own party will—it now appears—be split. His deal with Iran represents a critical, desperately needed new departure in American foreign policy: an attempt to live with hostile regimes by engaging them diplomatically, rather than either going to war with them or pursuing an endless, futile confrontation. The deal also has the backing of the U.N. Security Council and of almost every other nation on earth, except Israel. And its failure would leave Iran utterly free to pursue any nuclear program that it wants. Like the Senate’s action in 1920, blocking the deal would end any possibility of genuine American global leadership for some time. If the agreement fails, the world’s new slide into anarchy will accelerate, with consequences we cannot foresee, even with the help of history. And that won’t be all the Democratic Party has to worry about if the President is beaten. In 1920, in the election after the Democrats failed to get the treaty through, the Republican candidate won in a landslide.

The Long View: Historians explain how the past informs the present

David Kaiser, a historian, has taught at Harvard, Carnegie Mellon, Williams College, and the Naval War College. He is the author of seven books, including, most recently, No End Save Victory: How FDR Led the Nation into War. He lives in Watertown, Mass.

TIME Opinion

Harry Truman’s Atomic Bomb Decision: After 70 Years We Need to Get Beyond the Myths

Both sides in the debate have left a distorted impression of why Truman decided to drop the bomb

History News Network

This post is in partnership with the History News Network, the website that puts the news into historical perspective. The article below was originally published at HNN.

President Truman’s decision to use the atomic bomb against Japan in 1945 is arguably the most contentious issue in all of American history. The bombings of Hiroshima and Nagasaki have generated an acrimonious debate that has raged with exceptional intensity for five decades. The spectrum of differing views ranges from unequivocal assertions that the atomic attacks were militarily and morally justified to claims that they were unconscionable war crimes. The highly polarized nature of the controversy has obscured the reasons Truman authorized the dropping of the bomb and the historical context in which he acted.

The dispute over the atomic bomb has focused on competing myths that have received wide currency but are seriously flawed. The central question is, “was the bomb necessary to end the war as quickly as possible on terms that were acceptable to the United States and its allies?”

The “traditional” view answers the question with a resounding “Yes.” It maintains that Truman either had to use the bomb or order an invasion of Japan that would have cost hundreds of thousands of American lives, and that he made the only reasonable choice. This interpretation prevailed with little dissent among scholars and the public for the first two decades after the end of World War II. It still wins the support of a majority of Americans. A Pew Research Center poll published in April 2015 showed that 56% of those surveyed, including 70% aged 65 and over, agreed that “using the atomic bomb on Japanese cities in 1945 was justified,” while 34% thought it was unjustified.

The “revisionist” interpretation that rose to prominence after the mid-1960s answers the question about whether the bomb was necessary with an emphatic “No.” Revisionists contend that Japan was seeking to surrender on the sole condition that the emperor, Hirohito, be allowed to remain on his throne. They claim that Truman elected to use the bomb despite his awareness that Japan was in desperate straits and wanted to end the war. Many revisionists argue that the principal motivation was not to defeat Japan but to intimidate the Soviet Union with America’s atomic might in the emerging cold war.

It is now clear that the conflicting interpretations are unsound in their pure forms. Both are based on fallacies that have been exposed by the research of scholars who have moved away from the doctrinaire arguments at the poles of the debate.

The traditional insistence that Truman faced a stark choice between the bomb and an invasion is at once the most prevalent myth and the easiest to dismiss. U.S. officials did not regard an invasion of Japan, which was scheduled for November 1, 1945, as inevitable. They were keenly aware of other possible means of achieving a decisive victory without an invasion. Their options included allowing the emperor to remain on the throne with sharply reduced power, continuing the massive conventional bombing and naval blockade that had destroyed many cities and threatened the entire Japanese nation with mass starvation, and waiting for the Soviets to attack Japanese troops in Manchuria. Traditionalists have generally played down the full range of options for ending the war and failed to explain why Truman regarded the bomb as the best alternative.

A staple of the traditional interpretation is that an invasion of Japan would have caused hundreds of thousands of American deaths, as Truman and other officials claimed after the war. But it is not supported by contemporaneous evidence. Military chiefs did not provide estimates in the summer of 1945 that approached numbers of that magnitude. When Truman asked high-level administration officials to comment on former president Herbert Hoover’s claim that an invasion would cost 500,000 to 1,000,000 American lives, General Thomas T. Handy, General Marshall’s deputy chief of staff, reported that those estimates were “entirely too high.” Hoover apparently based his projections on an invasion of the entire Japanese mainland, but military planners were convinced that landings on southern Kyushu and perhaps later on Honshu, if they became necessary, would force a Japanese surrender.

The revisionist interpretation suffers from even more grievous flaws. Japanese sources opened in the past few years have shown beyond reasonable doubt that Japan had not decided to surrender before Hiroshima. It is also clear from an abundance of evidence that U.S. officials were deeply concerned about how to end the war and how long it would take. The arguments that Japan was seeking to surrender on reasonable terms and that Truman knew it are cornerstones of the revisionist thesis. They have been refuted by recent scholarship, though impressing the Soviets was a secondary incentive for using the bomb.

The answer to the question about whether the bomb was necessary is “Yes” … and “No.” Yes, it was necessary to end the war at the earliest possible moment, and that was Truman’s primary concern. Without the bomb, the war would have lasted longer than it did. Nobody in a position of authority told Truman that the bomb would save hundreds of thousands of American lives, but saving a far smaller number was ample reason for him to use it. He hoped that the bomb would end the war quickly and in that way reduce further American casualties to zero.

No, the bomb was not necessary to avoid an invasion of Japan. The war would almost certainly have ended before the scheduled invasion. A combination of the Soviet invasion of Manchuria, the effects of conventional bombing and the blockade, the steady deterioration of conditions in Japan, and growing concerns among the emperor’s advisers about domestic unrest would probably have brought about a Japanese surrender before November 1. And no, the bomb was not necessary to save hundreds of thousands of American lives.

The controversy over Truman’s decision seems certain to continue. The use of a bomb that killed tens of thousands instantaneously needs to be constantly re-examined and re-evaluated. This process should be carried out on the basis of documentary evidence and not on the basis of myths that have taken hold and dominated the discussion for 70 years.

J. Samuel Walker is the author of Prompt and Utter Destruction: Truman and the Use of Atomic Bombs against Japan (University of North Carolina Press, 1997, second edition, 2004). He is now working on a third edition of the book.

MONEY Opinion

This Is the Most Dangerous Identity Theft Threat

533520355
weerapatkiatdumrong—Getty Images/iStockphoto

Never take this for granted.

Last weekend, TheUpshot published a prime example of the most dangerous identity theft threat: the non-expert’s tendency to underestimate the magnitude of the problem. The piece in question argued that the consequences of most identity theft have been exaggerated (by identity theft experts like me), and that “only a tiny number of people exposed by leaks end up paying any costs.”

The main source for TheUpshot’s argument seems to be the 2015 Identity Fraud Report (covering data from 2014) published by Javelin Strategy and Research, which found a dramatic increase in account takeovers (i.e., when a fraudster is able to get through the authentication process on an existing credit account and make charges) but an overall decrease in the amount of money lost to identity-related fraud.

To think that the 2015 Javelin report minimizes the threat of mega data breaches to consumers is to misread it. To suggest that the threat is overstated is both simplistic and harmful to consumers. The article focuses too much on account takeover resulting from big-name hacks like Target (a very common form of identity theft). Meanwhile, it gives nowhere near enough attention to the very real and long-lasting effects of more serious forms of identity theft – the kind that’s committed using Social Security numbers – and the equally big-name hacks like Anthem, Premera, and the Office of Personnel Management that exposed millions of records containing that data.

The Buck Doesn’t Stop With the Bank

TheUpshot dismisses the consumer cost of most data breaches (beyond lost time and annoyance) because “several laws protect consumers from bearing almost any financial losses related to hackers.” TheUpshot continues, “…banks and merchants, like Target, must bear the cost. But even their losses have been dropping in recent years, as data security experts have learned new strategies to prevent intrusions from turning into theft.”

First of all, banks do not bear all the costs if they can help it. They pass them along to the company that caused the problem in the form of fines and penalties, and in some cases the company is only alleged to be the cause of the problem. It is very hard for small businesses to fight card companies on these charges. So when it happens, it can be a near extinction-level event, or force price changes. And, of course, that cost often manifests itself at the consumer level.

Additionally, according to at least one recent report, the cost of a data breach to businesses has not been going down, as stated by TheUpshot. On May 27, IBM and the Ponemon Institute jointly reported that the cost per breached record had increased by about 6% over the preceding year, from $145 to $154, and that the average total cost of a data breach to an enterprise rose a not inconsiderable 23% to $3.79 million.

And it bears repeating: While it’s all very populist and fair-weather foppery to say that companies like Target and Home Depot can foot the bill of a breach, the same cannot be said of smaller businesses—after all, breaches are not confined to big companies.

5% Is a Huge Number

TheUpshot’s big reveal: “The more troubling identity theft, in which new accounts are opened in an unsuspecting person’s name, makes up only 5 percent of the total figure given by Javelin.”

To the uninitiated eye, 5% sounds like a small number. But it’s missing context.

“Although we have no data to support what percentage of breaches turn into identity theft cases,” according to Brent Montgomery, Fraud Operations Manager at my company IDT911, “5% is a lot.”

In 2014 there were 12.7 million identity fraud victims, according to Javelin. Just 5% of that total is 635,000 consumers—hardly a negligible number.

Montgomery then highlighted the essence of the problem here: “There are so many breaches on a daily basis that information can be pieced together from one breach to another giving a criminal all they need to complete the puzzle.”

TheUpshot fails to account for the long tail of identity theft—the fact that scams are pieced together using data harvested from countless individual and corporate compromises oftentimes sold and resold on the data black market. A scam that happens today may use data that was compromised three years ago—especially when Social Security numbers are involved since their only expiration date is when the holder of those nine digits expires.

Another problem with using the Javelin report is that the data is extrapolated from a relatively small sample of the population, whereas the Federal Trade Commission’s Consumer Sentinel Network Data Book for January-December 2014 is driven by hundreds of thousands of pieces of consumer-reported data. That matters here because on page 13 of the Sentinel report, you will find a higher incidence of new account creation (12.5%) than fraud on existing accounts (4.9%).

There Are Very Serious Identity Theft Threats

While instances of new account fraud and some signs of existing account takeover can show up on your credit reports (you can get them for free once a year on AnnualCreditReport.com), other types of identity theft are less detectable – until they really cause damage. Of greater concern is what does happen to consumers whose information falls into the wrong hands—specifically their most sensitive information. Mentioned nowhere in the article is tax fraud, a crime that is most definitely on the rise and cannot be resolved easily or quickly (think: 6-12 months). Equally absent in this Panglossian take on what really is an identity theft epidemic: medical identity theft, which is extremely difficult to detect, equally difficult to resolve and can have potentially life-threatening consequences.

The bottom line is that while it’s easy to dismiss identity theft experts as being the equivalent of “the soap company that advertises how many different types of bacteria are on a subway pole without mentioning how unlikely it is that any of those bacteria would make you sick,” it is irresponsible to downplay the various serious risks now facing millions of Americans whose most sensitive personal information has been exposed in the breaches of Anthem, Premera, Sony Pictures and the Office of Personnel Management, to name a few. The threat for them is very real, and long-term—perhaps a lifetime.


MONEY Opinion

10 Ways America Is Winning


Pessimism outsells optimism in the media, but missing out on what goes right is a bigger risk than people realize.

Jeremy Grantham, a brilliant money manager and perpetual pessimist, recently wrote a note to clients called “Ten Quick Topics to Ruin Your Summer.” It’s a good read. There’s climate change, weak GDP growth, global food shortages, income inequality, and several other points to make you sweat when pondering the future.

Grantham focuses on the downside because, he writes, “good news will usually look after itself.” He’s right.

But pessimism outsells optimism in the media ten to one, and our emotional reactions to pessimism are lopsided by about the same ratio. The result is that many of us wander through a world that is constantly improving while getting bogged down by risks that are either phantom or temporary. Over the long run, missing out on what goes right is a bigger risk than people realize.

So, here are eleven quick topics to feel great about.

1. America has some of the best demographics in the world.

From now through 2050, America’s population is forecast to rise by 50 million. China’s will fall by 101 million. Russia’s will decline by 10.3 million. Germany, down by 7.7 million. Italy, down 1.1 million. Finland, down 155,000. Greece, down 600,000.

Here’s how America ages over the next few decades. Our young working-age generations grow, even as baby boomers retire:

[Chart: projected U.S. population by age group]

Compare that to China. Its older generations swell, then die, leaving a shrinking population and collapsing workforce.

[Chart: projected Chinese population by age group]

South Korea is pretty awful, too.

[Chart: projected South Korean population by age group]

Japan is a retirement community.

[Chart: projected Japanese population by age group]

2. Layoffs are at a record low.

Initial jobless claims recently hit the lowest level since 1973. That’s great, but it’s even better than it looks. The labor force is almost twice as large today as it was in 1973. Adjust initial jobless claims to account for the size of the labor force, and layoffs are at the lowest level in the last half-century. By a lot:

[Chart: initial jobless claims as a share of the labor force]
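As a minimal sketch of that adjustment, the snippet below divides weekly claims by the labor force so readings from different decades are comparable; the claim counts and labor-force sizes are rough illustrative placeholders, not official statistics.

```python
# Normalizing initial jobless claims by labor-force size, so that 1973
# and 2015 readings can be compared. Numbers are rough placeholders.

claims_1973 = 250_000           # weekly initial claims, 1973 (illustrative)
labor_force_1973 = 89_000_000   # civilian labor force, 1973 (approx.)

claims_2015 = 270_000           # weekly initial claims, 2015 (illustrative)
labor_force_2015 = 157_000_000  # civilian labor force, 2015 (approx.)

def claims_share(claims: int, labor_force: int) -> float:
    """Weekly initial claims as a percentage of the labor force."""
    return 100.0 * claims / labor_force

print(f"1973: {claims_share(claims_1973, labor_force_1973):.3f}%")
print(f"2015: {claims_share(claims_2015, labor_force_2015):.3f}%")
# Similar raw claim counts, but the 2015 share is far lower because
# the labor force has nearly doubled since 1973.
```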

3. Investment fees have been slashed to the ground.

The only force more powerful than compound interest is the tyranny of compounding costs.

The good news is that investment fees have plunged in the last two decades. According to a recent report by Morningstar:

  • 63% of mutual funds and ETFs cut their expense ratios in the last five years.
  • Since the early 1990s, average investment fees across all funds have declined by more than a third, from nearly 1% to 0.64%. On a $10,000 investment that earns 6% a year for 25 years, that’s an extra $5,000 that goes to you, rather than advisors.
  • Passive ETF fees now basically round to zero percent.

If the economy found a way to grow by an extra 0.35% a year forever, it would be considered a miracle. But investors have done just that. And given how competitive the industry has become, I doubt this trend is over.
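To get a feel for that compounding effect, here is a minimal sketch using the figures above; the 6% gross return is an assumption, and the exact dollar gap depends on the horizon and how fees are modeled, so treat the output as a ballpark rather than Morningstar’s precise figure.

```python
# "Tyranny of compounding costs": $10,000 grown for 25 years at an
# assumed 6% gross annual return, with the old (~1%) versus the new
# (0.64%) average fund fee deducted from each year's return.

principal = 10_000
gross_return = 0.06   # assumed gross annual return
years = 25

def ending_balance(annual_fee: float) -> float:
    """Future value with the fee subtracted from each year's return."""
    return principal * (1 + gross_return - annual_fee) ** years

high_fee = ending_balance(0.0099)  # "nearly 1%"
low_fee = ending_balance(0.0064)   # today's average

print(f"At ~1% in fees:   ${high_fee:,.0f}")
print(f"At 0.64% in fees: ${low_fee:,.0f}")
print(f"Difference kept by the investor: ${low_fee - high_fee:,.0f}")
```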

4. Household formation is picking up.

A lot of the reason the economy has been slow for the last five years is that household formation plunged. Young adults moved in with their parents, which meant they didn’t need new homes. Since new home construction is a big economic driver, the economy had a hard time getting moving.

Now things are changing. Household formation is at the highest level in a decade:

[Chart: U.S. household formation]

5. Things that used to be really deadly are way less deadly.

Motor vehicle deaths are down 60%:

[Chart: U.S. motor vehicle deaths]

Heart disease deaths are down by half:

[Chart: U.S. heart disease deaths]

Just these two mean more than a million people a year are alive who would have been dead just a few decades ago.

6. More people are saving for their own retirement.

People are living longer than ever, and public retirement systems are strained. The good news is that 401(k) participation is rising, and surging among young workers. According to a report by Wells Fargo:

Participation in the 401(k) plan among millennials has reached 55% compared to 45% in 2011. For newly hired eligible employees (meaning those who have reached the one year mark of employment), participation has increased from 36% four years ago to 48% in 2015. In addition, employees in a pay range of $20,000 to $40,000 in salary are participating at a rate of 59% versus 47% four years ago.

7. Falling healthcare inflation will save the government hundreds of billions of dollars.

In 2006, the Congressional Budget Office forecast that by 2016 the average Medicare recipient would cost more than $15,000. The latest estimate is a little over $11,000. The amount of money this saves over previous estimates – the estimates that led people to think the government was bankrupt and the dollar heading for collapse – is insane. The New York Times wrote last summer:

The difference between the current estimate for Medicare’s 2019 budget and the estimate for the 2019 budget four years ago is about $95 billion. That sum is greater than the government is expected to spend that year on unemployment insurance, welfare and Amtrak — combined.

8. Student loan borrowing is declining.

Thanks to a clamp-down on for-profit schools – where so much student debt came from – students aren’t borrowing as much as they used to:

Federal and private-loan lending totaled $106 billion for the 2013-14 academic year, down 8% from the prior year, according to a report to be released Thursday by the nonprofit College Board. The decline marks a significant reversal in borrowing, which peaked at $122.1 billion in 2010-11 after rising for years.

9. High school graduation rates are at a record high.

This is the knowledge economy, where education, connections, and ideas are more important than at any time in history. So this is great news:

More American students are graduating from high school than ever before, according to new data from the Department of Education.

The national graduation rate hit a record high of 81 percent in the 2012-13 school year, the data show.

10. Childhood obesity rates are falling off a cliff.

Something is clearly going right here, and it bodes well for future healthcare costs:

Federal health authorities on Tuesday reported a 43 percent drop in the obesity rate among 2- to 5-year-old children over the past decade, the first broad decline in an epidemic that often leads to lifelong struggles with weight and higher risks for cancer, heart disease and stroke.

The drop emerged from a major federal health survey that experts say is the gold standard for evidence on what Americans weigh. The trend came as a welcome surprise to researchers. New evidence has shown that obesity takes hold young: Children who are overweight or obese at 3 to 5 years old are five times as likely to be overweight or obese as adults.

11. 2008 was the worst financial crisis in nearly 100 years. Seven years later, unemployment is 5.3%, the stock market is at an all-time high, the dollar is surging, inflation is low, and oil output is the highest in decades.

That’s a good indication of how adaptive and resilient the U.S. economy is. It’s like we got stage-four brain cancer and ran a marathon a few years later. If we can go through 2008 and bounce back as fast as we did, run-of-the-mill recessions shouldn’t worry you at all. Keep this in mind when forecasting doom.


MONEY Opinion

Medicare Is Part of the Solution to Rising Health Care Costs

Astrid Riecken—Getty Images: Seniors listen as Democratic House Leader Nancy Pelosi marks the 50th anniversary of Medicare and Medicaid on Capitol Hill, July 29, 2015, in Washington, D.C. Pelosi was joined by Senate and House lawmakers who oppose any cuts to the program for seniors.

The health insurance program's massive size gives it the power to set prices that providers will accept.

Medicare turns 50 on Thursday, riding high in the polls but under attack from presidential candidates proposing benefit cuts or even phasing out the U.S. healthcare program for older people.

When President Lyndon Johnson signed the law, half of Americans age 65 or older had no health insurance. Today, just 2% go uncovered.

And the public really, really likes Medicare, which last year covered 54 million Americans. A poll released earlier this month by the Kaiser Family Foundation found strong support across political party lines for Medicare and Medicaid, which insures low-income Americans and became law alongside Medicare.

But Medicare still sticks in the craw of conservatives.

“We need to figure out a way to phase out this program for (people who are not already receiving benefits) and move to a new system that allows them to have something, because they’re not going to have anything,” Republican presidential contender Jeb Bush told an audience of conservatives in New Hampshire last week.

Bush later tried to walk back his comment, but he is not alone in his desire to euthanize Medicare just as it hits midlife.

Fellow Republican presidential candidate Chris Christie has proposed raising the eligibility age for Medicare and Social Security to 69.

Beyond all the noise lies an important question: how best to pay for health care for our aging population.

In 2050, the 65-and-older population will reach 83.7 million, almost double what it was in 2012, according to the U.S. Census Bureau. That, along with rising healthcare costs, means Medicare will account for a rising share of the federal budget in the years ahead.

The line of attack against Medicare is that its finances are not sustainable, but what we really have is a healthcare cost problem, not a Medicare problem.

The program is funded through two trust funds.

The Hospital Insurance fund, which finances Medicare’s Part A hospital benefits, receives money mainly from the 1.45% payroll tax that employees pay and employers match. This fund is projected to run dry in 2030, leaving Medicare able to meet only 85% of that part of the program’s costs.

Meanwhile, the Supplementary Medical Insurance trust fund finances outpatient services and the Part D prescription drug program. It gets 75% of its funding from general tax revenues and 25% from yearly premiums that beneficiaries pay. It will stay solvent because contributions are reset annually to match anticipated spending, but that is expected to put more pressure on government and household budgets in the years ahead, especially if healthcare inflation takes off.

Medicare actually does a better job of restraining spending growth than private health insurance because its massive size gives it the power to set prices that providers will accept. And the Affordable Care Act mandated constraints in provider payments that have been paying off.

Medicare spending has been slowing in recent years. Reflecting that, the annual Part B (outpatient services) premium has been flat at $104.90 for the past three years.

Medicare’s trustees projected last week that the program’s total spending as a percent of the gross domestic product would rise from 3.5% to 5.4% in 2035. That is “not nothing, but neither is it insurmountable,” says Jared Bernstein, an economist and senior fellow at the Center on Budget and Policy Priorities.

The gap can be closed through efficiencies. We could change the law to allow the government to negotiate drug prices with pharmaceutical companies, for example. Or a small increase in the Medicare payroll tax from 1.45% to 1.8% would do the trick.

The conservative plan, however, is to reduce the value of Medicare’s benefits. That can be done by raising the eligibility age, effectively a lifetime benefit cut, or by replacing the program’s set of defined benefits with something called “premium support.”

With that, people would receive a voucher that they could use to purchase private insurance plans. Bush was showcasing that idea in his New Hampshire remarks.

A phase-out or redesign of Medicare will mean higher out-of-pocket costs in a program where the median income of enrollees is just $23,500 per year.

“These folks would unquestionably be worse off in the absence of Medicare,” Bernstein said.

Instead, we should continue working on reforming the delivery of care and negotiate savings with pharmaceutical companies.

And we should wish our most important health insurance program a happy birthday. Medicare, you are part of the solution, not the problem.


TIME Opinion

What We Can Learn From the Bobbi Kristina Brown Tragedy

And why we thought we could save her

I woke up today with two questions on my mind and here is the first: Could we have saved Bobbi Kristina?

Bobbi Kristina Brown died Sunday night at the age of 22 after six months in a coma and a lifetime under a harsh, unforgiving spotlight. As Oprah Winfrey said on Twitter in response to the heartbreaking news: “Peace, At Last.”

Since Whitney Houston and Bobby Brown’s only child was found submerged in her bathtub on Jan. 31, I, and millions of others, have kept watch. We hoped for the best, and at the same time we raged at the preventable, predictable circumstances that led to her demise.

Each day, information would trickle online, drop by drop, from one unnamed source after another. And each day, the “news” became more and more difficult to decipher. Is she breathing on her own? Why is the family feuding? She’s getting better? No, she’s getting worse? Were drugs found in her system or near her body? Was she abused? Did her boyfriend have something to do with this? Would she make it? Did she already die? Dr. Phil episodes, leaked photos of her body, speculation about her status in hospice, lawsuits and an open letter from Tyler Perry all confirmed that even with death hovering, absurdity and chaos would always swirl around her.

But in all of the confusion, one clear constant remained: The public’s desire to armchair diagnose every one of the 22 years of her life.

Judging from the tragic headlines, columns and private conversations uttered at dinner tables, we all believe that we, as wise, compassionate individuals, knew exactly what went wrong and what it would have taken to give her a life much easier than the one she lived. Despite actually knowing nothing about the ins and outs of her day-to-day reality or of the people who loved her, the wisdom we think we have is why we are able to shake our heads and wax poetic about the alchemy of counseling, rehab, boundaries and love that she should have received at various points in her life.

This “we know better” attitude is why we are able to scold the adults who surrounded her for allowing her to appear so clearly troubled and broken on a reality show immediately following her mother’s death. It is why we were able to identify the pain on her face in every public appearance and ask “Who is taking care of her?” It is why we give one another the sad, knowing face and utter horrible, insensitive missives like “With those parents, she never had a chance” or “If someone just could have gotten her away from all of that…”

In other words, we believe we know what could have saved Bobbi Kristina. And we say it every chance we get.

But why? Why do we speak with such presumption and confidence? Our intentions are undoubtedly good and sound like the very essence of empathy and humanity. We do indeed wish we could have saved her, but perhaps we also say these things to validate ourselves; because it makes us feel good to think that if given the chance, in some make-believe world where we had influence and impact on her life, we — you and I — could have done the right thing and made all the difference.

It is an assertion of powerlessness (she was too far outside of our reach). And that powerlessness, while painful, is also oddly comforting. It preserves our self-righteousness and allows us to sleep at night while watching a child struggle so hard and fall so fast. We hold on tight to the idea that her life was out of our control, with the implication that we certainly would have helped if we could have.

But what if that isn’t true?

The truth is that for the children within our own spheres of influence, the ones languishing in foster care, or being harassed online, or being abused in the home down the street, or struggling with grief or depression or addiction or any of the ills that we assume troubled Bobbi Kristina, most of us do nothing. We walk blindly by, ignoring their realities and talking among ourselves about what should be done rather than actually doing it.

Maybe we are not as empathetic as we pretend to be in times of celebrity tragedy.

Because it is much easier to mourn the ones we “couldn’t” save than it is to acknowledge our crushing failure to save the children we could have but didn’t.

Today, Bobbi Kristina is dead, and yes, I am one of the ones praying that she is resting joyfully in the arms of her mother and her God. But for the children who remain, are we just as confident and sure about what they need? Do we know the magic formula to save the Bobbi Kristinas in our own communities — those not separated from us by a glass case of celebrity but who are so close they are practically daring us to reach out and touch them? Do we know what they should be protected from and given? They too are crying out for love, care and all of the collective support and wisdom that we offered Bobbi Kristina in comment sections, on Twitter and from afar.

So the second question on my mind today is this: Can we save them? Or better yet, will we?
