TIME Big Picture

Questions About a 5.5-inch iPhone

There’s already a bit of controversy surrounding the launch of Apple’s new iPhones this fall.

Most informed sources agree that Apple will introduce an iPhone 6 sporting a 4.7-inch screen, compared to the 4-inch screen on today’s iPhone 5s and 5c models. But several rumors coming from the supply chain suggest Apple is also preparing a 5.5-inch version of its newest iPhone.

The possibility that Apple could be making a 5.5-inch iPhone leads to a few important questions.

Why make a giant iPhone?

The first: If Apple really wants the 4.7-inch model to be what we in the industry call the “hero” model — one that would drive the majority of iPhone sales going forward — why even make a 5.5-inch model at all?

While the industry will sell about a billion smartphones this year, fewer than 70 million will feature screens larger than five inches. However, the answer to this question is actually pretty simple: While demand for smartphones larger than five inches is minimal in the U.S. and Europe, there is great interest in smartphones in the 5.5- to 5.7-inch range in many parts of Asia.

For example, well over 80% of smartphones sold in Korea have screens of at least five inches. Such phones have also become big hits in China and other parts of Asia, where larger smartphones double as small tablets, driving demand in these regions for what are called “phablets.”

I suspect that if Apple is making a larger iPhone 6 in the 5.5-inch range, it will most likely be targeted at these Asian markets where demand for large smartphones is relatively strong. This is not to say that Apple wouldn’t offer a 5.5-inch iPhone in the U.S. — I believe there could be some interest in a phone of this size — but like most of my colleagues in the research world, I believe the lion’s share of those buying the new iPhone would want the 4.7-inch version, if that is indeed its size when it comes out.

Would you buy it?

The second question: If Apple does bring a 5.5-inch iPhone 6 to the U.S. market, would you buy one?

For the last month or so, I have been carrying three smartphones of various screen sizes with me all day long, and I have learned a lot about my personal preferences. In my front pocket is an iPhone 5, which has a four-inch screen. In my back pockets are a Galaxy Note 3, which has a 5.7-inch screen, and the new Amazon Fire, which sports a 4.7-inch screen — the same size that is purported to be on the new iPhone 6 when it comes to market.

Here are my observations. Keep in mind they are personal observations, but I suspect that my preferences are pretty close to what the majority of the market may prefer when it comes to the screen sizes in a larger smartphone.

I like to keep my primary smartphone with me all of the time, so my iPhone 5 is in my front pocket. Screen size is very important in this case: at four inches, it easily fits in my right-front pants pocket and is easy to access as I need it. The other important thing about the four-inch screen is that I can operate it with one hand. From a design standpoint, one-handed operation has been at the heart of all iPhones to date, as Steve Jobs was adamant that people wanted to be able to use their phones with one hand. So the idea of possibly moving up to a new iPhone with a 4.7-inch screen intrigued me, as I wondered if a smartphone with a screen this size would fit in my pocket and still be usable with one hand.

So when I got to test the 4.7-inch Amazon Fire phone, I immediately put it in my front pocket. Thankfully, it fit well and continued to be just as easy to access as the smaller iPhone 5 with its four-inch screen. Also, while I had been skeptical that I could still use it with one hand, since I have medium-sized hands, I found that I could operate the Amazon Fire with one hand easily. The other thing about a 4.7-inch screen is that the text is larger; for my aging eyes, this is a welcome upgrade. However, on these two issues, the Galaxy Note 3, with its 5.7-inch screen, flunked both tests. This phablet-sized smartphone did not fit in a front pocket, nor could I use it with one hand.

That led me to wonder whether a Samsung Galaxy S5 smartphone, with its five-inch screen, would work in these same scenarios. So I took a Galaxy S5 that I have, put it into my front pocket and tried to use it with one hand. To my surprise, it also worked well. But I had another smartphone with a 5.2-inch screen and, amazingly, that one failed both tests. At least for me, a smartphone with a screen up to five inches fit in my pocket and allowed one-handed use, but anything larger than that was a bust.

I also did this test with some of the women in our office. We have a very casual workplace and most wear jeans to work, so I had them try the 4.7-inch Amazon Fire. They were also surprised that it fit O.K. in their front pockets and could still be used with one hand. However, like me, they found that a screen larger than five inches neither fit in their pockets nor allowed one-handed use. These women did point out, though, that most women are less likely to carry a smartphone in their pockets, as more keep them in a purse or handbag. That being the case, at least for the women in our office, a smartphone with a 5.5-inch screen was acceptable, although one person said she would prefer the smaller 4.7-inch smartphone if push came to shove.

Ultimately, it probably comes down to personal preference, yet I suspect that an iPhone with a 4.7-inch screen would take the lion’s share of Apple’s iPhone sales if this is indeed the size of the company’s new iPhone.

What about tablets?

But a 5.5-inch smartphone raises a third question that, at the moment, has stymied many of us researchers: Would a 5.5- or 6-inch smartphone eat into the demand for a small tablet?

I find that in my case, even though I do use the 5.7-inch Galaxy Note 3 often for reading books while out and about or while standing in line, my iPad Mini is still my go-to tablet due to its size. I also have a 9.7-inch iPad Air with a Bluetooth keyboard, but I almost exclusively use that tablet for productivity and less for any form of real data consumption.

Some researchers have suggested that, especially in parts of the world where larger smartphones or “phablets” are taking off, this has really hurt the demand for smaller tablets — and that’s partially why demand for tablets has been soft in the last two quarters. Unfortunately, the data is still inconclusive on this, but my gut says that “phablets” are at least having some impact on demand for tablets in many regions of the world.

With the expected launch of Apple’s new larger-screen iPhones just around the corner, those planning to buy a new iPhone might want to keep my experience in mind. There’s a very big difference between how a person uses a smartphone smaller than five inches and one with a larger screen. For those who keep their phones in their pockets and/or want to use them with one hand, a smartphone smaller than five inches is the best bet.

But for those who don’t keep their smartphones in their pockets, the virtue of a larger screen is that it delivers much more viewing real estate. Consequently, it’s much easier to use for reading books, browsing web pages and other tasks where a large screen delivers a real benefit. The good news is that if these Apple rumors are true, people will have better options coming from Apple. For the first time in the iPhone’s history, Apple might give users multiple screen sizes to choose from.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.


Understanding Apple’s ‘Continuity’ Strategy


Imagine all your screens working together harmoniously.

For years, I have been writing about the many screens in our lives. We have at least three primary screens we use almost on a daily basis: a TV, a PC (laptop or tablet) and a smartphone.

And lately, more screens have been showing up in our cars, appliances and wearable devices. However, even within a single company’s lineup, the user interfaces and data on these screens too often differ from one device to the next.

For example, the Mac’s user interface is different from the user interface on Apple’s iOS devices. And Google’s Android user interface on its tablets and smartphones is different from what’s found on the company’s Chromebooks. The same goes for these companies’ TV products. Also, some of your data is stored locally, so it’s not shared with or available on any other device you own.

At Apple’s recent Worldwide Developers Conference in San Francisco, the company introduced a concept it calls “continuity.” What this basically means is that in the future, the new Mac operating system, called Yosemite, will look and feel much more like an iOS-based device. In fact, if the continuity theme plays out as I expect it will, Apple will eventually make all of its products — including Apple TV and Apple CarPlay and any wearable devices — have the same look and feel, making it very easy to go from one device to another seamlessly. Also, in this continuity idea, everything would be in sync. That means if you change something on one device, it would be changed and updated on any other Apple device you had tied to the company’s ecosystem of apps and services.

Over at Re/code, my good friend Walt Mossberg did a great piece called “How the PC Is Merging With the Smartphone.” In it, he talks about Apple’s continuity approach to make the PC act, look like and work like an iPhone or iPad. He also mentions how Google is doing something similar with Chromebooks and Android.

For many years, I have thought that in an ideal digital world, people would ultimately have many screens as part of their lifestyles. In that vision, I also had this idea that all of these screens would be connected, work together seamlessly and, perhaps more importantly, would always be in sync with one another. The other part of this vision is that the user interface on each of these devices would be the same. I have always felt that people would be more likely to use new devices if each device worked the same as any other device they already had.

In a sense, I think Apple’s continuity strategy pretty much maps to this vision I have written about for two decades. Now, lest you think I am a serious visionary when it comes to these types of connected ecosystems, keep in mind that this vision came out of my own need for something like this. For most of my career, I really only had to deal with one computing screen — that being the one on a personal computer.

However, my digital life became more complicated when I got my first feature phone. It, too, had apps on it, albeit very limited ones. But the operating system and user interface on my feature phone were completely different from the ones on my PC. I had to learn how to use it from scratch. Then, as early as 1990, I started to use tablets. Again, because of the form factors and designs, the operating systems and user interfaces on my first three or four tablets were all different. I had another set of learning curves to contend with before I could use them with any sense of ease. Also, all of the data on these devices was local, and none of these devices talked to each other.

What I wanted was for all of my devices to work together seamlessly, talk to each other, have the same operating system and user interface, and to always be in sync. Interestingly, we have had the technology to deliver on this vision for over five years, but only now have the big companies started to really move us in this direction. If Apple’s overall continuity strategy is fully realized, it would mean that every one of my Apple devices will look and act alike, talk to each other and always be in sync. If I get a new device that is part of Apple’s portfolio, I would have no new learning curve.

For consumers, this would be a big deal. First, if you learned the user interface on one device, it would be the same on all of your devices. Second, the apps and data would all be the same or extremely similar, and available on most of the screens you use. The exception would be wearables. These screens bring limitations, so any interface and operating system would be highly streamlined. However, even in this case, they would work very much like the other devices and, more importantly, would be connected to those devices either directly or through the cloud. And third, all of the data on all of the devices would be in sync and, at least in theory, would work together seamlessly.

Apple is not the only one driving us in this direction. Microsoft and Google are similar in that all of their respective devices will eventually look, feel and work in similar ways, tying directly into their cloud-driven ecosystems. The goal, of course, is to hook consumers into one particular ecosystem, making it hard to leave once you’re invested in the products that are tied to their respective apps and services. At the moment, it appears to me that Apple has the broader ability to deliver on this “continuity” concept since it owns the devices, processors, interfaces and services layer, making it easier to make all of its devices work together with a look and feel that’s similar across all of the company’s products.

Google would like to do the same, but there is still too much fragmentation in the Android world at the moment. But over time, I suspect it will achieve a similar level of device continuity. Microsoft’s concept would be the most challenging to deliver due to its various operating systems. And with the acquisition of Nokia, Microsoft adds Android to its product line, which has a completely different ecosystem tied to it. However, all three companies are working hard to deliver on this continuity vision, and as they succeed over time, it should make it easier for customers to better fit these companies’ devices into their digital lifestyles.



Where Wearable Health Gadgets Are Headed

A person wearing a Fitbit fitness band types on a laptop. Getty Images

Every once in a while, I’m shown a tech product and I can’t figure out why it was created. One great example of this was a two-handed mouse I was shown at a large R&D-based company many years ago.

I was asked to review it to see if the company should bring it to market. After trying to use it and seeing the complicated things you had to do to make it work, I told them it would never succeed. However, the engineer behind it was convinced he had created the next great mouse and was determined to try to get it to market. Thankfully, the management at this company killed it, as it would have been a complete failure and provided no real value to any customer. The technology was available to create it, and this engineer did it simply because he could.

In the world of tech, most successful products address serious needs that people have. This is very much the case behind the current movement to create all types of wearable devices designed to make people healthier.

Folks behind products like the Jawbone Up, Nike FuelBand, Fitbit and others have solid backgrounds in exercise and exercise science. They wanted to create stylish wearable products that could monitor steps, count calories and track various other fitness metrics. Other products, such as those from iHealth — which has created a digital blood pressure device and a blood glucose testing kit that tie into smartphones — were designed by people close to the health industry who saw a need for new digital health monitoring tools.

At a personal level, I’m pleased that these folks are utilizing key technologies like accelerometers, sensors, Bluetooth low-energy radios and new types of semiconductors to create products that aim to improve people’s health. Readers of this column may remember that two years ago I suffered a heart attack and had a triple bypass. As you can imagine, this was a serious wake-up call about taking better care of myself. Since then, my Nike FuelBand has been my 24-hour wearable companion: I check its step-monitoring readout religiously to make sure I get in the 10,000 steps each day that my doctor has required of me as part of my recovery regimen.

While I would like to think that these tech folks are doing it for altruistic reasons, the bottom line is that there is a lot of money to be made in health-related wearables. The folks at IHS published a good report last year on the market for wearables, which is mostly driven by health-related apps.

Most researchers that track this market believe that the wearable health market will represent at least $2 billion in revenue worldwide by 2018. In many developed countries around the world, people are becoming much more health conscious. Reports seem to come out daily, talking about the good or bad effects some foods have on our lives. And more and more, we hear that we need to exercise to either maintain our health or to improve it.

So a combination of the right technology becoming available and an increased awareness of better health has created this groundswell of health-related wearable devices and digital monitoring tools designed to help people live healthier lives. But there is another major reason we are seeing more and more health-related wearables and digital monitoring products come to market now, and it’s one of healthcare providers’ major initiatives: In simple terms, it’s cheaper to keep a person healthy than to cover their costs in the hospital when they’re sick.

Almost all the major healthcare providers have created websites with all types of information about managing one’s health. These sites offer information and programs for cancer patients, diabetics and people with many other health issues, helping them better manage these diseases. Health insurers are also getting behind the various digital monitoring tools and health wearables, viewing them as vital tools that can help their customers stay healthier and keep them out of the hospital as much as possible.

Interestingly, as I talk to many of the executives of these health-related wearable companies, many of them claim to be on a mission. Yes, they admit there is money to be made, but most I speak with are serious about giving people the technology to help them keep themselves healthy. In fact, in at least two cases, the executives I have talked to have special funds they personally set aside to donate to major health causes as part of their personal commitment to using technology to make people healthier.

While there is some chatter about the market for wearable technology not being sustainable, I suspect that it will stay on track, with the technology eventually becoming integrated into everyday objects such as watches, hats and even clothes as part of a broader trend called “self-health monitoring.” This trend basically says that people will want more and more information about the number of calories they’ve burned, the number of steps they’ve taken, their pulse and other metrics. Thanks to these new technologies, this data would be available to them in a variety of ways.

Of course, not everyone may want to know these health-related data points, but the research shows that at least one-fourth of U.S. adults have these types of health-related wearable monitoring devices on their personal radars. The fact that this market is growing around 20% or more each year suggests that we could continue to see growth for at least another three years. As these devices become part of our wardrobes, they could eventually fade into the background while still providing health-related info that many people may need to stay motivated. This is the goal that the tech world has embraced wholeheartedly, providing more and better tools for this purpose.



Why Basic Coding Should Be a Mandatory Class in Junior High


One of the roles our education system is supposed to play is to prepare kids to be responsible citizens with the skills needed to be successful in adulthood. All of the various classes — starting in kindergarten, where they lay out the fundamentals of reading, writing, sharing and even early math — are designed to be a set of building blocks of knowledge. Each consecutive year introduces new blocks in kids’ education, designed to get them ready for life so that they’re capable of earning a living.

For some reason, all of the classes I took from about third grade forward are still burned into my mind. Even today, I can go back in time and remember how my fifth-grade teacher got me interested in math, or how my seventh-grade Spanish teacher’s “repetitive” drills crippled my ability to learn that language.

However, one class in seventh grade has become very important to me, as I use the skills I learned in that class every day of my life: my typing class. I can still envision that class as if it were yesterday, with my seat in the middle of the first row, learning to touch-type on an IBM Selectric typewriter. I even remember the line I had to type over and over again as part of a test to determine how fast I typed: “Now is the time for all good men to come to the aid of their country.” I can still touch-type that sentence today in about five seconds. Back then, the goal was to touch-type at about 90 words per minute.

While the typewriter is now a thing of the past, typing and keyboards remain highly relevant today. In most cases, they’re the main way most of us enter data into our computers. And understanding the QWERTY layout is important when using a touch keyboard or even when programming our set-top boxes or other devices that use a keyboard for input.

Now, one could argue that kids these days seem to intuitively know how to use technology. Even at an early age, they start touching screens and keyboards, quickly learning how to navigate all types of digital devices. So there’s no need for kids to learn how to code, right? While that’s true to some extent, fundamentally understanding how these technologies work, and how they can ultimately be customized for even greater functionality, would enhance kids’ experiences with digital devices and could become much more important to them later in life.

Anyone who has taken an introductory programming class will tell you that, at the very least, it helped them understand basic programming logic, structure and design. Even those who did not go on to become software engineers say that learning the fundamentals of programming at the coding level has helped shape how they think logically, has sharpened their common sense and, in a lot of cases, has helped them get more out of the smartphones, tablets, computers and other devices that now populate their lives.

We live in a digital age in which technology plays a role in much of what we do every day. We use technology at the office, at school and at home. Digital devices are all around us. However, in many cases, we barely scratch the surface of what technology can do for us. We pretty much accept the fundamental role technology plays in our lives and mostly use the basic functionality of each of our digital devices.

Yet, when hardware and software designers create devices, they usually add a great deal of features and functions that most people barely use. That’s O.K. in a broad sense, since we “hire” our devices to handle things like phone calls, messaging, music and entertainment. But as technology has evolved, especially mobile technology, we are now holding in our hands real personal computers that can do much more than these fundamental functions. Even our TVs and appliances are becoming multipurpose devices designed to be more than meets the eye.

While most people will never get under the hood to try to change the code of an appliance or device they use, learning the fundamentals of the software code that runs our devices gives a person a greater understanding of how those devices work and makes them more inclined to go beyond the devices’ basic functionality.

A coding class would also help them gain a greater understanding of how technology is designed and how software serves as the medium for triggering all of a device’s capabilities. This type of knowledge could be important in a future working environment where they’re called upon to use technology as part of their overall job.

It goes without saying, but understanding how technology works makes it much easier for a person to get the most out of it.

In an important article on GreatSchools.org, author Hank Pellissier includes a comment from a recognized authority on programming. Douglas Rushkoff, author of Program or Be Programmed and an evangelist for Codecademy, is one of the nation’s leading digital crusaders. He argues that our schools need to incorporate computer programming into the core curriculum or get left behind. “It’s time Americans begin treating computer code the way we do the alphabet or arithmetic,” he writes.

Mr. Rushkoff sees the need to teach coding in order to create more hardware and software engineers to meet the rising demand for skilled tech workers. I agree wholeheartedly, since the U.S. is far behind in building a robust technical workforce within its own borders. We rely heavily on coders in China, India and other parts of the world to meet the high demand for programming skills. I also agree that coding is just as important as other basic learning skills, since technology is now an important part of all of our lives. Understanding coding would give our kids a foundation in how technology works, serving them well even if they do not become professional programmers.

One of my passions has been to help bring technology into the education system: I have worked on the sidelines with the State of Hawaii for decades to champion the role of personal computers in education. It has been rewarding to see how computers have impacted the educational process throughout the U.S., with every school system in America now having some type of computer-aided learning program in use today.

But it’s time for schools to realize that technology is now a part of our lifestyle. Helping our kids understand how technology works at the ground level and how it can be used to its fullest potential needs to be a building block that’s added to the educational curriculum. At best, it could get kids interested in tech as a career. At the least, it could equip them to handle more and more technology-related devices that are now part of our lives.



The Challenges of a Dick Tracy-like Watch-Phone

A Galaxy Gear 2 smartwatch sits on display at the Samsung Electronics Co. pavilion on day two of the Mobile World Congress in Barcelona, Spain, on Tuesday, Feb. 25, 2014. Simon Dawson--Bloomberg / Getty Images

I have been testing the Samsung Gear smartwatch for some time now and have actually become a fan of these types of watches. My first smartwatch was the Pebble, but its limited functionality drove me to try out the Samsung Gear since it gives me something that I really wanted in a smartwatch: email alerts and the ability to read my email on the smartwatch itself.

Like many people in the workplace, I get hundreds of emails a day, although very few demand immediate action. But given my type of business, if a client emails me, I like to respond as fast as possible. So these smartwatch alerts allow me to be highly responsive to client requests. Yes, sometimes they come during a meeting or while I am doing something where I can’t respond immediately, but being aware of these requests as they come in is important to me and plays heavily into how I manage my workday.

Recently, word leaked that Samsung was working on adding a phone feature to a smartwatch, and it got me wondering whether this is a good idea. I grew up in the era of Dick Tracy, and I have to admit that I thought his watch-phone was really cool — as a kid, I really wanted one. But as I look at the idea now, I really wonder if a watch-phone would work for me in the real world. More importantly, would consumers even want it? The idea of always lifting up my arm to speak into a watch, letting everyone around me hear what’s being said to me, is just not appealing, even if it seems cool.

Most likely, such a smartwatch could be tied to a Bluetooth headset so a person could handle voice calls more discreetly, but a lot of people are uncomfortable having a headset in their ear all of the time, and for many, it makes them look too much like a geek. I also suspect the user interface would be pretty clumsy, even if it were voice-controlled.

The idea of adding a phone feature to a smartwatch comes under the heading of what many in the industry call feature creep. Simply put, engineers keep trying to pack features into small packages; sometimes it works, but most of the time it does not. One good example is some of the features Samsung threw into its Galaxy S4 smartphone, especially the hover feature that the majority of people never used. Thankfully, the company took that out of the Galaxy S5 and seems to have learned that in some devices, less is more.

I have now used about seven smartwatches, and each one has tried to cram a lot into a very small package. These watch screens are 1.5 inches in most cases, and while they are sharp and easy to read, putting more features and more text into this small space most often does not work well at all. The good news is that with the Pebble, the Samsung Gear and others, most developers are creating simple apps that work on a small screen and deliver what we call “snacking data,” such as news alerts, message alerts and, in some cases, email headlines. Also, most of these watches so far are tied to smartphones, serving as extensions of the smartphones themselves.

However, I am starting to see a lot of work being done behind the scenes by companies trying to make the smartwatch a standalone device. Not being connected to a smartphone would essentially make it a PDA of sorts in its own right, with all of its data, info and apps delivered directly to the watch. These watches wouldn’t be extensions of smartphones, as they are today.

Although Samsung has not actually shared any details about its supposed smartwatch-phone, it would not surprise me if that’s the direction the company might take with this device. While Samsung would still want to sell a lot of standalone smartphones — and a smartwatch-phone would never supplant these — from an engineering standpoint, Samsung and others may want to give consumers the option of having their smartphones on their wrists instead of in their pockets.

But would Samsung and others be doing this simply because they can? Or because consumers really want it? Think of the role your smartphone plays in your life today. Could you dump a great 4” or 5” screen that delivers tons of apps and services and instead use only a smartwatch-phone? I know I could not. That’s why I’m quite happy with my smartwatches being extensions of my smartphones, working together harmoniously.

Sure, there will be some early adopters who take the plunge should a smartwatch-phone hit the market. But I am very doubtful that these devices would ever be a hit with consumers. Rather, they would likely end up as engineering showcases for the companies that make them and, at least in my opinion, will never catch on with the broad consumer market.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.


Why the Maker Movement Is Important to America’s Future

I grew up in the age of Tinker Toys and Erector Sets. Both were meant to inspire me to be a maker instead of a consumer.

My first real tool was a wood-burning engraver with a cord so short it was almost impossible to use. When I started using it, I burned myself more than once and nearly started a fire in the house. How they sold this to kids in those days is now a mystery to me.

I was in Silicon Valley in the late 1970s, and I started to get more interested in the Homebrew Computer Club and similar user groups where people could get together and talk about tech-related interests. This was how I first got interested in computers.

Along the way, the idea of creating technology got sidelined as I instead started to write about it, chronicling its history. This led me to eventually become a computer research analyst instead of an engineer. This was probably a good thing, since I loved to take things apart but had very little interest in putting them back together. And I would have been a lousy programmer or tech designer. But this did allow me to watch the birth of the tech industry close up, witnessing how it developed and has impacted our world over the last 35 years.

Fast forward to today, and I am very excited about the Maker Movement. The more I look into it, the more I believe that it’s very important to America’s future. It has the potential to turn more and more people into makers instead of just consumers, and I know from history that when you give makers the right tools and inspiration, they have the potential to change the world.

So what is the Maker Movement? I found Adweek’s definition to be right on the money:

The maker movement, as we know, is the umbrella term for independent inventors, designers and tinkerers. A convergence of computer hackers and traditional artisans, the niche is established enough to have its own magazine, Make, as well as hands-on Maker Faires that are catnip for DIYers who used to toil in solitude. Makers tap into an American admiration for self-reliance and combine that with open-source learning, contemporary design and powerful personal technology like 3-D printers. The creations, born in cluttered local workshops and bedroom offices, stir the imaginations of consumers numbed by generic, mass-produced, made-in-China merchandise.

Over the weekend, I had a chance to go to the granddaddy of Maker Faire events, held at the San Mateo County Event Center about 20 miles south of San Francisco. The folks behind the event call Maker Faire the “greatest show and tell on Earth.” Sponsored by Make magazine, the event this year drew well over 120,000 people to check out all that’s new in the world of making things, such as robots, drones and mini motherboards and processors that can be used to create all types of tech-related projects.

As I walked the many show floors and looked at the various exhibits, I found that the maker movement, which started much like the Homebrew Computer Clubs of the past, is made up of makers who can be defined as anyone who makes things. While its roots are tech-related, there were people at the show teaching how to crochet and make jewelry, and there was even one area called Home Grown, where do-it-yourselfers showed how to pickle vegetables, can fruits and vegetables, and make jams and jellies. Another area focused on eco-sustainability, beekeeping, composting and growing your own food.

There are eight flagship Maker Faires, including the one in San Mateo that’s held in mid-May and one in New York City, which will be held Sept. 20-21. Other Maker Faires and Mini Maker Faires happen all over the world, including major faires planned in Paris, Rome and Trondheim, Norway, during 2014. Other U.S. cities with major Maker Faires include Kansas City, Detroit and Atlanta. Over 280,000 people attended these faires around the world last year.

According to Atmel, a major backer of the Maker movement, there are approximately 135 million U.S. adults who are makers, and the overall market for 3D printing products and various maker services hit $2.2 billion in 2012. That number is expected to reach $6 billion by 2017 and $8.41 billion by 2020. According to USA Today, makers fuel business with some $29 billion poured into the world economy each year. For more feedback on the economics of the Maker Movement, check out Jeremiah Owyang’s “Maker Movement and 3D Printing Industry Stats.”
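As a back-of-envelope check on those projections, the implied compound annual growth rates can be computed directly from the figures cited above (the function name here is mine, and this is only a rough sketch, not part of any cited analysis):

```python
def implied_cagr(start_value, end_value, years):
    """Compound annual growth rate implied by two market-size figures."""
    return (end_value / start_value) ** (1 / years) - 1

# $2.2 billion (2012) to $6 billion (2017): roughly 22% per year
print(implied_cagr(2.2, 6.0, 5))
# $6 billion (2017) to $8.41 billion (2020): growth slows to roughly 12% per year
print(implied_cagr(6.0, 8.41, 3))
```

In other words, the forecasts assume the market more than doubles its size in five years, then decelerates.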

One of the people who really understands the Maker Movement is Zach Kaplan, the CEO of Inventables, which is an online hardware store for designers in the Maker Movement. I think of his site as a kind of Amazon for Makers.

I met Kaplan at the recent TED conference in Vancouver, where he told me about the history of the Maker Movement and its culture. He pointed out that this movement is quite important, saying, “It has the potential of giving anyone the tools they need to become makers and move them from passive users to active creators.” I caught up with him at last weekend’s Maker Faire, where he likened the Maker Movement at the moment to where we were with the Apple II back in 1979. He said that in those days, the computer clubs and tech meetings fueled interest in tech and got thousands interested in software programming, semiconductor design and creating tech-related products. Of course, this begat the PC industry and the tech world we live in today.

The Maker Movement has the potential to bring techies and non-techies alike into the world of being creators. For some it will remain a hobby, but many could end up making great products and selling them online. In fact, Kaplan pointed out that Etsy has become an eBay-like vehicle for makers to sell their products to users around the world. Of course, eBay and Craigslist are also venues for them to sell their created wares.

Inventables.com sells CNC mills, laser cutters and 3D printers, and people are using them to create all types of products for themselves or to sell. Interestingly, Kaplan told me that over 80% of his customers are women, who pick up the tools and supplies to create all types of jewelry and items that they sell on Etsy. He said the hot thing at the moment is to use tools bought from him to create custom-engraved bracelets and jewelry. In his booth, he had examples of people making custom glass frames and 3D-printed coffee carafes, and he was letting people use a $600 CNC mill called the Shapeoko to create engraved wood and metal bottle openers.

I also asked Kaplan why this is taking off now. He said, “The key driver is that the cost of tools such as 3D printers, CNC mills and things like Arduino and Raspberry Pi boards and other core tech products has come down and is within reach of normal consumers.” You can also see how things like Make magazine, books, podcasts and YouTube videos for do-it-yourselfers have grown exponentially and are getting more and more people interested in being makers of some sort.

This movement has caught the attention of many major players in the tech and corporate worlds. At the San Mateo Maker Faire were companies like Intel, Nvidia, AMD, AutoDesk, Oracle/Java, Ford, NASA, Atmel, Qualcomm, TI, 3D Robotics and many more that see this movement as important and want to support it. I was able to catch Intel CEO Brian Krzanich near his booth and asked him why Intel was at the Maker Faire. He said, “This is where innovation is occurring and Intel has a great interest in helping spur innovation.”

As someone who has seen firsthand what can happen if the right tools, inspiration and opportunity are available to people, I see the Maker Movement and these types of Maker Faires as being important for fostering innovation. The result is that more and more people create products instead of only consuming them, and it’s my view that moving people from being only consumers to creators is critical to America’s future. At the very least, some of these folks will discover lifelong hobbies, but many of them could eventually use their tools and creativity to start businesses. And it would not surprise me if the next major inventor or tech leader was a product of the Maker Movement.

I do have one concern, though: As I walked the floors of the Maker Faire during the first day of the event, I did not see one African American family in the crowds while I was there, and I only saw two Hispanic families with kids checking things out. I actually dedicated an hour to walking all over the grounds looking for minority families during my time at the show. I would say the majority of the families there were white, although I also saw a lot of Asian and Indian families with their kids roaming the faire.

While most of the families I saw had boys with them, there were many young girls at the show, too. In fact, I took my 11-year-old granddaughter with me and she loved the Maker Faire. Perhaps there were a lot of African American and Hispanic families there on the second day, although I can’t be sure. The Maker Faire is a great show and is highly inclusive, and the Maker Movement itself wants everyone to participate. But the lack of folks from these two minority communities tells me that we in the industry and those in the Maker Movement need to figure out ways to get these groups interested in being makers, too. Without the participation of everyone, regardless of race, the Maker Movement may not reach its full potential, especially here in America.


Scio Pocket Molecular Scanner Is a Google-like Device for Physical Objects

The handheld Scio scanner can detect the molecular makeup of certain objects. Consumer Physics

A couple of weeks ago, I had a fascinating video call with a gentleman named Dror Sharon, the CEO of a company called Consumer Physics. He showed me a product called Scio that went up on Kickstarter last Tuesday: a handheld scanner that can scan physical objects and tell you about their chemical makeup.

“Smartphones give us instant answers to questions like where to have dinner, what movie to see, and how to get from point A to point B, but when it comes to learning about what we interact with on a daily basis, we’re left in the dark,” Mr. Sharon told me via Skype. “We designed Scio to empower explorers everywhere with new knowledge and to encourage them to join our mission of mapping the physical world.”

Consumer Physics launched a Kickstarter campaign to raise $200,000 for Scio (which is Latin for “to know”) on April 28, 2014. They reached that goal in 20 hours and raised a total of $400,000 in 48 hours.

At first Scio will come with apps for analyzing food, medication and plants. You could, for instance, use it to refine the ingredients of your home-brewed beer or figure out if an Internet site’s cheap Viagra is fake. Later, the company will add the ability to check cosmetics, clothes, flora, soil, jewels, precious stones, leather, rubber, oils, plastics and even human tissue or bodily fluids.

Early prototypes of the Scio physical object scanner. Consumer Physics

Mr. Sharon told me, “The spectrometer figures out what the object is based on an infrared light that reflects back to the scanner. Most objects have different absorption rates as they vibrate at different levels on the molecular scale. The app takes the data and compares it to a cloud-based database of objects in a distant data center. When it gets a match, it sends the results to the user’s smartphone.”
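Mechanically, that cloud lookup amounts to comparing a measured reflectance spectrum against a library of known spectra and returning the closest entry. Consumer Physics hasn't published its algorithm, so the following is only a minimal nearest-neighbor sketch of the general idea, with toy spectra and names of my own invention:

```python
import math

# Hypothetical "reference spectra": absorption readings at a few infrared
# wavelengths. Real systems use many more bands and far more sophisticated
# matching than a plain nearest-neighbor search.
REFERENCE_SPECTRA = {
    "aspirin":   [0.82, 0.41, 0.10, 0.55],
    "ibuprofen": [0.30, 0.72, 0.48, 0.12],
    "sugar":     [0.15, 0.20, 0.90, 0.33],
}

def closest_match(measured):
    """Return the reference substance whose spectrum is nearest (in
    Euclidean distance) to the measured readings."""
    def distance(ref):
        return math.sqrt(sum((m - r) ** 2 for m, r in zip(measured, ref)))
    return min(REFERENCE_SPECTRA, key=lambda name: distance(REFERENCE_SPECTRA[name]))

# A slightly noisy scan still lands on the right entry.
print(closest_match([0.80, 0.44, 0.12, 0.50]))  # aspirin
```

The point of the sketch is that noisy measurements can still match the right database entry, which is why a growing database makes the system more capable over time.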

According to Mr. Sharon, “The food app tells you calories, fats, carbohydrates, and proteins, based on your own estimate of the weight of the food you’re about to eat. (With many food packages, you can get the weight from the label). The app could tell dieters exactly how many calories they’re about to consume, while fitness apps can tell them how many calories they’re burning. That helps people figure out exactly how much exercise they need to do in order to burn off the food they’re eating.”

As I understand it, the food app can also gauge produce quality, ripeness and spoilage for foods like cheeses, fruits, vegetables, sauces, salad dressings, cooking oils and more. It can also analyze moisture levels in plants and tell users when to water them. Mr. Sharon suggested that you might even be able to analyze your blood alcohol level one day, but Scio is not currently approved as a medical device.

What I find most interesting is that as users conduct more tests, the app gets better and better at correctly identifying objects. The more people use it, the richer the database of information will be, which will add to the precision levels of the Scio over time and, more importantly, expand what it can understand. In the demo I saw on an Android smartphone, a ring fills up with circles on your smartphone screen to deliver the proper info, and it takes a matter of seconds to recognize something. Scio has to be about 20 millimeters from an object before it can be used for scanning, and the scanner uses Bluetooth low energy (BLE) to connect with a smartphone, which in turn needs to be running either iOS 5 or Android 4.3 or higher.

He also showed me its ability to scan what looked like an unmarked white pill. Scio correctly identified the chemical makeup of the pill as aspirin and even showed that it was made by Bayer. These are the first categories of physical products Scio will target, but eventually it could identify the chemical makeup of just about any object. That is why he likened it to “Google for physical objects.”

If you are a fan of police procedural TV shows like CSI or NCIS, you already know about things like mass spectrometers and other professional machines that analyze the chemical makeup of objects. These machines can be very large. Although there are some handheld versions available today, they’re all pretty expensive. Scio aims to do similar tasks with a device that can fit into your pocket. And when it ships, it will cost considerably less than professional solutions — as low as $149. Now, I am not suggesting that Scio is as powerful as professional mass spectrometers. However, from what I saw in the demo, it can do similar types of chemical analysis and do it pretty quickly, with the readout showing up on your smartphone.

While I find the idea of a pocket spectrometer interesting, it could have real impact if it were built straight into a smartphone. According to Mr. Sharon, this is ultimately where he sees his technology going. His initial focus is on food, medication and plants, although over time, it could be expanded to cover just about any physical object. Imagine being able to point the scanner in a smartphone at an apple and know exactly how many calories were in it based on its weight. Or imagine finding a stray pill lying around and wanting to know what it was before you dared ingest it.

I see this particular device as a game-changer of sorts. Today, all of our searches are done via text and numbers and through structured databases of some type. But with a consumer spectrometer initially designed as a pocketable device that could eventually be built into smartphones, gaining a better understanding of the makeup of the physical objects we come into contact with each day would vastly expand a person’s knowledge base. I could imagine it as part of a set of teaching tools to perhaps get more kids interested in science. Or it could be used in a science-related game as an important tool for solving a puzzle. At the other extreme, its impact on health-based problems and solutions could be enormous.

This is a technology to watch. If Scio gets smarter as more people use it — and perhaps someday finds its way directly into smartphones — it could add a new dimension to our understanding of the world around us. It could become an important means of connecting us to our physical world in ways we just can’t today.


Who Needs a Memory When We Have Google?


I have a confession to make: I’m an infomaniac.

In high school, I was on the debate team and got an early taste of what it’s like to dig deep into information so that I could support my debate arguments. Ever since, I have been hooked on gathering and consuming information as part of my lifestyle. I still get a morning paper delivered to my house and I start my day by checking up on the local news. When I get to the office, I log on to all types of general news and tech sites to catch up on what I missed overnight. Curiosity is in my DNA and my type-A personality drives me to be addicted to information. In my line of work, this is good, but I admit that I overachieve in this area and it sometimes becomes overwhelming.

For most of my early life, this was a manageable problem. In those days, I had newspapers, magazines and a set time to watch the network news every night at 6:00 PM. But with the dawn of the information age and especially the advent of the Internet, the number of information sources at my fingertips grew exponentially. I admit that, more often than not, I now have information overload. To put it another way, I have way too many tabs open in my brain at any given time.

It’s almost impossible to keep some of that info straight or, even worse, remember most of it. That’s where Google and other search engines come in. While I was at the TED conference recently, I talked with a lot of people from various industries. We often compared notes on things we were doing, people we know and items or events that we have been involved in over the years. What’s interesting is that the common denominator in many of these discussions was that when we got stumped on the name of a person, event or item we were talking about, instead of fretting about it, we all took out our smartphones and Googled for the answer. We almost always found what we were looking for, and the conversation continued only slightly interrupted.

In all honesty, that scene happens for me whether with business associates, friends or family. I clearly can’t remember all of the information I take in, so I now rely pretty heavily on Google and other search engines to either find the information I need at any given time or to jog my memory about the topic at hand.

I am sure that this has happened to a lot of people. The role technology plays as an extension of our memory banks has become quite important to us. I have found that when I’m digesting information now, many times I don’t even read the full stories — mostly just the headlines or a quick summary, knowing that if I ever have to recall it, I can just Google it.

In 2008, the Atlantic published a great article by Nicholas Carr titled “Is Google Making Us Stupid?” In this excerpt from the article, Carr says:

For me, as for others, the Net is becoming a universal medium, the conduit for most of the information that flows through my eyes and ears and into my mind. The advantages of having immediate access to such an incredibly rich store of information are many, and they’ve been widely described and duly applauded. “The perfect recall of silicon memory,” Wired’s Clive Thompson has written, “can be an enormous boon to thinking.” But that boon comes at a price. As the media theorist Marshall McLuhan pointed out in the 1960s, media are not just passive channels of information. They supply the stuff of thought, but they also shape the process of thought. And what the Net seems to be doing is chipping away my capacity for concentration and contemplation. My mind now expects to take in information the way the Net distributes it: in a swiftly moving stream of particles. Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.

I am not sure his premise that Google makes us stupid is exactly correct. In fact, I would argue that because of a search engine’s ability to help us quickly find the information we need, it’s actually making us smarter, to a degree. But what Google seems to be doing to me — and perhaps many others — is making our minds lazy. Many times, I may be told something without really concentrating on what is being said, knowing full well that as long as I get the bullet points straight, I can always go back and look up the info.

At first I wanted to chalk some of these memory lapses up to getting older. It’s just part of aging, right? But the more I read about aging, the more I realize that some of this is happening because we are not exercising our brains as much as we should be. More and more often, we’re relying on Google to be a fallback. We concentrate less on what’s in front of us, leaning on Google for anything we can’t remember.

A while back, my wife bought me a Nintendo handheld game system that included a game called Brain Age. It was my first foray into digital brain games, and I found that the more I used it, the more it helped me fine-tune my brain to be much more cognizant of what I was reading and observing. This game came out before everyone had smartphones, and now we have dozens of brain training tools such as my favorite, Lumosity, or Condura, another brain training app.

There are a lot of studies that talk about the Internet’s impact on memory. One that was highlighted in the New York Times in 2011 shared some specific research about this issue. In the article, Patricia Cohen wrote the following:

The widespread use of search engines and online databases has affected the way people remember information, researchers are reporting.

The scientists, led by Betsy Sparrow, an assistant professor of psychology at Columbia, wondered whether people were more likely to remember information that could be easily retrieved from a computer, just as students are more likely to recall facts they believe will be on a test.

Dr. Sparrow and her collaborators, Daniel M. Wegner of Harvard and Jenny Liu of the University of Wisconsin, Madison, staged four different memory experiments. In one, participants typed 40 bits of trivia — for example, “an ostrich’s eye is bigger than its brain” — into a computer. Half of the subjects believed the information would be saved in the computer; the other half believed the items they typed would be erased.

The subjects were significantly more likely to remember information if they thought they would not be able to find it later. “Participants did not make the effort to remember when they thought they could later look up the trivia statement they had read,” the authors write.

Whether our brains have become lazy or not, the Internet has clearly impacted the way we read and digest information, and as stated in the Times’ article, search engines have now become just a part of our memory processes. Search engines are very valuable, but if they become crutches that dull our thinking and make our brains lazy, then I believe people will need to use things like Lumosity and other brain-tuning games to help them stay sharp.

Information overload makes it impossible for many of us to keep up with the constant stream of information that’s available. Because many of us try to consume so much information, most of us are forced to mostly skim highlights and summaries just to keep up. However, I believe we can’t let search engines degrade our memory. At least in my case, I don’t want that to happen, so I’m using these brain games to help me deal with this challenge.


Unlocking Our Digital Sixth Sense with Mobile Technology


We all know about our five basic senses: hearing, seeing, smelling, tasting and touching. But if folks from the tech world have their way, we'll all soon have what's called a digital sixth sense.

The digital sixth sense is one in which our smartphones and tablets unleash various actions triggered by CPUs, wireless radios and sensors — magically delivering all types of info, feedback and content to enhance our digital lifestyles.

Many of the leading tech companies like Qualcomm, Intel, ARM, Apple, Samsung, Microsoft and dozens of others believe our mobile devices will serve as powerful tools that interact with our TVs, refrigerators, homes, automobiles and just about anything that can be made smart via these technologies. These enhancements will, in turn, be used to give us this digital sixth sense.

The term “digital sixth sense” has been tossed around in the industry for decades: I used it in a speech to 500 tech execs in 1995 at the Agenda conference when I described my vision of a connected refrigerator. This connected refrigerator would scan our products as we put them in, and extract information from the refrigerator’s built-in database as they were used. Any time you were almost out of a product, it could be automatically put on a digital shopping list that was sent to an online grocery store for home delivery. I knew at the time that I was way ahead of the curve with this prediction, but I felt pretty sure that we were eventually going to move to a digitally connected world in the not too distant future. Little did I know that it would take almost 20 more years to even get this type of vision really moving forward.

Although the digital sixth sense idea has been around for a while, the leadership at Qualcomm has made it part of the company’s rallying cry and has been working very hard to create the CPUs, wireless radios and sensors to actually make this happen. The company has been creating complementary products such as AllJoyn to help enable device-to-device communication in the home, using the smartphone as a hub to send and receive data between devices as part of a digital sixth sense vision.

My connected refrigerator vision in 1995 made the fridge a digital island unto itself. However, in the connected world envisioned by many companies today, all devices should be able to talk, interact and, through various means, communicate with each other and with our mobile devices to deliver this digital sixth sense experience. A good example: when a person walks into their home, their smartphone would “sense” all the other connected devices, and those devices would know that this smartphone has entered the home. This would trigger the lights to turn on, the thermostat to move to the optimal temperature, the house alarm to turn off, and the homeowner’s favorite music or TV show to start playing.
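Wired together, that walk-in-the-door scenario is essentially event-driven automation: a hub notices a known device arriving and fires a list of registered actions. The sketch below is entirely hypothetical (it is not how AllJoyn or any real home-automation API works), and the class and action names are mine:

```python
class HomeHub:
    """Hypothetical sketch of presence-triggered automation: when a known
    device is sensed entering the home, every action registered for that
    device runs in order."""

    def __init__(self):
        self.triggers = {}   # device id -> list of actions to run on arrival
        self.log = []        # record of what fired, for inspection

    def on_arrival(self, device_id, action):
        """Register an action to run when this device enters the home."""
        self.triggers.setdefault(device_id, []).append(action)

    def device_entered(self, device_id):
        """Simulate the hub sensing a device; unknown devices fire nothing."""
        for action in self.triggers.get(device_id, []):
            self.log.append(action())

hub = HomeHub()
hub.on_arrival("toms-phone", lambda: "lights on")
hub.on_arrival("toms-phone", lambda: "thermostat set to 70F")
hub.on_arrival("toms-phone", lambda: "alarm disarmed")

hub.device_entered("toms-phone")
print(hub.log)  # ['lights on', 'thermostat set to 70F', 'alarm disarmed']
```

The hard part in practice is not this dispatch logic but getting devices from different vendors to announce themselves over a common protocol, which is exactly the standards problem discussed below.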

Technically, the digital sixth sense fits under the broader theme of the Internet of Things, which basically says that anything that can utilize a CPU, wireless radio or sensors could be made “smart,” connecting either to similar things or to the cloud to deliver all types of digital functionality.

I recently spoke with Intel CEO Brian Krzanich at Intel headquarters about his vision of the Internet of Things (IOT) and the idea that CPUs, wireless radios and sensors are an important part of any digital sixth sense vision. He told me that “creating technologies for mobile devices and products for IOT are a top priority for Intel.” Indeed, in various sessions at a recent Intel analyst day, we were shown how Intel plans to be very aggressive in creating low-power processors and integrating wireless radios into new system-on-a-chip (SOC) designs, working with its customers to make all types of mobile products that help connect one’s life to the digital world in new ways.

Of course, for any real connected world or digital sixth sense vision to become reality, there needs to be much more than CPUs, wireless radios and sensors. The industry needs to work together more closely to create next-generation wireless communications standards so that connected devices from Apple or Samsung work well with the appliances in your home, the system in your car and the devices in your office. At the moment, many appliances speak only to their own kind or directly to the cloud — not to other devices in the home or to our mobile devices. In a perfect digital sixth sense world, all devices work together via industry-standard protocols so that no appliance or connected device is kept out of the digital discussion.

We will also need to see the software and services community get behind this idea, creating applications and services to tie devices together at the software level so that they interact with each other properly. Software and services are really key to making this happen.

So, how close are we to actually seeing this digital sixth sense vision become a reality? Qualcomm and Intel are in a strong position to create the kinds of technologies that drive this concept forward at the hardware level. At the moment, Qualcomm has the greatest amount of intellectual property in the ARM processor camp to enable its vision of a digital sixth sense with its own clients. However, I would never count Intel out of being able to deliver similar hardware to its clients, enabling it to eventually deliver an x86 version of its digital sixth sense vision.

Apple, Microsoft and Samsung (by way of Google) are the big players that could try to keep any digital sixth sense vision within their own software and hardware camps — a move that would keep any cross-platform, cross-device ideas from ever realizing their full potential. I just hope that all of the folks working on this important IOT and digital sixth sense vision find a way to make all of their devices and software work together seamlessly.

If they do, the realization of a broad, interconnected, digital sixth sense-enabled world could be only about three to five years away. If not, it could take another six to eight years to flesh this out before mobile technology truly unlocks our digital sixth sense.


Why TED Matters

Edward Snowden is interviewed by TED Curator Chris Anderson (L) via a BEAM remote presence system during the 2014 TED conference March 18, 2014 in Vancouver, Canada. Steven Rosenbaum / Getty Images

TED has become a launching point for some interesting soul searching, creating within me a desire to learn more about things that matter outside of my own world.

Many folks are familiar with TED talks. TED stands for "Technology, Entertainment, Design" and is a conference series that was the brainchild of Richard Saul Wurman, who began TED in 1984. In 2001, TED was acquired by Chris Anderson's nonprofit Sapling Foundation; Anderson oversees all of the TED conferences today and acts as the curator for the thousands of TED talks that can be viewed at TED.com.

I recently attended a special TED@Intel event, in the process becoming more aware of TED and its mission, goals and unique format. Speakers at TED events have a maximum of 18 minutes to share their messages, which are delivered in highly polished, succinct speeches. The mission is to inspire, to challenge, to be thought-provoking and, in some cases, to evoke awe and wonder.

So when the opportunity to request an invite to attend this year's TED conference — which happened to be its 30th anniversary — came across my desk, I jumped at it. In the past, TED has accepted 1,500 people for the main conference, but this year, held in Vancouver, B.C., it reduced the number of attendees to 1,200. That meant that all attendees had to be pre-screened and accepted. I passed muster and was one of the 1,200 invited to Vancouver last week.

One thing that became clear to me while attending TED was that to really get the most out of the big TED conference, an attendee needs a pretty good working knowledge of math, science, medicine, architecture, economics, geography, education, literature, law, history, technology and politics. This pretty much explains why almost all who attend the conference are highly educated. Most are world travelers who have seen the world — both the good and the bad — in person. They also need friendly personalities and an openness to networking, which is a big part of any TED event.

The conference itself was expensive, and almost all who went to TED were relatively well off. This has led some to suggest that TED is an elitist event, but Anderson contended it is probably the least elitist conference that exists, since every TED talk is posted online for free. Those paying to attend the conference make it possible for the folks from TED to deliver these talks online at no cost.

My good friend Lise Buyer, a principal at Class V Group, suggested that the folks attending TED might be elite in the same way Navy SEALs are elite because of their disciplined training. This analogy became evident as I talked with dozens of people at TED: pretty much everyone there was what I would call an overachiever.

Many were doctors, lawyers, physicists, scientists, authors, movie stars, educators or captains of industry, and while each was at the top of his or her field, they all had other important things they wanted to do with their lives. These included working on world problems, climate change issues, philanthropy, social injustice, education and governmental reform, efficiently feeding the planet, and diagnosing infectious diseases and providing vaccines.

What was most interesting to me was that as I talked to these people about these big issues, I realized that all of them had the money and influence to actually force change in the world, and each felt passionate enough to support one of these big issues. This is one of the big reasons TED matters. Through these TED talks, audiences are exposed to speakers who are doing cutting-edge work across numerous causes. The speakers often demonstrate constructive ways that people can help with these causes on a personal level.

For many speakers, TED matters a lot. Salman Khan — founder of Khan Academy, whose goal is to bring free, world-class online education to everyone in the world — told us that before his TED speech, his site had drawn only 6.9 million students. After TED, interest and demand in the courses grew like wildfire: Today, more than 140 million students take these courses online, and the site is adding 10 million new students each month. Speaker after speaker in the afternoon TED All-Star sessions — four-minute speeches from past TED speakers — told us how their lives and causes were impacted dramatically for the better after their TED speeches.

Perhaps the biggest reason TED matters is the actual impact it has had on millions of lives — especially over the past eight years since TED talks have been posted online for free. TED organizers said they’ve received thousands of letters and emails from people telling them how one or more TED talks have impacted their lives for the better. While the attendees of last week’s TED conference got to see all of the speakers in person, all of these speeches will find their way onto TED.com over the next few months.

In fact, two TED speeches of major importance to the world are already online.

One from NSA whistleblower Edward Snowden:

And a rebuttal to Snowden’s talk from Richard Ledgett, Deputy Director of the NSA:

Another one that really challenged my thinking was from a supermodel named Geena Rocero, who gave an impassioned talk about equality for transgender individuals. As a techie, the talk that will have the most serious impact on my work and thinking came from Bran Ferren, co-founder of Applied Minds. Anyone interested in the future impact of technology on our cities needs to read how he sees the driverless car being a catalyst for major changes.

I also loved Nicholas Negroponte’s speech that chronicled his many talks since the first TED conference, showing how many of his predictions turned out to be right on the money. In his session he made one rather startling future prediction. “My prediction is that we are going to ingest information,” said Negroponte. “We’re going to swallow a pill and know English and swallow a pill and know Shakespeare. It will go through the bloodstream and it will know when it’s in the brain and, in the right places, it deposits the information.”

Here is a list of all the speakers that were at last week’s TED conference. It’s worth bookmarking this page so you can look for these talks when they are posted over the next few months.

TED also matters for the winner of the $1 million TED prize given out each year. This year’s prize winner was Charmian Gooch. According to the entry on TED’s blog:

As head of Global Witness, a UK-based non-profit that has been campaigning for transparency for the last 20 years, Gooch is a long-time champion for human rights. In recent years, she has focused on uncovering the owners of anonymous companies structured that way in order to hide the identity of corrupt politicians and businessmen who use them to loot resource-rich developing countries and move the money through banks around the world. And it isn’t just the corrupt who use these companies to launder money—arms traffickers, drug smugglers, tax evaders and even terrorists use anonymous companies to facilitate their crimes.

As for myself, this year’s TED conference mattered to me in many ways. I live and breathe technology, and because I spend so much time in this discipline, I don’t often have a chance to learn more about the other things in this world that should matter to me beyond technology. My personal passion — one that I have been involved with since 1995 — has been championing the role of the Internet and computers in education, applying it to the learning process in many parts of the world. But this year’s TED really challenged me to re-think my view of the world and how I could make a difference even outside of my field of choice. For me, TED has become a launching point for some interesting soul searching, creating within me a desire to learn more about things that matter outside of my own world.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.
