TIME Big Picture

San Francisco 49ers Go Long on STEM Education at Levi’s Stadium

In 2010, when the San Francisco 49ers’ brain trust was drawing up the plans for what is now Levi’s Stadium, they went to one of the tallest buildings in the area and looked out over Silicon Valley.

According to Joanne Pasternack, director of community relations and the 49ers Foundation, these executives could see Google, Intel, Apple, HP, Facebook and many of the leading tech companies in the world laid out right in front of them.

It was at that point that they made the commitment to somehow use the new stadium to help create tech leaders of tomorrow. As one of the 49ers execs told me recently, they wanted to “help develop the people who will someday engineer and create greater features for Levi’s Stadium and develop innovative technologies that can impact the planet in the future.”

Educational Roots

The 49ers have had a long history of supporting education. “Our family has always been interested in education,” said Dr. John York, co-chairman of the San Francisco 49ers. “My father-in-law, Ed DeBartolo, Sr., always felt that if you could give people an education, they can make a way for themselves and their lives. And the 49ers Foundation’s mission has been to keep kids safe, on track and in school.”

“My mother was a school teacher, my father was the son of Italian immigrants,” said Denise DeBartolo York, co-chairman of the San Francisco 49ers. “They always thought that education could level the playing field with at-risk students that were disadvantaged. Once you enable them to get an education, it’s an even playing field.” Mrs. York also told me that she and her husband, Dr. York, have contributed significantly to various underprivileged children’s causes and Title I school initiatives, as well as programs for at-risk kids.

The 49ers organization’s philanthropic contributions — much of it focused on education — total at least $3.3 million per year. For years, the organization has supported what is called the 49ers Academy in East Palo Alto, CA. According to the academy’s website:

The San Francisco 49ers Academy was established through a partnership with Communities in Schools (CIS) in 1996. CIS started as a small grassroots movement led by Bill Milliken, one of the nation’s foremost pioneers in the movement to help young people graduate from high school and go onto rewarding careers. The 49ers Academy is a unique partnership – a public school, supported by a private non-profit agency. The 49ers are the major underwriter of this program.

Cultivating STEM

However, what the organization is doing in STEM education at Levi’s Stadium itself is amazing. STEM stands for Science, Technology, Engineering and Math; the dedicated educational program aims to get kids interested in these disciplines and eventually guide them into related careers.

“On and off the field, talent alone will not lead to success,” said Dr. York. “The game changer for promising future leaders is to provide a stimulating environment where their natural talent and drive will be fed by motivating mentors, meaningful activities and academic enrichment. The 49ers STEM Leadership Institute’s vision is to be a leader in STEM education, preparing and inspiring talented learners to meet the challenges of the global society through innovation, collaboration and creative problem solving.”

Budding Brains

The 49ers STEM Leadership Institute will bring 20,000 students to Levi’s Stadium for daylong programs that tie sports and education together around the STEM focus. Each day during the school year, 60 kids from one of the various schools in the Bay Area are brought to Levi’s Stadium in one of the 49ers’ official team buses. They are then broken into three groups of 20 each to rotate through three distinct activities.

The first activity features a full tour of the stadium, focusing on the engineering involved with creating a stadium. It shows off the green aspects of the stadium, including a visit to the garden on the roof as well as a look at the solar panels and how they’re used to create energy. The tour also demonstrates how clean technology is used to irrigate the field in order to care for the grass and turf. The kids also get to see the visiting team’s locker room, the field and many of the public areas of the stadium.

The second activity takes place in the new 49ers Museum and includes lessons using various games and interactive screens. Students learn how engineering and math are used to create 49ers football equipment, and how physics is applied to things like passing, kicking and running. The day I was there, they also included a section on careers in math and science. By the way, a trip to the 49ers Museum is highly recommended. It’s one of the best sports museums in the U.S. They use Sony Xperia tablets and various technologies to really enhance the overall museum experience — and for those of us in the Bay Area, it evokes some great memories of five 49ers Super Bowl wins.

The third activity takes place in an actual high-tech classroom that’s built into the new 49ers Museum. This classroom has multiple screens as well as half a dozen touch-based video worktables created by Cortina Productions. They serve as interactive teaching tools that the students can use to do various projects.

49ers STEM
Students receive instructions from teacher Matt Van Dixon while sitting at interactive video tables made by Cortina Productions at the 49ers STEM Leadership Institute at Levi’s Stadium Terrell Lloyd / San Francisco 49ers

I was privileged to attend the inaugural class where they were studying the engineering principles of making a football. Using all of the materials needed to make a football, each group got to assemble a football from scratch, sew it up, inflate it and then test it in a special kicking area where the students could see how each ball performed based on how well they created it.

49ers STEM
Denise DeBartolo York helps students assemble a football at the 49ers STEM Leadership Institute at Levi’s Stadium Terrell Lloyd / San Francisco 49ers
49ers STEM
Students assemble a football at the 49ers STEM Leadership Institute at Levi’s Stadium Terrell Lloyd / San Francisco 49ers

Many of the 49ers star players become the students’ tutors and team captains via video at each workstation table, giving instructions and encouragement for each project.

The interactive lessons vary: One class might teach how a helmet is engineered. Another might be on the physics of throwing a ball, explaining how a physical object like a football deals with airflow, throwing mechanics and force, and how each impacts the direction and length of a throw. There are even lessons on engineering your plate, including nutrition facts and a fitness class that uses the 49ers’ training camp as an example.

The class on applied mathematics explains angular attack and game geometry as well as teaching about statistics, using the Super Bowl and its various Roman-numeral numbering schemes as part of the lesson plan. All lessons are designed to emphasize how math, science, technology and engineering are used in everything from building a stadium to creating sports equipment to the math and physics that go into playing the game of football.

The teacher of the class is Matt Van Dixon, the education program manager for the 49ers Museum. Matt is one of the most dynamic teachers I have ever observed; his teaching style grabs the kids from the beginning of each class. I was extremely impressed with how he developed the lesson plans to integrate the role of engineering and math into all of the sports examples. He and his team created various simulations to make the class interactive and highly entertaining. I asked a couple of kids who were in this inaugural class what they thought about the program and each gave it a huge thumbs up.

49ers STEM
Matt Van Dixon instructs students at the 49ers STEM Leadership Institute at Levi’s Stadium Terrell Lloyd / San Francisco 49ers

Branching Out

The 49ers STEM Leadership Institute has also been implemented at Cabrillo Middle School in Santa Clara, CA, just down the street from Levi’s Stadium. With the 49ers’ support and big help from the Chevron Corporation, which created the STEM labs at the school, 60 students from the Santa Clara Unified School District are selected each year for a six-year program designed to inspire and prepare students with high academic potential to pursue STEM majors at top-tier universities and become future leaders in their fields. In addition to enriched math and science instruction, students have regular access to the Chevron STEMZone, a tech lab equipped with a laser cutter, 3D printers and other fabrication tools.

Steve Woodhead, Chevron’s global social investment manager, told me that when the 49ers approached them to help with the STEM Institute, they were glad to be involved and worked hard to create the learning labs used in these special education programs.

Another important partner in this program is the Silicon Valley Education Foundation. SVEF’s charter is to be a resource and advocate for students and educators. They provide advocacy, programs and resources to help students reach their full potential in the critical areas of science, technology, engineering and math. According to Muhammed Chaudhry, president and CEO of SVEF, his non-profit group played an important role in advising the 49ers and Chevron on STEM studies and helped with the development of the curriculum used in the institute’s educational programs.

What the 49ers are doing is using sports — a subject that most kids understand and can relate to — and tying it to math, science, technology and engineering in a way that brings these disciplines to life, making learning these subjects fun and entertaining. Getting to see this program in action was truly enlightening. I saw how the 49ers’ STEM Leadership Institute could help create future tech leaders, the major goal of their vision and program from the start.

I hope that all of the folks in the sports industry school themselves on the 49ers’ pioneering STEM education program and how it takes full advantage of the role sports can play in teaching STEM-related disciplines.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Meet Levi’s Stadium, the Most High-Tech Sports Venue Yet

Levi's Stadium
A general view during a preseason game between the San Francisco 49ers and Denver Broncos at Levi's Stadium on August 17, 2014 in Santa Clara, California Ezra Shaw / Getty Images

Most people have heard of smartphones, smart cars and smart homes. Say hello to the smart stadium.

Set in the heart of Silicon Valley, Levi’s Stadium — home to the San Francisco 49ers — is now the most high-tech stadium anywhere in the world.

It’s in the center of the tech universe, of course, so it’s only natural that 49ers management decided to devote a significant sum of money to building high-tech infrastructure. The stadium will allow all 70,000+ fans to connect to Wi-Fi and 4G networks to take advantage of personalized services, making the event experience more enjoyable.

I had the privilege of attending the inaugural event at Levi’s Stadium, where the San Jose Earthquakes took on the Seattle Sounders in an MLS game. About 49,000 people attended that event, well below the stadium’s 70,000+ seat capacity, so the game served as a dry run to work out some of the kinks. I also attended the first NFL game to be played in the stadium: the Denver Broncos came to town to help the 49ers christen the stadium in a preseason game on Aug. 17. The first regular-season NFL game will be held there on Sept. 14 and will serve as the official grand opening of the stadium.

Turning Downtime Into Screen Time

What I discovered from these two experiences is that the 49ers’ stadium is indeed the most tech-advanced stadium in the world, using technology to make the fan experience much richer and more entertaining. Al Guido, the COO of the 49ers, told me that one challenge that’s been an issue in the NFL is that the live action in a football game amounts to only about 15 minutes. People want access to things like stats, replays and other media when live play isn’t taking place.

During that downtime, the 49ers organization wanted to deliver all types of new ways to enjoy the game, turning to technology to deliver it through a connected experience. According to Mr. Guido, “The 49ers wanted to transform the in-stadium fan experience and make it possible to see the action live but still have the similar features that a fan has at home while watching the game on TV.”

Cables, Routers and Bandwidth Aplenty

So how did the 49ers and their tech partners achieve the goal of enhancing the fan experience by harnessing technology for this purpose?

According to Dan Williams, the VP of technology for Levi’s Stadium, the team laid 400 miles of cabling, 70 miles of which are dedicated just to connecting the 1,200 distributed antenna systems that serve the Wi-Fi routers placed throughout the stadium, one for every 100 seats. Levi’s Stadium features a backbone of 40 gigabits per second of available bandwidth, easily scalable to accommodate event attendance. That’s 40 times more Internet bandwidth capacity than any known U.S. stadium, and four times the standard the NFL has mandated for its stadiums by 2015.
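To put those figures in rough perspective, here is a back-of-envelope calculation of my own using only the numbers cited above; it is an illustration, not an official network specification.

```python
# Rough math using the figures cited in this article (stadium capacity,
# backbone bandwidth, one Wi-Fi router per 100 seats). These are
# illustrative estimates, not official Levi's Stadium network specs.

seats = 70_000                    # stadium capacity cited above
backbone_gbps = 40                # available backbone bandwidth
seats_per_access_point = 100      # one Wi-Fi router per 100 seats

access_points = seats / seats_per_access_point
mbps_per_fan = backbone_gbps * 1_000 / seats   # convert Gbps to Mbps

print(f"Approximate Wi-Fi routers needed: {access_points:.0f}")
print(f"Backbone bandwidth per fan if all 70,000 connect at once: {mbps_per_fan:.2f} Mbps")
```

Even in the worst case of every single fan pulling data at the same moment, the backbone leaves each person with more than half a megabit per second, which helps explain why the network is described as easily scalable.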

Levi's Stadium Router
Access points are spread throughout the stadium every 100 seats, serving up wireless Internet service to fans during the games Ben Bajarin for TIME
Levi's Stadium Repeater
Repeaters placed throughout Levi’s Stadium pass Internet service along from section to section Ben Bajarin for TIME

The stadium also has about 1,700 high-tech beacons. Using the latest version of the Bluetooth Low Energy standard, these beacons can be used to give people pinpoint directions to their seats as well as to any other place in the stadium. They can also be used to send them alerts about specials from concession stands and other promotions from time to time.
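Indoor positioning with Bluetooth Low Energy beacons generally works by estimating how far a phone is from several nearby beacons based on received signal strength. The sketch below shows one common approximation, the log-distance path-loss model; it is a generic illustration, not a description of the system actually deployed at Levi’s Stadium, and the calibration numbers are assumed values.

```python
def estimate_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exponent=2.0):
    """Estimate distance (in meters) to a BLE beacon using the common
    log-distance path-loss model.

    rssi_dbm: signal strength the phone measures.
    tx_power_dbm: calibrated signal strength at 1 meter (assumed value).
    path_loss_exponent: ~2.0 in open space, higher in crowded venues.
    Real systems combine many beacons, filtering and venue-specific
    calibration; this is only the basic idea.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# A weaker signal implies the beacon is farther away.
for rssi in (-59, -70, -80):
    print(f"RSSI {rssi} dBm -> roughly {estimate_distance_m(rssi):.1f} m")
```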

Tech Partnerships

One of the companies that contributed to the overall strategy and execution of some of the stadium’s high-tech features is Sony. Sony’s technology is at the center of the stadium’s control room, which manages all of the video for the more than 2,000 Sony TVs that have been placed around the venue, as well as the 70 4K TVs found in most of the suites and the two giant LED displays, one in each end zone.

When I asked Mike Fasulo, the president and COO of Sony Electronics, about his company’s involvement in the new Levi’s Stadium, he told me, “Our partnership with the San Francisco 49ers and the new Levi’s Stadium goes well beyond technology and products. This is truly a one-of-a-kind fan experience, with the world’s greatest showcase of 4K technology from the best of Sony’s professional and consumer products. For every event, every fan will be immersed in the pinnacle of entertainment and technology to enhance their experience.”

Other major sponsors from the tech world include Intel, SAP, Yahoo and Brocade.

An App to Tie It All Together

There’s also a Levi’s Stadium smartphone and tablet app, which offers the following features:

  • The app can guide people to the parking lot entrance closest to their seats, and then once inside, guide them to their actual seats.
  • Fans can watch up to four replays at a time during the game, seeing the exact replays shown by the studio as if they were watching at home on their TV. A fan can actually watch the game live on this app as well. They can also get stats and other info related to the game via this app.
  • It can guide fans to the closest bathroom with the shortest lines, which I predict will become the most used feature at any game.
  • Fans can connect either by Wi-Fi or to one of the 4G networks from the major carriers. Each of the big telecom networks has expanded its antenna service to enhance its customers’ wireless connections within the stadium.
  • Fans can order food and drink from any seat in the stadium and have it delivered directly to their seats. People also have the option of ordering food from their seats and going to an express line at the concession stands to pick it up in person.

The painstaking attention to tech detail that the 49ers and its partners have integrated into Levi’s Stadium is sure to be the envy of NFL stadiums throughout the U.S. For the time being, it’s the gold standard in high-tech stadiums and one that’s sure to be copied by many sports facilities around the world.

The Valley Advantage

However, I suspect that by being in the heart of Silicon Valley, this stadium may keep the lead in high-tech wizardry for some time. Keep in mind that the tech companies partnered with the 49ers on Levi’s Stadium because it also provided them a showcase for their technology. As Sony’s Fasulo stated above, it provided the company with a major showcase for its 4K professional and consumer products. Intel loves the fact that all of the servers that are used to power the networks show off the power of Intel processors, and Brocade’s networking technology is showcased as a world-class solution.

Silicon Valley is also the center of tech innovation. As people in the industry continue to create new technologies that can be used to enhance the sports experience, where do you think they will take it first? Since the 49ers have already shown a commitment to using technology for delivering the ultimate in-stadium fan experience, the organization will most likely be open to all sorts of new technology to help it deliver an even greater experience in the future. Think of this symbiotic relationship between Silicon Valley’s tech companies and the 49ers as home field advantage for both.

It’s probably not a stretch to say that the pioneering efforts of the 49ers to make Levi’s Stadium a truly smart stadium will force other NFL stadiums to follow the team’s lead, striving to make all of their stadiums smarter. It will also serve as a potential blueprint for other sports stadiums around the world. Being in Silicon Valley does have its advantages, though: With the kinds of tech sponsors and partners that are in its back yard, I suspect that Levi’s Stadium will continue to get smarter and smarter.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Intel Promises Faster, Lighter and Thinner with Its New Processor Line

Intel
Justin Sullivan / Getty Images

Intel looks to prove that Moore’s law is alive and well almost half a century later.

In 1965, Intel co-founder Gordon Moore observed that the number of transistors per square inch on integrated circuits had doubled every year since their invention. The doubling of transistors and chip performance roughly every 12 to 18 months became known as Moore’s law, and it has guided innovation in computers and technology for almost five decades.
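As a simple worked example of what that doubling implies, the sketch below projects transistor counts forward assuming a doubling every 18 months, the upper end of the 12-to-18-month range described here. The starting count is an arbitrary round number chosen for illustration, not a specific Intel chip.

```python
# A toy illustration of Moore's law: transistor count doubling roughly
# every 18 months. The starting figure is an arbitrary round number,
# not a specific Intel product.

start_transistors = 1_000_000      # assumed starting point
doubling_period_years = 1.5        # "about every 12-18 months"

for years in range(0, 11, 2):
    count = start_transistors * 2 ** (years / doubling_period_years)
    print(f"After {years:2d} years: ~{count:,.0f} transistors")
```

Run for a decade, that simple rule turns one million transistors into roughly a hundred million, which is why even skeptics concede the law’s cumulative effect has been enormous.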

Over the years, however, Moore’s law has come under some heat, with detractors arguing that it will soon run out of steam, mostly on basic physics grounds. These detractors can’t see how more and more transistors can be crammed onto such tiny pieces of silicon, so they expect Moore’s law to peter out. The only problem is that the engineers at Intel scoff at these detractors and continue to drive Moore’s law forward year after year.

This is highly evident in the company’s newest processor line, code-named “Broadwell Y.” Broadwell Y uses a 14-nanometer manufacturing process and is poised to change the power and size of all types of mobile devices. It will be branded “Core M.”

Intel’s Recent Processor Technologies

Intel’s major journey to extend Moore’s law, especially to mobile computers, started in 2010 with the introduction of its Core i3, i5 and i7 line of processors. The first generation of some of these processors was codenamed “Westmere” and used Intel’s 32-nanometer manufacturing process to produce ultra-low-voltage processors for mobile devices.

The lower the voltage of a processor, the longer the battery life can be. However, while people want long battery life, they also want powerful processing and great graphics. By using a 32-nanometer manufacturing process and doubling the amount of transistors found in previous Intel processors, the company made it possible to deliver lighter and more powerful laptops with longer battery life.

The next year, Intel introduced its next 32-nanometer chips, code-named Sandy Bridge. These processors were even faster and more power-efficient than Westmere chips, with graphics integrated onto the chips themselves. These chips drove Intel’s “Drive to Thin” campaign, with Intel and its partners bringing out even thinner and lighter laptops.

In 2012, Intel moved to 22-nanometer processor manufacturing technology, introducing Ivy Bridge chips. The transistor count basically doubled again, giving us even faster processors with lower power draw and even thinner and lighter laptops. This chip also included integrated 3D graphics and support for DirectX 11, improving graphics on laptops and paving the way for laptops with modern touchscreens. In 2013, Intel, still using the 22-nanometer manufacturing process, introduced a chip code-named Haswell, which extended the battery life of mobile computers by 2X, delivered a 20X reduction in idle power and added very-low-latency idle states. This allowed for even thinner and lighter ultrabooks and the introduction of what Intel and its partners call two-in-ones.

Today: Broadwell Y

Now enter Broadwell Y chips and the Core M brand name. This will mark the next big leap in manufacturing process, using 14-nanometer technology. By using the 14-nanometer manufacturing process, Intel again basically doubles the amount of transistors on a chip, yet delivers a processor that runs only at about four to five watts and uses very low voltage. This again extends battery life further on these products and at the same time makes them thinner, lighter and more powerful.
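A rough way to see why the move from 22 to 14 nanometers “basically doubles” the transistor budget: if every feature shrinks proportionally, the area each transistor occupies scales with the square of the process number. The calculation below is an idealized approximation of my own; real density gains depend on the specific transistor and interconnect design and are somewhat lower.

```python
# Idealized scaling math: if every feature shrinks from 22nm to 14nm,
# each transistor takes up roughly (14/22)^2 of its former area, so
# roughly (22/14)^2 more transistors fit in the same space.
# Real-world density gains are lower than this ideal figure.

old_node_nm = 22
new_node_nm = 14

density_gain = (old_node_nm / new_node_nm) ** 2
print(f"Ideal density improvement: about {density_gain:.1f}x")   # ~2.5x
```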

For our geekier readers, Broadwell delivers the following:

  • 14-nanometer, second-generation Tri-Gate transistors
  • Thermal reduction that enables nine-millimeter-and-thinner fanless designs
  • System-optimized dynamic power and thermal management
  • Reduction in system-on-a-chip idle power and increased dynamic operating range
  • Next-generation graphics, media and display features
  • A lower-power chipset, voice features and faster storage

This means that hardware makers can create even more efficient devices using Intel’s newest x86 semiconductor designs. Over the course of this push to extend Moore’s law to mobile, which started in 2010, these new processors have enabled Intel and its partners to bring the thickness of a laptop down from 26 millimeters to 7.2 millimeters, reduce heat dissipation by 4X and increase graphics performance by 7X. The performance of Intel’s core architecture has doubled while battery size has been halved, yet Intel promises that the battery life of laptops and tablets using these new 14-nanometer Broadwell Y processors will double.

The Not-Too-Distant Future

What’s amazing to me is that Intel has no intentions of slowing down the progress of Moore’s law anytime soon. I spoke with Intel chairman Andy Bryant recently and he assured me that Intel will not stop innovating with the 14-nanometer process. In fact, he said that engineers are already working on next-generation processors using 10-nanometer technologies, and have plans to create chips using seven- and even five-nanometer manufacturing processes over the next 10 years. It seems to me that given the accomplishments Intel has achieved with its 14-nanometer Broadwell Y chips, the company clearly has the capability of extending Moore’s law for at least another decade.

So why would anyone want a processor that packs in more transistors yet uses lower voltage to power them? The simple answer is to create laptops and tablets that are even thinner and lighter, last longer and still have enough power to handle any task we throw at them. However, a bigger reason is that while we’re used to navigating these devices via keyboards, trackpads and touchscreens, these new processors will eventually let companies create new devices that add greater 3D imaging, voice navigation, real-time translation, and new types of games and applications. In other words, the more power we have on these devices, the less we’re limited in what they can do for us.

Intel is shipping these new 14-nanometer Broadwell chips to its customers in volume now, and we should see the first generation of laptops and two-in-ones with these processors around the holidays. Imagine having a MacBook Air that is even thinner, lighter and faster than the ones out today. Or a two-in-one that’s ultra thin and ultra light, making today’s Surface Pro 3 seem large.

And all of them will have even better battery life than those on the market today. That’s what people can expect once Broadwell Y/Core M laptops and two-in-ones hit the market, showing that Moore’s law is alive and well almost half a century later.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Questions About a 5.5-inch iPhone

There’s already a bit of controversy surrounding the launch of Apple’s new iPhones this fall.

Most informed sources agree that Apple will introduce an iPhone 6 sporting a 4.7-inch screen, compared with the 4-inch screen on today’s iPhone 5s and 5c models. But several rumors coming from the supply chain suggest that Apple is also preparing to release a 5.5-inch version of its newest iPhone.

The possibility that Apple could be making a 5.5-inch iPhone leads to a few important questions.

Why make a giant iPhone?

The first: If Apple really wants the 4.7-inch model to be what we in the industry call the “hero” model — one that would drive the majority of iPhone sales going forward — why even make a 5.5-inch model at all?

While the industry will sell about a billion smartphones this year, fewer than 70 million of them will feature screens larger than five inches. The answer to the question, however, is actually pretty simple: While demand for smartphones larger than five inches is minimal in the U.S. and Europe, there is great interest in smartphones in the 5.5- to 5.7-inch range in many parts of Asia.

For example, well over 80% of smartphones sold in Korea have screens of five inches or larger. Such phones have also become big hits in China and other parts of Asia, where larger smartphones double as small tablets, driving demand in these regions for what are called “phablets.”

I suspect that if Apple is making a larger iPhone 6 in the 5.5-inch range, it will most likely be targeted at these Asian markets where demand for large smartphones is relatively strong. This is not to say that Apple wouldn’t offer a 5.5-inch iPhone in the U.S. — I believe there could be some interest in one of this size — but like most of my colleagues in the research world, I believe the lion’s share of those buying the new iPhone will want the 4.7-inch version, if that is indeed its size when it comes out.

Would you buy it?

The second question: If Apple does bring a 5.5-inch iPhone 6 to the U.S. market, would you buy one?

For the last month or so, I have been carrying three smartphones of various screen sizes with me all day long, and I have learned a lot about my personal preferences. In my front pocket is an iPhone 5, which has a four-inch screen. In my back pockets are a Galaxy Note 3, which has a 5.7-inch screen, and the new Amazon Fire, which sports a 4.7-inch screen — the same size that is purported to be on the new iPhone 6 when it comes to market.

Here are my observations. Keep in mind they are personal observations, but I suspect that my preferences are pretty close to what the majority of the market may prefer when it comes to the screen sizes in a larger smartphone.

I like to keep my primary smartphone with me all of the time, so my iPhone 5 is in my front pocket. The screen size is very important in this case and, at four inches, it easily fits in my right-front pants pocket and is easy to access as I need it. The other thing that is important about the four-inch screen is that I can operate it with one hand. From a design point, one-handed operation has been at the heart of all iPhones to date, as Steve Jobs was adamant that people wanted to be able to use their phones with one hand. So the idea of possibly moving up to a new iPhone with a 4.7-inch screen has intrigued me, as I wondered if a smartphone with this size screen would fit in my pocket and still be usable with one hand.

So when I got to test the 4.7-inch Amazon Fire phone, I immediately put it in my front pocket. Thankfully, it fit well and continued to be just as easy to access as the smaller iPhone 5s with its four-inch screen. Also, while I had been skeptical that I could still use it with one hand since I have medium-sized hands, I found that I could still operate the Amazon Fire with one hand easily. The other thing about a 4.7-inch screen is that the text is larger; for my aging eyes, this is a welcome upgrade. However, on these two issues, the Galaxy Note 3, with its 5.7-inch screen, flunked both tests. This phablet-sized smartphone did not fit in a front pocket, nor could I use it for one-handed operation.

That led me to wonder if a Samsung Galaxy S5 smartphone, with its five-inch screen, would work in these same scenarios. So I took a Galaxy S5 that I have, put it into my front pocket and tried to use it with one hand. To my surprise, it also worked well. But I had another smartphone with a 5.2-inch screen and, amazingly, that failed both tests. At least for me, a smartphone up to five inches did fit in my pocket and allowed me to use it one-handed, but any screen larger than that was a bust.

I also did this test with some of the women in our office. We have a very casual workplace and most wear jeans to work, so I had them try the 4.7-inch Amazon Fire. They were also surprised that it fit O.K. in their front pockets and could still be operated with one hand. However, as was the case for me, a screen larger than five inches did not fit in their pockets and was impossible for any of them to use with one hand. These women did point out, though, that most women are less likely to carry a smartphone in their pockets, since many keep them in a purse or handbag. That being the case, at least for the women in our office, a smartphone with a 5.5-inch screen was acceptable, although one person said she would prefer the smaller 4.7-inch smartphone if push came to shove.

Ultimately, it probably comes down to personal preference, yet I suspect that an iPhone with a 4.7-inch screen would take the lion’s share of Apple’s iPhone sales if this is indeed the size of the company’s new iPhone.

What about tablets?

But a 5.5-inch smartphone raises a third question that, at the moment, has stymied many of us researchers: Would a 5.5- or 6-inch smartphone eat into the demand for a small tablet?

I find that in my case, even though I do use the 5.7-inch Galaxy Note 3 often for reading books while out and about or while standing in line, my iPad Mini is still my go-to tablet due to its size. I also have a 9.7-inch iPad Air with a Bluetooth keyboard, but I almost exclusively use that tablet for productivity and less for any form of real data consumption.

Some researchers have suggested that, especially in parts of the world where larger smartphones or “phablets” are taking off, this has really hurt the demand for smaller tablets — and that’s partially why demand for tablets has been soft in the last two quarters. Unfortunately, the data is still inconclusive on this, but my gut says that “phablets” are at least having some impact on demand for tablets in many regions of the world.

With the expected launch of Apple’s new larger-screen iPhones just around the corner, those planning to buy a new iPhone might want to keep my experience in mind. There’s a very big difference between how a person uses smartphones that are less than five inches and smartphones that have larger screens. For those who keep them in their pockets and/or want to use them with one hand, they have only one real choice. For them, a smartphone smaller than five inches is their best bet.

But for those who don’t keep their smartphones in their pockets, the virtue of a larger screen is that it delivers much more viewing real estate. Consequently, it’s much easier to use for reading books and web pages and for other tasks where a large screen can deliver a real benefit. The good news is that if these Apple rumors are true, people will have better options coming from Apple. For the first time in the iPhone’s history, Apple might give users multiple screen sizes to choose from.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Understanding Apple’s ‘Continuity’ Strategy

Apple Continuity
Getty Images

Imagine all your screens working together harmoniously.

For years, I have been writing about the many screens in our lives. We have at least three primary screens we use almost on a daily basis: a TV, a PC (laptop or tablet) and a smartphone.

And lately, more screens have been showing up in our cars, appliances and wearable devices. However, even when it comes to major companies’ operating systems, too often the screens’ user interfaces and data are different on each device.

For example, the Mac’s user interface is different than the user interface on Apple’s iOS devices. And Google’s Android user interface on its tablets and smartphones is different than what’s found on the company’s Chromebooks. Same goes for these companies’ TV products. Also, some of your data is stored locally, so it’s not shared with or available on any other device you own.

At Apple’s recent Worldwide Developers Conference in San Francisco, the company introduced a concept it calls “continuity.” What this basically means is that in the future, the new Mac operating system, called Yosemite, will look and feel much more like an iOS-based device. In fact, if the continuity theme plays out as I expect it will, Apple will eventually make all of its products — including Apple TV and Apple CarPlay and any wearable devices — have the same look and feel, making it very easy to go from one device to another seamlessly. Also, in this continuity idea, everything would be in sync. That means if you change something on one device, it would be changed and updated on any other Apple device you had tied to the company’s ecosystem of apps and services.
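To make the “always in sync” idea concrete, here is a toy sketch of one simple sync policy, last write wins, applied to a note edited on two devices. It is purely illustrative: it is not how Apple’s continuity or iCloud sync actually works, and the device names and fields are made up.

```python
# A toy "last write wins" sync example. Purely illustrative -- this is
# not Apple's actual sync mechanism; the names and fields are made up.

def sync(records):
    """Given copies of the same note as seen by several devices, keep the
    most recently edited copy so every device ends up with identical data."""
    return max(records, key=lambda r: r["edited_at"])

note_on_mac    = {"text": "Buy milk",            "edited_at": 1000}
note_on_iphone = {"text": "Buy milk and coffee", "edited_at": 1042}

winner = sync([note_on_mac, note_on_iphone])
print(winner["text"])   # every device converges on "Buy milk and coffee"
```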

Over at Re/code, my good friend Walt Mossberg did a great piece called “How the PC Is Merging With the Smartphone.” In it, he talks about Apple’s continuity approach to make the PC act, look like and work like an iPhone or iPad. He also mentions how Google is doing something similar with Chromebooks and Android.

For many years, I have thought that in an ideal digital world, people would ultimately have many screens as part of their lifestyles. In that vision, I also had this idea that all of these screens would be connected, work together seamlessly and, perhaps more importantly, would always be in sync with one another. The other part of this vision is that the user interface on each of these devices would be the same. I have always felt that people would be more likely to use new devices if each device worked the same as any other device they already had.

In a sense, I think Apple’s continuity strategy pretty much maps to this vision I have written about for two decades. Now, lest you think I am a serious visionary when it comes to these types of connected ecosystems, keep in mind that this vision came out of my own need for something like this. For most of my career, I really only had to deal with one computing screen — that being the one on a personal computer.

However, my digital life became more complicated when I got my first feature phone. It, too, had apps on it, albeit very limited ones. But the operating system and user interface on my feature phone were completely different than the ones on my PC. I had to learn how to use it from scratch. Then, as early as 1990, I started to use tablets. Again, because of the form factors and designs, the operating systems and user interfaces on my first three or four tablets were all different. I had another set of learning curves to contend with before I could use them with any sense of ease. Also, all of the data on these devices was local and none of these devices talked to each other.

What I wanted was for all of my devices to work together seamlessly, talk to each other, have the same operating system and user interface, and to always be in sync. Interestingly, we have had the technology to deliver on this vision for over five years, but only now have the big companies started to really move us in this direction. If Apple’s overall continuity strategy is fully realized, it would mean that every one of my Apple devices will look and act alike, talk to each other and always be in sync. If I get a new device that is part of Apple’s portfolio, I would have no new learning curve.

For consumers, this would be a big deal. First, if you learned the user interface on one device, it would be the same on all of your devices. Second, the apps and data would all be the same or extremely similar, and available on most of the screens you would be using. The exception would be wearables. These screens bring limitations, so any interface and operating system would be highly streamlined. However, even in this case, they would work very much like the other devices and, more importantly, would be connected to these devices either directly or through the cloud. And third, all of the data on all of the devices would be in sync and, at least in theory, would work together seamlessly.

Apple is not the only one driving us in this direction. Microsoft and Google are similar in that all of their respective devices will eventually look, feel and work in similar ways, tying directly into their cloud-driven ecosystems. The goal, of course, is to hook consumers into one particular ecosystem, making it hard to leave once you’re invested in the products that are tied to their respective apps and services. At the moment, it appears to me that Apple has the broader ability to deliver on this “continuity” concept since it owns the devices, processors, interfaces and services layer, making it easier to make all of its devices work together with a look and feel that’s similar across all of the company’s products.

Google would like to do the same, but there is still too much fragmentation in the Android world at the moment. But over time, I suspect it will achieve a similar level of device continuity. Microsoft’s concept would be the most challenging to deliver due to its various operating systems. And with the acquisition of Nokia, Microsoft adds Android to its product line, which has a completely different ecosystem tied to it. However, all three companies are working hard to deliver on this continuity vision, and as they succeed over time, it should make it easier for customers to better fit these companies’ devices into their digital lifestyles.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Where Wearable Health Gadgets Are Headed

fitbit
A person wearing a Fitbit fitness band types on a laptop Getty Images

Every once in a while, I’m shown a tech product and I can’t figure out why it was created. One great example of this was a two-handed mouse I was shown at a large R&D-based company many years ago.

I was asked to review it to see if they should bring it to market. After trying to use it and seeing the complicated things you had to do to make it work, I told them it would never succeed. However, the engineer behind it was convinced he had created the next great mouse and was determined to try to get it to market. Thankfully, the management at this company killed it, as it would have been a complete failure and provided no real value to any customer. Still, the technology was available to create it, and this engineer did it because he could.

In the world of tech, most successful products address serious needs that people have. This is very much the case behind the current movement to create all types of wearable devices designed to make people healthier.

Folks behind products like the Jawbone Up, Nike FuelBand, Fitbit and others have solid backgrounds in exercise and exercise science. They wanted to create stylish wearable products that could be used to monitor steps, count calories and track various other fitness metrics. Other products, such as ones from iHealth, which has created a digital blood pressure device and a blood glucose testing kit that are tied to smartphones, were designed by people close to the health industry who saw a need to create products that could utilize digital technology to power new health monitoring tools.

At a personal level, I’m pleased that these folks are utilizing key technologies like accelerometers, sensors, Bluetooth low-energy radios and new types of semiconductors to create products that aim to impact people’s health. Readers of this column may remember that two years ago I suffered a heart attack and had a triple bypass. As you can imagine, this provided a serious wake-up call about taking better care of myself. Since then, my Nike FuelBand has been my 24-hour wearable companion: I check its step-monitoring readout religiously to make sure I get in the 10,000 steps each day that my doctor has required of me as part of my recovery regimen.

While I would like to think that these tech folks are doing it for altruistic reasons, the bottom line is that there is a lot of money to be made in health-related wearables. The folks from IHS published a good report last year on the market for wearables, a market driven mostly by health-related apps.

Most researchers that track this market believe that the wearable health market will represent at least $2 billion in revenue worldwide by 2018. In many developed countries around the world, people are becoming much more health conscious. Reports seem to come out daily, talking about the good or bad effects some foods have on our lives. And more and more, we hear that we need to exercise to either maintain our health or to improve it.

So a combination of the right technology becoming available and an increased awareness of better health has created this groundswell of health-related wearable devices and digital monitoring tools designed to help people live healthier lives. But there is another major reason we are seeing more and more health-related wearables and digital monitoring products come to market now, and it is one of healthcare providers’ major initiatives: In simple terms, it’s cheaper to keep a person healthy than to cover their costs in the hospital when they’re sick.

Almost all of the major healthcare providers have created websites with all types of information about managing one’s health. These sites offer information and programs for cancer patients, diabetics and people with many other health issues, helping them better manage these diseases. Health insurers are also really getting behind the various digital monitoring tools and health wearables, viewing them as vital tools that can help their customers stay healthier and keep them out of the hospital as much as possible.

Interestingly, as I talk to many of the executives of these health-related wearable companies, many of them claim to be on a mission. Yes, they admit there is money to be made, but most I speak with are serious about giving people the technology to help them keep themselves healthy. In fact, in at least two cases, the executives I have talked to have special funds they personally set aside to donate to major health causes as part of their personal commitment to using technology to make people healthier.

While there is some chatter about the market for wearable technology not being a sustainable one, I suspect that it will stay on track to eventually become integrated into everyday objects such as watches, hats and even clothes, becoming part of a broader trend called “self-health monitoring.” This trend basically says that people will want more and more information about the number of calories they’ve burned, the number of steps they’ve taken, their pulse and other metrics. Thanks to these new technologies, this data would be available to them in a variety of ways.

Of course, not everyone may want to know these health-related data points, but the research shows that at least one-fourth of U.S. adults have these types of health-related wearable monitoring devices on their personal radars. The fact that this market is growing around 20% or more each year suggests that we could continue to see growth for at least another three years. As these devices become part of our wardrobes, they could eventually fade into the background while still providing health-related info that many people may need to stay motivated. This is the goal that the tech world has embraced wholeheartedly, providing more and better tools for this purpose.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Why Basic Coding Should Be a Mandatory Class in Junior High

kids computers
Getty Images

One of the roles our education system is supposed to play is to prepare kids to be responsible citizens, with the skills needed to be successful in adulthood. All of the various classes — starting in kindergarten, where they lay out the fundamentals of reading, writing, sharing and even early math — are designed to be a set of building blocks of knowledge. Each consecutive year introduces new blocks in kids’ education, designed to get them ready for life so that they’re capable of earning a living.

For some reason, all of the classes I took from about third grade forward are still burned into my mind. Even today, I can go back in time and remember how my fifth-grade teacher got me interested in math or how my seventh-grade teacher’s method of teaching Spanish crippled my ability to learn that language due to his “repetitive” teaching methods.

However, one class in seventh grade has become very important to me, as I use the skills I learned in that class every day of my life: That class was my typing class. I can still envision that class as if it were yesterday, with my seat in the middle of the first row, learning to touch-type on an IBM Selectric typewriter. I even remember the line I had to type over and over again as part of a test to determine how fast I typed: “Now is the time for all good men to come to the aid of their country.” I can still touch-type that sentence today in about five seconds. Back then, the goal was to touch type at about 90 words per minute.

While the typewriter is now a thing of the past, typing and keyboards remain highly relevant today. In most cases, they’re the main way most of us enter data into our computers. And understanding the QWERTY layout is important when using a touch keyboard or even when programming our set-top boxes or other devices that use a keyboard for input.

Now, one could argue that kids these days seem to intuitively know how to use technology. Even at an early age, they start touching screens and keyboards, quickly learning how to navigate all types of digital devices. So kids don’t really need to learn how to code, right? While that’s true to some extent, fundamentally understanding how these technologies work and how they can ultimately be customized for even greater functionality would enhance kids’ experiences with digital devices and could become much more important to them later in life.

Anyone who has taken an introductory programming class will tell you that, at the very least, it helped them understand basic programming logic, structure and design. Even those who did not go on to become software engineers say that learning the fundamentals of programming a computer at the coding level has helped shape how they think logically, has sharpened their common sense and, in many cases, has helped them get more out of the smartphones, tablets, computers and other devices that now populate their lives.
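For readers who have never seen code, the kind of basic logic such an introductory class teaches can be as simple as the few lines below, which step through a list and make a decision about each item. This is an example of my own, not drawn from any particular curriculum, and the scores are made-up sample data.

```python
# A first-lesson style example: loop through some data and make a
# simple decision about each item. The scores are made-up sample data.

scores = [72, 88, 95, 61, 84]

for score in scores:
    if score >= 80:
        print(f"{score}: great job")
    else:
        print(f"{score}: keep practicing")
```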

We live in a digital age in which technology plays a role in much of what we do every day. We use technology at the office, at school and at home. Digital devices are all around us. However, in many cases, we barely scratch the surface of what technology can do for us. We pretty much accept the fundamental role technology plays in our lives and mostly use the basic functionality of each of our digital devices.

Yet, when hardware and software designers create devices, they usually add a great deal of features and functions that most people barely use. That’s O.K. in a broad sense, since we “hire” our devices to handle things like phone calls, messaging, music and entertainment. But as technology has evolved, especially mobile technology, we are now holding in our hands real personal computers that can do much more than these fundamental functions. Even our TVs and appliances are becoming multipurpose devices designed to be more than meets the eye.

While most people will never get under the hood to try to change the code of an appliance or device they use, learning the fundamentals of the software code that runs our devices gives a person a greater understanding of how those devices work, and makes them more inclined to go beyond the devices’ basic functionality.

A coding class would also help them gain a greater understanding of how technology is designed and how software serves as the medium for triggering all of a device’s capabilities. This type of knowledge could be important in a future working environment where they’re called upon to use technology as part of their overall job.

It goes without saying, but understanding how technology works makes it much easier for a person to get the most out of it.

In an important article on GreatSchools.org, author Hank Pellissier includes a comment from a recognized authority on programming. Douglas Rushkoff, author of Program or Be Programmed and an evangelist for Codecademy, is one of the nation’s leading digital crusaders. He argues that our schools need to incorporate computer programming into the core curriculum or get left behind. “It’s time Americans begin treating computer code the way we do the alphabet or arithmetic,” he writes.

Mr. Rushkoff sees the need to teach coding in order to create more hardware and software engineers to meet the rising demands for skilled tech workers. I agree wholeheartedly with this, since the U.S. is far behind in having a robust technical workforce created within our own borders. We rely heavily on coders in China, India and other parts of the world to meet the high demand for programming skills. I also agree that coding is just as important as other basic learning skills, since technology is now an important part of all of our lives. Understanding coding would give our kids a foundation in understanding how technology works, serving them well even if they do not become professional programmers.

One of my passions has been to help bring technology into the education system: I have worked on the sidelines with the State of Hawaii to champion the role of personal computers in education for decades. It has been rewarding to see how computers have impacted the educational process throughout the U.S., with every school system in America now having some type of computer-aided learning program in use today.

But it’s time for schools to realize that technology is now a part of our lifestyle. Helping our kids understand how technology works at the ground level and how it can be used to its fullest potential needs to be a building block that’s added to the educational curriculum. At best, it could get kids interested in tech as a career. At the least, it could equip them to handle more and more technology-related devices that are now part of our lives.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

The Challenges of a Dick Tracy-like Watch-Phone

Samsung Gear 2
A Galaxy Gear 2 smartwatch sits on display at the Samsung Electronics Co. pavilion on day two of the Mobile World Congress in Barcelona, Spain, on Tuesday, Feb. 25, 2014. Simon Dawson--Bloomberg / Getty Images

I have been testing the Samsung Gear smartwatch for some time now and have actually become a fan of these types of watches. My first smartwatch was the Pebble, but its limited functionality drove me to try out the Samsung Gear since it gives me something that I really wanted in a smartwatch: email alerts and the ability to read my email on the smartwatch itself.

Like many people in the workplace, I get hundreds of emails a day, although very few demand immediate action. But given my type of business, if a client emails me, I like to respond as fast as possible. So these smartwatch alerts allow me to be highly responsive to client requests. Yes, sometimes they come during a meeting or while I am doing something where I can’t respond to messages immediately, but being aware of these requests as they come in is important to me and plays heavily into how I manage my workday.

Recently, word leaked that Samsung was working on adding a phone feature to a smartwatch, and it got me wondering whether this is a good idea or not. I grew up in the era of Dick Tracy and I have to admit that I thought his watch-phone was really cool — as a kid, I really wanted one. But as I look at this idea now, I really wonder if a watch-phone would work for me in the real world. More importantly, would consumers even want it? The idea of always lifting up my arm to speak into a watch and having everyone around me being able to hear what’s being said to me is just not appealing, even if it seems cool.

Most likely, such a smartwatch could be tied to a Bluetooth headset so a person could handle voice calls more discreetly, but a lot of people are uncomfortable having a headset in their ear all of the time and for many, it makes them look too much like a geek. I also suspect the user interface would be pretty clumsy, even if it was voice controlled.

The idea of adding a phone feature to a smartwatch comes under the heading that many in the industry call feature-creep. Simply put, engineers keep trying to add a bunch of features into small packages, and while sometimes it works, most of the time it does not. One good example is some of the features Samsung threw into its Galaxy S4 smartphone, especially the hover feature that the majority of people never used. Thankfully, the company took that out in the Galaxy S5 and seemed to learn the lesson that in some devices, less is more.

I have now used about seven smartwatches and each one I have used has tried to cram a lot into a very small package. These watch screens are 1.5” in most cases, and while the screens are sharp and easy to read, putting more features and more text into this small space most often does not work well at all. The good news is that with the Pebble watch, the Samsung Gear watch and others, most developers are creating simple apps that can work on a small screen and deliver what we call “snacking data” such as news alerts, message alerts and, in some cases, email headlines. Also, most of these watches so far are tied to smartphones, serving as extensions of the smartphones themselves.
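As a toy illustration of what “snacking data” means in practice, the snippet below condenses an email alert so it fits on a tiny watch screen. The character limit and the sample message are assumptions for the example, not values taken from any particular watch’s software.

```python
# Toy example: condense an email alert to fit a tiny smartwatch screen.
# The 40-character limit and the sample message are assumptions made
# for illustration, not values from any actual smartwatch software.

def watch_alert(sender, subject, limit=40):
    alert = f"{sender}: {subject}"
    # Truncate anything that won't fit and mark the cut with "..."
    return alert if len(alert) <= limit else alert[: limit - 3] + "..."

print(watch_alert("Client A", "Can you send the market forecast before Friday?"))
```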

However, I am starting to see a lot of work being done behind the scenes by companies trying to make the smartwatch a standalone device. Not being connected to a smartphone would essentially make it a PDA of sorts in its own right, with all of the data, information and apps delivered straight to the watch. These watches wouldn't be extensions of smartphones as they are today.

Although Samsung has not actually shared any details about its supposed smartwatch-phone, it would not surprise me if that’s the direction the company might take with this device. While Samsung would still want to sell a lot of standalone smartphones — and a smartwatch-phone would never supplant these — from an engineering standpoint, Samsung and others may want to give consumers the option of having their smartphones on their wrists instead of in their pockets.

But would Samsung and others be doing this simply because they can? Or because consumers really want it? Think of the role your smartphone plays in your life today. Could you dump a great 4” or 5” screen that delivers tons of apps and services and instead use only a smartwatch-phone? I know I could not. That’s why I’m quite happy with my smartwatches being extensions of my smartphones, working together harmoniously.

Sure, there will be some early adopters who take the plunge should a smartwatch-phone hit the market. But I am very doubtful these devices would ever be a hit with consumers. Rather, they would likely end up as engineering showcases for the companies that make them and, at least in my opinion, will never catch on with the broad consumer market.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Why the Maker Movement Is Important to America’s Future

I grew up in the age of Tinker Toys and Erector Sets. Both were meant to inspire me to be a maker instead of a consumer.

My first real tool was a wood-burning engraver with a cord so short it was almost impossible to use. When I started using it, I burned myself more than once and nearly started a fire in the house. How in the world they sold this to kids in those days is now a mystery to me.

I was in Silicon Valley in the late 1970s, and I started to get more interested in the Homebrew Computer Club and similar user groups where people could get together and talk about tech-related interests. This was how I first got interested in computers.

Along the way, the idea of creating technology got sidelined as I instead started to write about it, chronicling its history. This led me to eventually become a computer research analyst instead of an engineer. This was probably a good thing, since I loved to take things apart but had very little interest in putting them back together. And I would have been a lousy programmer or tech designer. But this did allow me to watch the birth of the tech industry close up, witnessing how it developed and has impacted our world over the last 35 years.

Fast forward to today, and I am very excited about the Maker Movement. The more I look into it, the more I believe that it’s very important to America’s future. It has the potential to turn more and more people into makers instead of just consumers, and I know from history that when you give makers the right tools and inspiration, they have the potential to change the world.

So what is the Maker Movement? I found Adweek’s definition to be right on the money:

The maker movement, as we know, is the umbrella term for independent inventors, designers and tinkerers. A convergence of computer hackers and traditional artisans, the niche is established enough to have its own magazine, Make, as well as hands-on Maker Faires that are catnip for DIYers who used to toil in solitude. Makers tap into an American admiration for self-reliance and combine that with open-source learning, contemporary design and powerful personal technology like 3-D printers. The creations, born in cluttered local workshops and bedroom offices, stir the imaginations of consumers numbed by generic, mass-produced, made-in–China merchandise.

Over the weekend, I had a chance to go to the granddaddy of Maker Faire events, held at the San Mateo County Event Center about 20 miles south of San Francisco. The folks behind the event call Maker Faire the “greatest show and tell on Earth.” Sponsored by Make magazine, the event this year drew well over 120,000 people to check out all that’s new in the world of making things, such as robots, drones and mini motherboards and processors that can be used to create all types of tech-related projects.

As I walked the many show floors and looked at the various exhibits, I found that the maker movement, which started out much like the Homebrew Computer Club of the past, is made up of makers who can be defined as anyone who makes things. While its roots are tech-related, there were people at the show teaching how to crochet and make jewelry, and there was even one area called Home Grown, where do-it-yourselfers showed how to pickle vegetables, can fruits and vegetables, and make jams and jellies. Another area focused on eco-sustainability, beekeeping, composting and growing your own food.

There are eight flagship Maker Faires, including the one in San Mateo held in mid-May and one in New York City, which will be held Sept. 20-21. Other Maker Faires and Mini Maker Faires happen all over the world, with major faires planned for Paris, Rome and Trondheim, Norway, during 2014. Other U.S. cities with major Maker Faires include Kansas City, Detroit and Atlanta. Over 280,000 people attended these faires around the world last year.

According to Atmel, a major backer of the Maker movement, there are approximately 135 million U.S. adults who are makers, and the overall market for 3D printing products and various maker services hit $2.2 billion in 2012. That number is expected to reach $6 billion by 2017 and $8.41 billion by 2020. According to USA Today, makers fuel business with some $29 billion poured into the world economy each year. For more feedback on the economics of the Maker Movement, check out Jeremiah Owyang’s “Maker Movement and 3D Printing Industry Stats.”

One of the people who really understands the Maker Movement is Zach Kaplan, the CEO of Inventables, which is an online hardware store for designers in the Maker Movement. I think of his site as a kind of Amazon for Makers.

I met Kaplan at the recent TED conference in Vancouver, where he told me about the history of the Maker Movement and its culture. He pointed out that this movement is quite important, saying, “It has the potential of giving anyone the tools they need to become makers and move them from passive users to active creators.” I caught up with him at last weekend’s Maker Faire, where he likened the Maker Movement at the moment to where we were with the Apple II back in 1979. He said that in those days, the computer clubs and tech meetings fueled interest in tech and got thousands interested in software programming, semiconductor design and creating tech-related products. Of course, this begat the PC industry and the tech world we live in today.

The Maker Movement has the potential to bring techies and non-techies alike into the world of being creators. Some will remain hobbyists, while many others could end up making great products and selling them online. In fact, Kaplan pointed out that Etsy has become an eBay-like vehicle for makers to sell their products to users around the world. Of course, eBay and Craigslist are also places for them to sell their created wares.

Inventables.com sells CNC mills, laser cutters and 3D printers, and people are using them to create all types of products for themselves or to sell. Interestingly, Kaplan told me that over 80% of his customers are women who pick up the tools and supplies to create all types of jewelry and other items that they sell on Etsy. He said the hot thing at the moment is to use tools bought from him to create custom-engraved bracelets and jewelry. In his booth, he had examples of custom glass frames and 3D-printed coffee carafes, and he was letting people use a $600 CNC mill called the Shapeoko to create engraved wood and metal bottle openers.

I also asked Kaplan why this is taking off now. He said, “The key driver is that the cost of the tools such as 3D printers, CNC mills and things like Arduino and Raspberry Pi motherboards and other core tech products have come down and are in reach of normal consumers.” You can also see how things like Make magazine, books, podcasts and YouTube videos for do-it-yourselfers have grown exponentially and are getting more and more people interested in being makers of some sort.

This movement has caught the attention of many major players in the tech and corporate worlds. At the San Mateo Maker Faire were companies like Intel, Nvidia, AMD, Autodesk, Oracle/Java, Ford, NASA, Atmel, Qualcomm, TI, 3D Robotics and many more that see this movement as important and want to support it. I was able to catch Intel CEO Brian Krzanich near his booth and asked him why Intel was at the Maker Faire. He said, “This is where innovation is occurring and Intel has a great interest in helping spur innovation.”

As someone who has seen firsthand what can happen when the right tools, inspiration and opportunity are available to people, I see the Maker Movement and these types of Maker Faires as important for fostering innovation. The result is that more and more people create products instead of only consuming them, and it’s my view that moving people from consumers to creators is critical to America’s future. At the very least, some of these folks will discover lifelong hobbies, but many of them could eventually use their tools and creativity to start businesses. And it would not surprise me if the next major inventor or tech leader turned out to be a product of the Maker Movement.

I do have one concern, though: As I walked the floors of the Maker Faire during the first day of the event, I did not see a single African American family in the crowds, and I saw only two Hispanic families with kids checking things out. I actually dedicated an hour to walking all over the grounds looking for people of minority descent during the time I was at the show. I would say the majority of the families there were white, although I also saw a lot of Asian and Indian families with their kids roaming the faire.

While most of the families I saw had boys with them, there were many young girls at the show, too. In fact, I took my 11-year-old granddaughter with me and she loved the Maker Faire. Perhaps there were a lot of African American and Hispanic families there on the second day, although I can’t be sure. The Maker Faire is a great show and is highly inclusive, and the Maker Movement itself wants everyone to participate. But the lack of folks from these two minority communities tells me that we in the industry and those in the Maker Movement need to figure out ways to get these groups interested in being makers, too. Without the participation of everyone, regardless of race, the Maker Movement may not reach its full potential, especially here in America.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.

TIME Big Picture

Scio Pocket Molecular Scanner Is a Google-like Device for Physical Objects

The handheld Scio scanner can detect the molecular makeup of certain objects Consumer Physics

A couple of weeks ago I had a fascinating video call with a gentleman named Dror Sharon, the CEO of a company called Consumer Physics. He showed me a product called Scio, which went up on Kickstarter last Tuesday: a handheld scanner that can scan physical objects and tell you about their chemical makeup.

“Smartphones give us instant answers to questions like where to have dinner, what movie to see, and how to get from point A to point B, but when it comes to learning about what we interact with on a daily basis, we’re left in the dark,” Mr. Sharon told me via Skype. “We designed Scio to empower explorers everywhere with new knowledge and to encourage them to join our mission of mapping the physical world.”

Consumer Physics launched a Kickstarter campaign to raise $200,000 for Scio (which is Latin for “to know”) on April 28, 2014. The company reached that goal in 20 hours and raised a total of $400,000 in 48 hours.

At first Scio will come with apps for analyzing food, medication and plants. You could, for instance, use it to refine the ingredients of your home-brewed beer or figure out if an Internet site’s cheap Viagra is fake. Later, the company will add the ability to check cosmetics, clothes, flora, soil, jewels, precious stones, leather, rubber, oils, plastics and even human tissue or bodily fluids.

Early prototypes of the Scio physical object scanner Consumer Physics

Mr. Sharon told me, “The spectrometer figures out what the object is based on an infrared light that reflects back to the scanner. Most objects have different absorption rates as they vibrate at different levels on the molecular scale. The app takes the data and compares it to a cloud-based database of objects in a distant data center. When it gets a match, it sends the results to the user’s smartphone.”
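
To make that flow a bit more concrete, here is a minimal Python sketch of the matching step as I understand it: a scanned reflectance spectrum is compared against a reference library, and the closest entry wins. The library entries, spectrum values and cosine-similarity measure are all my own illustrative assumptions, not Consumer Physics’ actual method; in the real product the lookup runs in the cloud against a far larger database, with the result pushed back to the phone.

```python
# A toy version of the scan-and-match idea: compare a scanned spectrum
# against a small reference library and return the closest entry.
# The materials and numbers below are invented for illustration.
import math

REFERENCE_LIBRARY = {
    "aspirin":   [0.82, 0.61, 0.34, 0.90, 0.12],
    "ibuprofen": [0.40, 0.75, 0.66, 0.20, 0.55],
    "cheddar":   [0.10, 0.22, 0.88, 0.45, 0.70],
}

def cosine_similarity(a, b):
    """How closely two spectra share the same shape (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_match(scanned_spectrum):
    """Return the library material whose spectrum is closest to the scan."""
    return max(REFERENCE_LIBRARY.items(),
               key=lambda item: cosine_similarity(scanned_spectrum, item[1]))[0]

print(best_match([0.80, 0.60, 0.30, 0.88, 0.15]))  # -> "aspirin"
```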

According to Mr. Sharon, “The food app tells you calories, fats, carbohydrates, and proteins, based on your own estimate of the weight of the food you’re about to eat. (With many food packages, you can get the weight from the label). The app could tell dieters exactly how many calories they’re about to consume, while fitness apps can tell them how many calories they’re burning. That helps people figure out exactly how much exercise they need to do in order to burn off the food they’re eating.”
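
The calorie estimate itself is simple enough arithmetic to sketch: take the per-gram amounts of fat, carbohydrate and protein that the scan reports, multiply by the standard calories per gram of each, and then by the weight the user enters. Here is a rough Python version of my reading of that description; the composition numbers are invented, and this is not the app’s actual code.

```python
# Rough calorie math: per-gram composition from the scan times user-entered weight.
# Standard energy densities: fat ~9 kcal/g, carbohydrate and protein ~4 kcal/g.
KCAL_PER_GRAM = {"fat": 9.0, "carbohydrate": 4.0, "protein": 4.0}

def estimate_calories(composition_per_gram, weight_grams):
    """composition_per_gram: grams of each macronutrient per gram of food
    (a hypothetical scan output)."""
    kcal_per_gram_of_food = sum(
        grams * KCAL_PER_GRAM[nutrient]
        for nutrient, grams in composition_per_gram.items()
    )
    return kcal_per_gram_of_food * weight_grams

# A 150 g serving that scans as 3% fat, 12% carbohydrate and 5% protein:
print(estimate_calories({"fat": 0.03, "carbohydrate": 0.12, "protein": 0.05}, 150))
# -> 142.5 kcal
```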

As I understand it, the food app can also gauge produce quality, ripeness and spoilage for foods like cheeses, fruits, vegetables, sauces, salad dressings, cooking oils and more. It also analyzes moisture levels in plants and tells users when to water them. Mr. Sharon suggested that one day you could even analyze your blood alcohol level, but Scio is not currently approved as a medical device.

What I find most interesting is that as users conduct more tests, the app gets better and better at correctly identifying objects. The more people use it, the richer the database of information will be, which will add to the precision levels of the Scio over time and, more importantly, expand what it can understand. In the demo I saw on an Android smartphone, a ring fills up with circles on your smartphone screen to deliver the proper info, and it takes a matter of seconds to recognize something. Scio has to be about 20 millimeters from an object before it can be used for scanning, and the scanner uses Bluetooth low energy (BLE) to connect with a smartphone, which in turn needs to be running either iOS 5 or Android 4.3 or higher.
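
Here is a small sketch, in the same toy spirit as above, of how that crowdsourced improvement could work: scans the user confirms are folded back into the reference entry, so later lookups draw on more examples. This is simply my interpretation of the idea, not Consumer Physics’ stated design.

```python
# Toy crowdsourcing loop: confirmed scans enrich the reference entry they matched.
# All data here is invented for illustration.
confirmed_scans = {"aspirin": [[0.82, 0.61, 0.34, 0.90, 0.12]]}

def record_confirmed_scan(material, spectrum):
    """Store a spectrum the user has confirmed as a correct match."""
    confirmed_scans.setdefault(material, []).append(spectrum)

def reference_spectrum(material):
    """Average all confirmed scans for a material, point by point."""
    scans = confirmed_scans[material]
    return [sum(values) / len(values) for values in zip(*scans)]

record_confirmed_scan("aspirin", [0.80, 0.63, 0.36, 0.88, 0.14])
print(reference_spectrum("aspirin"))  # the entry now reflects both scans
```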

He also showed me its ability to scan what looked like an unmarked white pill. Scio correctly identified the chemical makeup of the pill as aspirin and even showed that it was made by Bayer. These are the first categories of physical products Scio will target, but eventually it could identify the chemical makeup of just about any object. That is why he likened it to being “Google for physical objects.”

If you are a fan of police procedural TV shows like CSI or NCIS, you already know about things like mass spectrometers and other professional machines that analyze the chemical makeup of objects. These machines can be very large. Although there are some handheld versions available today, they’re all pretty expensive. Scio aims to do similar tasks with a device that can fit into your pocket. And when it ships, it will cost considerably less than professional solutions — as low as $149. Now, I am not suggesting that Scio is as powerful as professional mass spectrometers. However, from what I saw in the demo, it can do similar types of chemical analysis and do it pretty quickly, with the readout showing up on your smartphone.

While I find the idea of a pocket spectrometer interesting, where this could have real impact is if it were built straight into a smartphone. According to Mr. Sharon, this is ultimately where he sees his technology going. His initial focus is on food, medication and plants, although over time it could be expanded to cover just about any physical object. Imagine being able to point the scanner in a smartphone at an apple and know exactly how many calories it contains based on its weight. Or imagine having a stray pill lying around and wanting to know what it was before you dared ingest it.

I see this particular device as a game-changer of sorts. Today, all of our searches are done via text and numbers against structured databases of some type. But a consumer spectrometer, initially designed as a pocketable device and eventually built into smartphones, would let us understand the makeup of the physical objects we come into contact with each day and vastly expand a person’s knowledge base. I could imagine it as part of a set of teaching tools to get more kids interested in science, or as an important tool for solving a puzzle in a science-related game. At the other extreme, its impact on health-based problems and solutions could be enormous.

This is a technology to watch. As more people use it, Scio will get smarter, and if it someday finds its way directly into smartphones, it could add a new dimension to our understanding of the world around us. It could become an important means of connecting us to our physical world in ways we simply can’t today.

Bajarin is the president of Creative Strategies Inc., a technology industry analysis and market-intelligence firm in Silicon Valley. He contributes to Big Picture, an opinion column that appears every week on TIME Tech.
