After Apple’s WWDC keynote last month, some reporters asked me if Apple’s new additions to Siri are reactionary. They assumed that since Amazon, Google and Microsoft have upped the intelligence of their voice assistants, Apple was forced to make Siri more competitive.
But to think that Apple’s Siri improvements are reactionary shows a lack of understanding about the company’s work in artificial intelligence (AI). Apple has been working on speech and voice AI solutions for decades. In fact, in 1992, I got involved with the earliest version of its voice technology research, which was tied to an early AI and machine learning engine. Apple has been building on that technology ever since.
Back then, there were many reasons for the company’s minimal success with voice recognition and speech. Mostly it was due to technological constraints. Also, that original work involved the company’s ill-fated Newton handheld, and mobile devices at the time had even less processing power than desktops did.
However, Apple has had a major AI project going on ever since. Only in 2011 did Apple believe the iPhone had enough horsepower to deliver the first generation of Siri. (Apple acquired the company behind Siri in 2010.) When Apple introduced its version of Siri, I wrote that it would become the underlying data-gathering engine for Apple, and could even become a threat to Google’s dominance in search. To some degree that has proven true. As people have used Siri, it has gathered all types of data points, and Apple started building its own knowledge database on top of the already impressive data it had.
Since then, Apple has been using new forms of AI and machine learning to expand that knowledge base. It has also tapped into other data sources to give Siri a much larger base of info to pull from when answering questions. The iPhone’s increasing processing power and Apple’s growing expertise at quickly answering questions have meant Siri has become smarter and faster with each passing year.
While watching demos of Apple’s latest version of Siri at WWDC, it became clear that Siri’s contextual skills have increased significantly. Users can now ask unstructured questions and get answers in a more contextual and conversational manner. To do this, Apple is using new AI and machine learning techniques that give Siri greater range and the ability to respond more accurately.
This is one major area where Apple’s rivals Google, Amazon and Microsoft have all raised their game as well. They are all spending billions on voice assistants, since whoever has the best version, the one that delivers the most accurate information in a highly conversational manner, has the chance not only to retain its user base but to attract even more customers. In Apple’s case, this AI work is vital to the company’s long-term success. Apple will deploy Siri across its myriad platforms, turning voice into an important new form of user interface.
Apple will continue to enhance its AI and machine learning technology, and will possibly make more acquisitions that could increase Siri’s range and accuracy over time. But Apple has been doing AI-based research for a long time. Expect that to continue well into the future, powering innovations across Apple’s hardware, software and services.
Tim Bajarin is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Mr. Bajarin is the president of Creative Strategies, Inc. and has been with the company since 1981, where he has served as a consultant providing analysis to most of the leading hardware and software vendors in the industry.