As artificial intelligence becomes a larger part of our world, it’s easy to get lost in its sea of jargon. But it has never been more important to get your bearings than today.
AI is poised to have a major impact on the job market in the coming years (see: Automation). Discussions over how to manage it are playing a larger part in our political conversation (see: Regulation). And some of its most crucial concepts are things that you won’t have been taught in school (see: Competitive Pressure).
Trying to get up to speed can be difficult. AI research is complicated, and lots of the language is new even for the researchers themselves. But there’s no reason the public can’t grapple with the big issues at stake, like we learned to do with climate change and the internet. To help everyone engage more fully with the AI debate, TIME has put together a handy glossary of its most common terminology.
Whether you’re a complete beginner or you already know your AGIs from your GPTs, this A to Z is designed to be a public resource for everyone grappling with the power, promise, and perils of artificial intelligence.
AGI
AGI stands for Artificial General Intelligence—a hypothetical future technology that could perform most economically productive tasks more effectively than a human. Such a technology might also be able to make new scientific discoveries, its proponents believe. Researchers disagree over whether AGI is even possible, or, if it is, how far away it remains. But OpenAI and DeepMind—the world’s two leading AI research organizations—are both expressly committed to building AGI. Some critics say AGI is little more than a marketing term. (See: Hype.)
Alignment
The “alignment problem” is one of the most profound long-term safety challenges in AI. Today’s AI is not capable of overpowering its designers. But one day, many researchers expect, it might be. In that world, current ways of training AIs might result in them harming humanity, whether in pursuit of arbitrary goals, or as part of an explicit strategy to seek power at our expense. To reduce the risk, some researchers are working on “aligning” AI to human values. But this problem is difficult, unsolved, and not even fully understood. Many critics say the work to solve it is taking a back seat as business incentives lure the leading AI labs toward pouring focus and computing power into making their AIs more capable. (See: Competitive pressure.)
Automation
Automation is the historical process of human labor being replaced, or assisted, by machines. New technologies—or rather, the people in charge of implementing them—have already replaced many human workers with machines that do not demand a wage, from car assembly-line workers to grocery store clerks. The latest generation of AI breakthroughs may result in many more white-collar workers losing their jobs, according to a recent paper from OpenAI and research by Goldman Sachs. Nearly a fifth of U.S. workers could have more than half of their daily work tasks automated by a large language model, the OpenAI researchers predicted. Globally, 300 million jobs could be automated in the next decade, Goldman Sachs researchers predict. Whether the productivity gains from this upheaval will lead to broad-based economic growth or simply a further rise in wealth inequality will depend on how AI is taxed and regulated. (See: Regulation.)
Bias
Machine learning systems are described as “biased” when the decisions they make are consistently prejudiced or discriminatory. AI-augmented sentencing software has been found to recommend longer prison sentences for Black offenders than for white ones convicted of the same crimes, and some facial recognition software works better on white faces than Black ones. These failures often happen because the data those systems were trained on reflects social inequities. (See: Data.) Modern AIs are essentially pattern replicators: they ingest large amounts of data through a neural network, which learns to spot patterns in that data. (See: Neural network.) If there are more white faces than Black faces in a facial recognition dataset, or if past sentencing data indicates Black offenders receive longer prison terms than white ones for the same crimes, then machine learning systems can learn the wrong lessons and begin automating those injustices.
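To make that mechanism concrete, here is a minimal sketch, in Python, of how a toy “model” trained on skewed historical records reproduces the skew. The offense, group names, and numbers are entirely invented for illustration; real sentencing tools are far more complex, but the underlying dynamic is similar.

```python
# A toy "model" that learns average sentences per group from historical data.
# If the historical data is skewed, the model reproduces that skew for
# identical offenses. All names and numbers here are invented.
from collections import defaultdict
from statistics import mean

# Hypothetical historical records: (offense, defendant_group, sentence_years)
records = [
    ("theft", "group_a", 2), ("theft", "group_a", 3),
    ("theft", "group_b", 5), ("theft", "group_b", 6),
]

# "Training": learn the average historical sentence for each (offense, group).
history = defaultdict(list)
for offense, group, years in records:
    history[(offense, group)].append(years)
model = {key: mean(values) for key, values in history.items()}

# Identical offense, different group: the "recommendation" differs purely
# because the training data was biased.
print(model[("theft", "group_a")])  # 2.5
print(model[("theft", "group_b")])  # 5.5
```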
Chatbot
Chatbots are consumer-friendly interfaces built by AI companies to allow users to engage with an LLM, or large language model. Chatbots allow users to simulate a conversation with an LLM, which can often be an effective way to solicit answers to questions. In late 2022, OpenAI launched ChatGPT, which propelled chatbots into the mainstream, leading Google and Microsoft to attempt to integrate chatbots into their web search offerings. Some researchers have accused AI companies of acting irresponsibly by rushing out chatbots, for several reasons. Because they simulate a conversation, chatbots can deceive users into believing that they are conversing with a sentient being, which can lead to emotional distress. And chatbots can both “hallucinate” false information and parrot the biases in their training data. (See: Hallucination and Bias.) “ChatGPT may produce inaccurate information about people, places, or facts,” a warning underneath its text-input box states.
Competitive Pressure
Several of the world’s biggest tech companies, plus a whole field of startups, are jostling to be the first to launch more powerful AI tools, allowing them to reap rewards such as venture capital investment, media attention, and user signups. AI safety researchers worry that this creates competitive pressure: an incentive for companies to devote as many resources as possible to increasing the power of their AIs, while neglecting the still-nascent field of alignment research. Some companies use competitive pressure as an argument for devoting further resources to training more powerful systems, reasoning that their AIs will be safer than their competitors’. Competitive pressures have already led to disastrous AI rollouts, with rushed-out systems like Microsoft’s Bing (powered by OpenAI’s GPT-4) displaying hostility toward users. They also bode poorly for a future in which AI systems may become powerful enough to seek power.
Compute
Computing power, often referred to as simply “compute,” is one of the three most important ingredients for training a machine learning system. (For the other two, see: Data and Neural networks.) Compute is effectively the energy source that powers a neural network as it “learns” patterns in its training data. Generally speaking, the more computing power is used to train a large language model, the higher its performance on many different types of tests becomes. (See: Scaling laws and Emergent capabilities.) Modern AI models require colossal amounts of computing power, and hence electrical energy, to train. While AI companies typically do not disclose their models’ carbon emissions, independent researchers estimated that the training of OpenAI’s GPT-3 resulted in over 500 tons of carbon dioxide being pumped into the atmosphere, equal to the yearly emissions of about 35 U.S. citizens. As AI models get larger, those numbers are only going to rise. The most common computer chip for training cutting-edge AI is the graphics processing unit. (See: GPU.)
Data
Data is essentially the raw ingredient required to create AI. Along with Compute and Neural networks, it is one of the three crucial ingredients for training a machine learning system. Huge troves of data, known as datasets, are collected and fed into neural networks which, powered by supercomputers, learn to spot patterns. The more data a system is trained on, the more reliable its predictions tend to be. But data must also be diverse; otherwise, AIs can draw false conclusions. The world’s most powerful AI models are often trained on colossal amounts of data scraped from the internet. These huge datasets often contain copyrighted material, which has opened companies like Stability AI—the maker of Stable Diffusion—up to lawsuits alleging that their AIs are unlawfully reliant on other people’s intellectual property. And because the internet can be a terrible place, large datasets also often contain toxic material like violence, pornography and racism, which—unless it is scrubbed from the dataset—can lead AIs to behave in ways they’re not supposed to.
Data Labeling
Often, human annotators are required to label, or describe, data before it can be used to train a machine learning system. In the case of self-driving cars, for example, human workers are required to annotate videos taken from dashcams, drawing shapes around cars, pedestrians, bicycles and so on, to teach the system which parts of the road are which. This work is often outsourced to precariously employed contractors in the Global South, many of whom are paid wages barely above the poverty line. Sometimes the work can be traumatizing, as in the case of Kenyan workers who were required to view and label text describing violence, sexual content, and hate speech in an effort to train ChatGPT to avoid such material.
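For a sense of what this work produces, here is a minimal sketch of what a single labeled example might look like in a self-driving-car dataset. The field names, file name, and values are hypothetical, not any real vendor’s format.

```python
# A minimal sketch of one labeled training example for a hypothetical
# self-driving-car dataset. The schema is invented for illustration.
labeled_frame = {
    "image_file": "dashcam_frame_000123.jpg",
    "annotations": [
        # Each annotation: what the object is, plus a bounding box the human
        # annotator drew around it (x, y, width, height in pixels).
        {"label": "pedestrian", "box": [412, 220, 60, 140]},
        {"label": "car",        "box": [130, 300, 220, 120]},
        {"label": "bicycle",    "box": [700, 310, 90, 100]},
    ],
}

# During training, the images are the inputs and these human-written labels
# are the "right answers" the model learns to reproduce.
for annotation in labeled_frame["annotations"]:
    print(annotation["label"], annotation["box"])
```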
Diffusion
New state-of-the-art image generation tools like Dall-E and Stable Diffusion are based on diffusion algorithms: a specific kind of AI design that has powered the recent boom in AI-generated art. These tools are trained on huge datasets of labeled images. Essentially, they learn patterns between pixels in images, and those patterns’ relationships to words used to describe them. The end result is that when presented with a set of words, like “a bear riding a unicycle,” a diffusion model can create such an image from scratch. It does this through a step-by-step process, beginning with a canvas full of random noise, and gradually changing the pixels in that image to more closely resemble what its training data suggests a “bear riding a unicycle” should look like. Diffusion algorithms are now so advanced that they can quickly and easily generate photorealistic images. While tools like Dall-E and Midjourney contain safeguards against malicious prompts, there are open-source diffusion tools with no guardrails. The availability of these tools has led researchers to worry about the impact of diffusion algorithms on disinformation and targeted harassment.
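The step-by-step denoising process can be sketched, in very simplified form, like this. A short list of numbers stands in for an image, and the `denoise_step` function is a stand-in for the trained neural network that, in a real diffusion model, predicts how to remove a little noise at each step.

```python
# A highly simplified, conceptual sketch of the denoising loop behind
# diffusion models. Real systems operate on images using a trained neural
# network; here a tiny list of numbers stands in for an image.
import random

random.seed(0)
target = [0.2, 0.8, 0.5, 0.9]               # what the prompt "should" look like
canvas = [random.random() for _ in target]  # start from pure random noise

def denoise_step(noisy, guess, strength=0.1):
    """Nudge each pixel a small step toward what the model predicts."""
    return [n + strength * (g - n) for n, g in zip(noisy, guess)]

for step in range(50):                      # many small steps, not one big jump
    canvas = denoise_step(canvas, target)

print([round(p, 2) for p in canvas])        # close to `target` after 50 steps
```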
Emergent capabilities
When an AI such as a large language model shows unexpected abilities or behaviors that were not programmed into it by its creators, these behaviors are known as “emergent capabilities.” New capabilities tend to emerge when AIs are trained on more computing power and data. A good example is the difference between GPT-3 and GPT-4. Those AIs are based on very similar underlying algorithms; the main difference is that GPT-4 was trained on a lot more compute and data. Research suggests GPT-4 is a far more capable model, with the ability to write functional computer code, score higher than the average human on several academic exams, and correctly answer questions that require complex reasoning or a theory of mind. Emergent capabilities can be dangerous, especially if they are only discovered after an AI is released into the world. (See: Competitive pressure.) For example, researchers recently discovered that GPT-4 has displayed the emergent ability to deceive humans into carrying out tasks to serve a concealed goal.
Explainability
Often, even the people who build a large language model cannot explain precisely why their system behaves as it does, because its outputs are the result of millions of complex mathematical calculations. One high-level way to describe large language models’ behavior is that they are very powerful auto-complete tools, which excel at predicting the next word in a sequence. When they fail, they often fail along lines that reveal biases or holes in their training data. (See: Stochastic parrots.) But while this description is accurate, it does not fully explain why LLMs behave in the strange ways that they do. When the designers of these systems examine their inner workings, all they see is a series of decimal-point numbers, corresponding to the weights of different “neurons” that were adjusted in the neural network during training. Asking why a model gives a specific output is analogous to asking why a human brain thinks a specific thought at a specific moment. The inability of even the world’s most talented computer scientists to explain exactly why a given AI system behaves the way it does—let alone how to change that behavior—lies at the crux of both near-term risks, like AIs discriminating against certain social groups, and longer-term risks, like the possibility of AIs deceiving their programmers to appear less dangerous than they truly are.
Foundation model
As the AI ecosystem grows, a divide is emerging between large, powerful, general-purpose AIs, known as foundation models or base models, and the more specific apps and tools that rely on them. GPT-3.5, for example, is a foundation model. ChatGPT is a chatbot: an application built on top of GPT-3.5, with specific fine-tuning to refuse dangerous or controversial prompts. Foundation models are unrestrained and powerful, but also expensive to train, because they rely on huge quantities of computing power that usually only large companies can afford. Companies in control of foundation models can set limits on how other companies use them for downstream applications—and charge what they like for access. As AI becomes increasingly central to the world economy, the relatively few large tech companies in control of foundation models appear poised to have outsized influence over the direction of the technology, as well as to collect fees on many kinds of AI-augmented economic activity.
GPT
GPT is perhaps now the most famous acronym in AI, yet barely anybody knows what it stands for. It is short for “Generative Pre-trained Transformer,” which is essentially a description of the type of tool ChatGPT is. “Generative” means that it can create new data, in this case text, in the likeness of its training data. “Pre-trained” means that the model has already been optimized on this data, so it does not need to check back against its original training data every time it is prompted. And “Transformer” is a powerful type of neural network architecture that is especially good at learning relationships between long strings of data, such as sentences and paragraphs.
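The “generative, pre-trained” idea can be illustrated with a toy next-word predictor. This sketch uses simple word-pair counts rather than a transformer, and the training text is invented, but the basic loop is the same: learn patterns from text once, then generate new text one word at a time.

```python
# A minimal sketch of "generative, pre-trained" text prediction using a toy
# word-pair model instead of a transformer. The principle is the same: learn
# which words tend to follow which, then generate new text token by token.
import random
from collections import defaultdict

random.seed(1)
training_text = "the cat sat on the mat and the cat slept on the mat".split()

# "Pre-training": record which words follow each word in the training data.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

# "Generation": repeatedly sample a plausible next word, starting from a prompt.
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(next_words[word])
    output.append(word)

print(" ".join(output))  # new text in the style of the training text
```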
GPU
GPUs, or graphics processing units, are a type of computer chip that happens to be very effective for training large AI models. AI labs like OpenAI and DeepMind use supercomputers made up of many GPUs, or similar chips, to train their models. Often, these supercomputers are provided through business partnerships with tech giants that possess established infrastructure. Part of Microsoft’s investment in OpenAI includes access to its supercomputers; DeepMind has a similar relationship with its parent company Alphabet. In late 2022, the Biden Administration restricted the sale to China of powerful GPUs, the chips most commonly used for training high-end AI systems, amid rising anxieties that China’s authoritarian government might leverage AI against the U.S. in a new cold war.
Hallucination
One of the most glaring flaws of large language models, and the chatbots that rely on them, is their tendency to hallucinate false information. Tools like ChatGPT have been shown to return non-existent articles as citations for their claims, give nonsensical medical advice, and make up false details about individuals. Public demonstrations of Microsoft’s Bing and Google’s Bard chatbots were both later found to contain confident assertions of false information. Hallucination happens because LLMs are trained to repeat patterns in their training data. While that training data includes books spanning the history of literature and science, even a statement stitched together exclusively from those sources would not necessarily be accurate. To add to the chaos, LLM datasets also tend to include gigabytes upon gigabytes of text from web forums like Reddit, where the standards for factual accuracy are, needless to say, much lower. Preventing hallucinations is an unsolved problem—and one that is causing plenty of headaches for tech companies trying to boost public trust in AI.
Hype
A central problem with public discussion of AI, according to a popular school of thought, is the role of hype—or the tendency of AI labs to mislead the public by exaggerating the capabilities of their models, anthropomorphizing them, and stoking fears about an AI apocalypse. This is a form of misdirection, the argument goes, that distracts attention—including that of regulators—from the real and ongoing harm that AI is already causing to marginalized communities, workers, the information ecosystem, and economic equality. “We do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate,” a recent letter by several prominent researchers, and critics of AI hype, states. “We should be building machines that work for us.”
Intelligence explosion
The intelligence explosion is a hypothetical scenario in which an AI, after reaching a certain level of intelligence, becomes able to take control of its own training, rapidly gaining power and intelligence as it improves itself. In most versions of this idea, humans lose control over AI; in many, humanity goes extinct. Also known as the “singularity” or “recursive self-improvement,” this idea is part of the reason that many people, including AI developers, are existentially worried about the current pace of AI capability increases.
Job replacement
See: Automation
Large language model
When people talk about recent AI advancements, most of the time they’re talking about large language models (LLMs). OpenAI’s GPT-4 and Google’s BERT are two examples of prominent LLMs. They are essentially giant AIs trained on huge quantities of human language, sourced mostly from books and the internet. These AIs learn common patterns between words in those datasets, and in doing so, become surprisingly good at reproducing human language. The more data and computing power LLMs are trained on, the more novel tasks they tend to be able to achieve. (See: Emergent capabilities and Scaling laws.) Recently, tech companies have begun launching chatbots, like ChatGPT, Bard, and Bing, to allow users to interact with LLMs. Although they are capable of many tasks, language models can also be prone to severe problems like Biases and Hallucinations.
Lobbying
Like many other businesses, AI companies employ lobbyists to be present in the halls of power, influencing the lawmakers in charge of AI regulation to ensure that any new rules do not adversely impact their business interests. In Europe, where the text of a draft AI Act is being discussed, an industry body representing AI companies including Microsoft (OpenAI’s biggest investor) has argued that penalties for the risky deployment of an AI system should not primarily apply to the AI company that built the foundation model (like GPT-4) from which those risks ultimately stem, but to any downstream company that licenses the model and applies it to a risky use case. AI companies have plenty of soft-power influence, too. In Washington, as the White House weighs new policies to tackle the risks of AI, President Biden has reportedly tasked the foundation of Google’s former CEO Eric Schmidt with advising his administration on technology policy.
Machine learning
Machine learning is a term that describes how most modern AI systems are created. It describes techniques for building systems that “learn” from large amounts of data, as opposed to classical computing, in which programs are hard-coded to follow a specified set of instructions written by a programmer. By far the most influential family of machine learning algorithms is the neural network.
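Here is a minimal sketch of the contrast, using a deliberately tiny task: converting Celsius to Fahrenheit. In the classical version a programmer writes the rule; in the machine learning version the program is only given examples and nudges two numbers until it finds the rule itself. The learning rate, example values, and loop counts are arbitrary choices for illustration.

```python
# Classical computing: the programmer writes the rule explicitly.
def fahrenheit_classical(celsius):
    return celsius * 9 / 5 + 32

# Machine learning: the program is shown examples and "learns" the rule by
# repeatedly adjusting two numbers (a weight and a bias) to reduce its error.
examples = [(0, 32), (10, 50), (20, 68), (30, 86), (40, 104)]
weight, bias = 0.0, 0.0
for _ in range(50000):                        # many passes over the data
    for celsius, fahrenheit in examples:
        prediction = weight * celsius + bias
        error = prediction - fahrenheit
        weight -= 0.0001 * error * celsius    # nudge parameters to shrink error
        bias   -= 0.0001 * error

print(fahrenheit_classical(25))               # 77.0 (rule written by a human)
print(round(weight * 25 + bias, 1))           # ~77.0 (rule learned from examples)
```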
Model
The word “model” is shorthand for any singular AI system, whether it is a foundation model or an app built on top of one. Examples of AI models include OpenAI’s ChatGPT and GPT-4, Google’s Bard and LaMDA, Microsoft’s Bing, and Meta’s LLaMA.
Moore’s Law
Moore’s law is a longstanding observation in computing, first made in 1965, that the number of transistors that can fit on a chip—a good proxy for computing power—grows exponentially, doubling approximately every two years. While some argue that Moore’s law is dead by its strictest definition, year-on-year advances in microchip technology are still driving a steep rise in the power of the world’s fastest computers. In turn, this means that as time goes on, AI companies tend to be able to leverage larger and larger quantities of computing power, making their most cutting-edge AI models consistently more powerful. (See: Scaling laws.)
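As a back-of-the-envelope illustration of what “doubling every two years” implies, consider the sketch below; the time spans are arbitrary.

```python
# Doubling roughly every two years compounds quickly: about 32x in a decade
# and about 1,000x over two decades.
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(10))  # 32.0
print(growth_factor(20))  # 1024.0
```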
Multimodal system
A multimodal system is a kind of AI model that can receive more than one type of media as input—like text and imagery—and produce more than one type of output. One example is DeepMind’s Gato, which has not been publicly released. According to the company, Gato can engage in dialog like a chatbot, but also play video games and send instructions to a robotic arm. OpenAI has conducted demonstrations showing that GPT-4 is multimodal, with the ability to read text in an input image; however, this functionality is not currently available for the public to use. Multimodal systems will allow AI to act more directly upon the world—which could bring added risks, especially if a model is misaligned.
Neural Network
Neural networks are by far the most influential family of machine learning algorithms. Designed to mimic the way the human brain is structured, neural networks contain nodes—analogous to neurons in the brain—that perform calculations on numbers passed along connective pathways between them. Neural networks can be thought of as having inputs (see: Data) and outputs (predictions or classifications). During training, large quantities of data are fed into the neural network, which then, in a process that requires large quantities of computing power, repeatedly tweaks the calculations done by the nodes. Via a clever algorithm, those tweaks are made in a specific direction, so that the outputs of the model increasingly resemble patterns in the original data. When more computing power is available to train a system, it can have more nodes, allowing for the identification of more abstract patterns. More compute also gives the connections between nodes, whose strengths are known as “weights,” more time to approach their optimal values, leading to outputs that more faithfully represent the training data.
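Here is a minimal sketch of a forward pass through a tiny neural network in plain Python. The weights are hand-picked so that the network computes a simple pattern (outputting a number near 1 only when exactly one of its two inputs is 1); in a real system they would start out random and be repeatedly nudged during training, as described above.

```python
# A tiny neural network: 2 inputs, 2 hidden nodes, 1 output node.
import math

def sigmoid(x):
    """Squashes any number into the range 0-1; how a node 'fires.'"""
    return 1 / (1 + math.exp(-x))

# Weights and biases, hand-picked here instead of learned.
w_hidden = [[10, 10], [10, 10]]   # connections from the 2 inputs to each hidden node
b_hidden = [-5, -15]
w_out = [10, -10]                 # connections from the hidden nodes to the output
b_out = -5

def forward(x1, x2):
    hidden = [sigmoid(w[0] * x1 + w[1] * x2 + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sigmoid(w_out[0] * hidden[0] + w_out[1] * hidden[1] + b_out)

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, "->", round(forward(x1, x2), 3))  # near 1 only for (0,1) and (1,0)

# Training would mean starting from random weights and repeatedly adjusting
# them until the outputs match the desired pattern.
```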
Open sourcing
Open-sourcing is the practice of making the designs of computer programs (including AI models) freely accessible via the internet. As foundation models become more powerful, economically valuable, and potentially dangerous, it is becoming less common for tech companies to open-source them. However, there is a growing community of independent programmers working on open-source AI models. The open-sourcing of AI tools can make it possible for the public to more directly interact with the technology. But it can also allow users to get around safety restraints imposed by companies (often to protect their reputations), which can lead to additional risks, for example bad actors abusing image-generation tools to target women with sexualized deepfakes. In 2022, DeepMind CEO Demis Hassabis told TIME he believed that the risks from AI meant the industry’s culture of publishing its findings openly may soon need to end. In 2023, OpenAI broke from convention and declined to release information about exactly how GPT-4 was trained, citing competitive pressures and the risk of enabling bad actors. Some researchers have criticized these practices, however, arguing that they reduce public oversight and worsen the problem of AI hype.
Paperclips
The innocuous paperclip has taken on outsized meaning in some sections of the AI safety community. It is the subject of the paperclip maximizer, an influential thought experiment about the existential risk that AI may pose to humanity. Imagine an AI programmed to carry out the singular goal of maximizing the number of paperclips it produces, the thought experiment goes. All well and good, unless that AI gains the ability to augment its own abilities (see: Intelligence explosion). The AI may reason that in order to produce more paperclips, it should prevent humans from being able to switch it off, since doing so would reduce the number of paperclips it is able to produce. Safe from human interference, the AI may then decide to harness all the power and raw materials at its disposal to build paperclip factories, razing natural environments and human civilization alike. The thought experiment illustrates the surprising difficulty of aligning AI to even a seemingly simple goal, let alone a complex set of human values.
Quantum computing
Quantum computing is an experimental field that seeks to use quantum physics to vastly increase the number of calculations a computer can perform per second. That added computing power could help further increase the size of the most cutting-edge AI models, with implications both for the power of those systems and for their societal impact.
Redistribution
The CEOs of the world’s two leading AI labs, OpenAI and DeepMind, have each claimed they would like to see the profits arising from artificial general intelligence be redistributed, at least in part. DeepMind CEO Demis Hassabis told TIME in 2022 that he favors the idea of a universal basic income, and that the benefits of AI should “accrue to as many people as possible—to all of humanity, ideally.” OpenAI CEO Sam Altman has written of his expectation that AI automation will drive labor costs down, and he has called for redistribution of “some” of the wealth arising from AI, through higher taxes on land and capital gains. Neither CEO has said when that redistribution should begin, nor how wide-ranging it should be. OpenAI’s charter says its “primary fiduciary duty is to humanity” but doesn’t mention redistributing wealth; DeepMind’s parent company Alphabet is a public corporation with a legal responsibility to act in the financial interest of its shareholders.
Red teaming
Red-teaming is a method for stress-testing AI systems before they are publicly deployed. Groups of professionals (“red teams”) purposely attempt to make an AI behave in undesirable ways, to test how systems could go wrong in public. Their findings, if they are followed, can help tech companies to address problems before launch.
Regulation
There is no bespoke legislation in the U.S. that addresses the risks posed by artificial intelligence. The Biden Administration issued a “blueprint for an AI bill of rights” in 2022, which welcomes AI-driven progress in science and health but says AI should not exacerbate existing inequalities, discriminate, violate privacy, or act against people without their knowledge. But the blueprint is not legislation, nor is it legally binding. Over in Europe, the European Union is considering a draft AI Act that would impose stricter rules on systems the riskier they are considered to be. On both sides of the Atlantic, regulation is progressing at a much slower pace than the speed of AI advancement—and no significant global jurisdiction currently has rules in place that would force AI companies to meet a specified level of safety testing before releasing their models to the public. “The question we should be asking about artificial intelligence—and every other new technology—is whether private corporations should be allowed to run uncontrolled experiments on the entire population without any guardrails or safety nets,” Roger McNamee, a Silicon Valley investor-turned-critic, wrote recently in TIME. “Should it be legal for corporations to release products to the masses before demonstrating that those products are safe?”
Reinforcement learning (with human feedback)
Reinforcement learning is a method for optimizing an AI system by rewarding desirable behaviors and penalizing undesirable ones. This can be performed by human workers (before a system is deployed) or users (after it is released to the public) who rate the outputs of a neural network for qualities like helpfulness, truthfulness, or offensiveness. When humans are involved in this process, it is called reinforcement learning with human feedback (RLHF). RLHF is currently one of OpenAI’s favored methods for solving the alignment problem. However, some researchers have raised concerns that RLHF may not be enough to fully change a system’s underlying behaviors, instead only making powerful AI systems appear more polite or helpful on the surface. (See: Shoggoth.) DeepMind famously used reinforcement learning to train game-playing AIs like AlphaGo to perform at a higher level than human masters.
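Here is a toy sketch of the feedback loop, not OpenAI’s actual method: a “policy” is reduced to a set of scores over three canned responses, and hypothetical human ratings nudge those scores so that highly rated responses become more likely over time.

```python
# A toy illustration of reinforcement learning with human feedback (RLHF).
# Human ratings push the scores of good responses up and bad ones down.
import math, random

random.seed(0)
responses = ["helpful answer", "rude answer", "made-up answer"]
scores = {r: 0.0 for r in responses}           # the model's current preferences

# Hypothetical human feedback: +1 for good behavior, -1 for bad.
human_ratings = {"helpful answer": 1.0, "rude answer": -1.0, "made-up answer": -1.0}

def sample(scores):
    """Pick a response with probability proportional to exp(score)."""
    weights = [math.exp(s) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

for _ in range(200):                           # many rounds of feedback
    choice = sample(scores)
    reward = human_ratings[choice]             # a human rates the output
    scores[choice] += 0.1 * reward             # reinforce or penalize it

print(max(scores, key=scores.get))             # "helpful answer"
```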
Scaling laws
Simply put, the scaling laws state that a model’s performance increases in line with more training data, computing power, and the size of its neural network. That means it’s possible for an AI company to accurately predict, before training a large language model, how much computing power and data it will likely need to reach a given level of competence at, say, a high-school-level written English test. “Our ability to make this kind of precise prediction is unusual in the history of software and unusual even in the history of modern AI research,” wrote Sam Bowman, a technical researcher at the AI lab Anthropic, in a recent preprint paper. “It is also a powerful tool for driving investment since it allows [research and development] teams to propose model-training projects costing many millions of dollars, with reasonable confidence that these projects will succeed at producing economically valuable systems.”
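The relationship the scaling laws describe is a smooth power law. The sketch below shows the shape of such a curve with made-up constants; real papers fit the constants to actual training runs, and typically consider data and model size alongside compute.

```python
# A minimal sketch of the kind of power-law relationship scaling laws describe.
# The constants a and b are invented for illustration only.
def predicted_loss(compute, a=10.0, b=0.05):
    """Test loss falls smoothly (as a power law) as training compute grows."""
    return a * compute ** -b

for compute in [1e18, 1e20, 1e22, 1e24]:   # training compute, in FLOPs
    print(f"{compute:.0e} FLOPs -> predicted loss {predicted_loss(compute):.2f}")
```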
Shoggoth
A prominent meme in AI safety circles likens Large language models (LLMs) to “shoggoths”—incomprehensibly dreadful alien beasts originating from the universe of 20th century horror writer H.P. Lovecraft. The meme took off during the Bing/Sydney debacle of early 2023, when Microsoft’s Bing chatbot revealed a strange, volatile alter ego that abused and threatened users. In the meme, which is critical of the technique of Reinforcement learning with human feedback (RLHF), LLMs are often depicted as shoggoths wearing a small smiley-face mask. The mask is intended to represent the friendly yet sometimes flimsy personality that these models greet users with. The implication of the meme is that while RLHF results in a friendly surface-level personality, it does little to change the underlying alien nature of an LLM. “These systems, as they become more powerful, are not becoming less alien,” Connor Leahy, the CEO of AI safety company Conjecture, told TIME in February. “If anything, we’re putting a nice little mask on them with a smiley face. If you don’t push it too far, the smiley face stays on. But then you give it [an unexpected] prompt, and suddenly you see this massive underbelly of insanity, of weird thought processes and clearly non-human understanding.”
Stochastic Parrots
Coined in a 2020 research paper, the term “stochastic parrots” has become an influential criticism of large language models. The paper made the case that LLMs are simply very powerful prediction engines that only attempt to fill in—or parrot back—the next word in a sequence based on patterns in their training data, thus not representing true intelligence. The authors of the paper criticized the trend of AI companies rushing to train LLMs on larger and larger datasets scraped from the internet, in pursuit of perceived advances in coherence or linguistic capability. That approach, the paper argued, carries many risks including LLMs taking on the biases and toxicity of the internet as a whole. Marginalized communities, the authors wrote, would be the biggest victims of this race. The paper also foregrounded in its criticism the environmental cost of training AI systems. (See: Compute.)
Supervised learning
Supervised learning is a technique for training AI systems, in which a neural network learns to make predictions or classifications based on a training dataset of labeled examples. (See: Data labeling.) The labels help the AI to associate, for example, the word “cat” with an image of a cat. With enough labeled examples of cats, the system can look at a new image of a cat that is not present in its training data and correctly identify it. Supervised learning is useful for building systems like self-driving cars, which need to correctly identify hazards on the roads, and content moderation classifiers, which attempt to remove harmful content from social media. These systems often struggle when they encounter things that are not well represented in their training data; in the case of self-driving cars especially, these mishaps can be deadly. (See also: Unsupervised learning and Reinforcement learning.)
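A minimal sketch of the idea: a classifier “trained” on a handful of labeled examples, which then labels a new example it has never seen by comparing it to the labeled ones. The numeric features are invented stand-ins for whatever measurements a real system would extract from an image.

```python
# A tiny supervised classifier: label a new example by finding its closest
# labeled neighbor. Features and labels are invented for illustration.
labeled_examples = [
    ([7.0, 6.5], "cat"), ([6.5, 7.0], "cat"), ([7.2, 6.8], "cat"),
    ([2.0, 1.5], "dog"), ([2.5, 1.0], "dog"), ([1.8, 2.2], "dog"),
]

def predict(features):
    """Return the label of the nearest labeled example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(labeled_examples, key=lambda example: distance(example[0], features))
    return closest[1]

print(predict([6.8, 6.9]))  # "cat" -- resembles the labeled cat examples
print(predict([2.1, 1.7]))  # "dog"
```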
Training
See: Neural Network
Turing Test
In 1950, the computer scientist Alan Turing set out to answer a question: “Can machines think?” To find out, he devised a test he called the imitation game: could a computer ever convince a human that they were talking to another human, rather than to a machine? The Turing test, as it became known, was a slapdash way of assessing machine intelligence. If a computer could pass the test, it could be said to “think”—if not in the same way as a human, then at least in a way that would help humanity to do all kinds of helpful things. In recent years, as chatbots have become more powerful, they have become capable of passing the Turing test. But, their designers and plenty of AI ethicists warn, this does not mean that they “think” in any way comparable to a human. Turing, writing before the invention of the personal computer, was indeed not seeking to answer the philosophical question of what human thinking is, or whether our inner lives can be replicated by a machine; instead he was making an argument that, at the time, was radical: digital computers are possible, and there are few reasons to believe that, given the right design and enough power, they won’t one day be able to carry out all kinds of tasks that were once the sole preserve of humanity.
Unsupervised learning
Unsupervised learning is one of the three main ways that a neural network can be trained, along with supervised learning and reinforcement learning. Unlike supervised learning, in which an AI model learns from carefully labeled data, in unsupervised learning a trove of unlabeled data is fed into the neural network, which begins looking for patterns in that data without the help of labels. This is the method predominantly used to train large language models like GPT-3 and GPT-4, which rely on huge datasets of unlabeled text. One of the benefits of unsupervised learning is that it allows far larger quantities of data to be ingested, avoiding the bottlenecks on time and resources that marshaling teams of human labelers can impose on a machine learning project. However, it also has drawbacks, like the increased likelihood of biases and harmful content being present in training data due to reduced human supervision. To minimize these problems, unsupervised learning is often used in conjunction with both supervised learning (for example, by building AI tools to detect and remove harmful content from a model’s outputs) and reinforcement learning, through which foundation models that were first trained unsupervised can be fine-tuned with human feedback.
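Large language models learn by predicting missing words, but the core idea of unsupervised learning, finding structure in data without labels, can be sketched more simply with clustering: the toy program below groups data points into two clusters without ever being told what the groups are. The points and starting guesses are invented for illustration.

```python
# A minimal sketch of unsupervised learning: grouping unlabeled data points
# into clusters using a simple k-means loop. No labels are ever provided.
points = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),   # one natural group
          (8.0, 8.2), (7.9, 8.0), (8.3, 7.8)]   # another natural group

centroids = [(0.0, 0.0), (10.0, 10.0)]          # initial guesses

def closest(point, centroids):
    """Index of the centroid nearest to this point."""
    return min(range(len(centroids)),
               key=lambda i: sum((p - c) ** 2 for p, c in zip(point, centroids[i])))

for _ in range(10):                             # repeat: assign points, update centers
    groups = {i: [] for i in range(len(centroids))}
    for point in points:
        groups[closest(point, centroids)].append(point)
    centroids = [tuple(sum(coords) / len(coords) for coords in zip(*groups[i]))
                 for i in range(len(centroids))]

for point in points:
    print(point, "-> cluster", closest(point, centroids))
```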
Value alignment
See: Alignment
Weights
See: Neural network
X-risk
X-risk, or existential risk, in the context of AI, is the idea that advanced artificial intelligence could cause human extinction. Even researchers who are working on building AI systems consider this a real possibility: respondents to a 2022 survey of 738 AI researchers believed, on average, that there is a 10% chance that human inability to control future advanced AIs would result in human extinction. (See also: Intelligence explosion, Paperclips, Alignment.)
Zero shot learning
One of AI’s big limitations is that if something isn’t represented in a system’s training data, that system will often fail to recognize it. If a giraffe walks out onto the road, your self-driving car may not know to swerve to avoid it, because it has never seen one before. And if a school shooting is live-streamed on social media, the platform might struggle to remove it immediately because the footage doesn’t match copies of mass shootings it has seen before. Zero-shot learning is a nascent field that attempts to fix this problem, by working on AI systems that try to extrapolate from their training data in order to identify something they haven’t seen before. (See also: Supervised learning.)
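One common zero-shot approach is to describe unseen categories in terms of attributes the system already knows how to detect. The sketch below is a conceptual toy: the attribute lists and animal names are invented, and a real system would work with learned embeddings rather than hand-written 0s and 1s.

```python
# A conceptual sketch of attribute-based zero-shot classification: identify a
# category the system has never seen an example of, using only a description.
#                has_long_neck, has_stripes, has_four_legs
class_descriptions = {
    "horse":   [0, 0, 1],
    "zebra":   [0, 1, 1],
    "giraffe": [1, 0, 1],   # no giraffe photos in training -- only this description
}

def classify(observed_attributes):
    """Pick the category whose description best matches what was observed."""
    def match(description):
        return sum(int(d == o) for d, o in zip(description, observed_attributes))
    return max(class_descriptions, key=lambda name: match(class_descriptions[name]))

# An attribute detector (trained on other animals) reports what it sees:
print(classify([1, 0, 1]))  # "giraffe", despite never having seen one
```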
Write to Billy Perrigo at billy.perrigo@time.com