Nearly a quarter of organizations polled in a recent McKinsey report said they had experienced negative consequences from generative AI’s inaccuracy. Guardrails, released last fall by Israel-based startup Aporia, places a collection of small language models between a chatbot and its users; together, these models intercept inaccurate, inappropriate, or off-topic responses while giving companies better privacy controls. Guardrails also blocks attempts to manipulate the AI, such as a user pressuring a chatbot into granting a discount. Liran Hason, Aporia’s co-founder and CEO, says the company’s goal is ensuring humanity “can really trust AI.” Guardrails’s early clients include insurance giant Munich Re and rental car company Sixt.