Nearly a quarter of organizations polled in a recent McKinsey report said they had experienced negative consequences from generative AI's inaccuracy. Guardrails, released last fall by Israel-based startup Aporia, adds a collection of small language models between a chatbot and its users; together, they intercept inaccurate, inappropriate, or off-topic responses while giving companies better privacy controls. It also blocks users' attempts to manipulate the AI, for example by stopping users who pressure a chatbot into giving them a discount. Liran Hason, Aporia's co-founder and CEO, says the company's goal is ensuring humanity "can really trust AI." Guardrails' early clients include insurance giant Munich Re and rental car company Sixt.