By Alex Fitzpatrick
March 25, 2016

A senior Microsoft executive is apologizing after the company’s artificial intelligence chatbot experiment went horribly awry.

Microsoft’s software, called “Tay,” was designed to interact with Twitter users, learning from their conversations and mimicking their speech patterns. But online pranksters quickly realized they could manipulate Tay into sending hateful, racist messages.

Microsoft pulled Tay offline just a few hours after it launched Wednesday morning.

Peter Lee, Corporate Vice President at Microsoft Research, posted the following apology and explanation on a company blog:

Write to Alex Fitzpatrick at alex.fitzpatrick@time.com.
