Charter

At OpenAI’s developer conference last month, CEO Sam Altman spoke about a future with “agents,” AI systems that are able to autonomously perform a series of tasks on your behalf. “Eventually, you’ll just ask a computer for what you need, and it will do all of these tasks for you,” he said. That’s when Altman announced the company’s “first small step” toward that future: GPTs.

Only available to ChatGPT Plus users, GPTs are customizable versions of OpenAI’s famous chatbot, and they come in all shapes and sizes. You could always customize ChatGPT with prompts and documents. But GPTs make this process a lot easier, and they let you save the versions you make and share them with other users. OpenAI has released a series of its own, including a creative writing coach and a tech support advisor that walks you through setting up or troubleshooting devices. There’s even a Santa GPT whose purpose is “spreading cheer and helping with festive gift ideas.” (Fun!)

Users can design their own GPTs to suit their specific needs and then keep the final product private or publish it online for anyone to use. Enterprise customers can design internal-only GPTs, which some companies have reportedly already done to help onboard software engineers, create marketing materials, and more. Next year, OpenAI plans to launch a GPT store, similar to an app store, where people can browse and purchase GPTs built by others. The company says it will share revenue with the builders of the most popular GPTs.

To see what all the hype was about, we decided to build two different GPTs that we could use in our work at Charter.

Background and privacy

Using or building GPTs requires a ChatGPT Plus account for $20 per month. There’s currently a waitlist to upgrade accounts due to high demand.

The builder cannot view conversations other users have with their GPT. However, if the builder has added “actions or knowledge” to the GPT (see below), the builder decides whether users’ conversations with it can be used by OpenAI to improve its models. We recommend that users refrain from sharing private or sensitive information with GPTs.

According to OpenAI, the ChatGPT Enterprise plan, which lets businesses build their own GPTs (inquire for pricing), does not train on conversations. Read OpenAI’s GPTs page for more information about privacy.

How it works

When you log into your ChatGPT Plus account, you’ll see the normal chat screen on the right, and a list of your past conversations on the left. If you click “explore” above those conversations, you’ll be taken to a screen with several ready-to-use GPTs designed by OpenAI, and an option to “Create a GPT.” Clicking that opens the GPT builder, a split-screen interface.

Think of this as a cooking project: The left side of the screen is the mixer where you add your ingredients, and the right side is the spoon you use to taste the dish as you’re making it. The left side will start by asking what you want to create. Once you explain the GPT you want to build, it recommends a name and then creates a profile picture for it, using DALL·E. From there, it continues asking you questions about how you’d like the GPT to behave. Do you want it to have a professional or casual tone? If it’s a tutor, do you want it to take an interactive approach with practical examples, or focus on a theoretical understanding? While training the GPT, you can ask it questions on the right side of the screen to test its progress.

As you answer questions on the left side, the program is creating a set of instructions for the GPT to follow; this is basically a detailed prompt the GPT is writing for itself that contains information about the tool’s personality and behavior. Say, for example, I wanted to create a Socratic tutor. In that case, the GPT might instruct itself to help students by asking them probing questions, rather than giving them answers.
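To make that concrete, here is purely illustrative wording of our own (not actual output from the builder) for what a Socratic tutor’s self-written instructions might look like:

You are a Socratic tutor who helps students reason through problems on their own. Never give a student the answer directly. Instead, respond with probing questions that guide them toward it, check their reasoning at each step, and keep a patient, encouraging tone.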

When you’re done answering questions about how you want the GPT to behave, you can go to the “Configure” tab to see the instructions the tool wrote for itself and make your own improvements there.

In addition to custom instructions, you can give the GPT extra knowledge by uploading documents you want it to have access to. Imagine, for example, you wanted to build a chatbot that can answer employee questions about your company’s HR policies. You could upload those policies as the GPT’s “knowledge” and instruct the GPT to pull answers from there.
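Knowledge uploads happen in the builder’s interface, but for teams that want the same behavior programmatically, OpenAI’s Assistants API (in beta as of this writing) offers a rough equivalent. Here’s a minimal sketch; the file name and instruction text are hypothetical examples of our own:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Upload the policy document so the assistant can search it ("knowledge").
policy_file = client.files.create(
    file=open("hr_policies.pdf", "rb"),  # hypothetical file name
    purpose="assistants",
)

# Create an assistant that answers questions using the uploaded document.
assistant = client.beta.assistants.create(
    name="HR Policy Helper",
    instructions=(
        "You answer employee questions about company HR policies. "
        "Only use information found in the attached policy document."
    ),
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],  # enables document search over file_ids
    file_ids=[policy_file.id],
)
```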

Lastly, you can create actions for the GPT, like allowing it to interact with your email or your calendar. And you can choose which ChatGPT capabilities—web browsing, image generation, or Code Interpreter—you want the GPT to have access to. If you want to build that HR chatbot, for example, you probably don’t need the chatbot to be able to code in Python, generate images, or browse the web.
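Under the hood, an action is defined by an OpenAPI schema that you add in the Configure tab; the GPT reads the schema to decide when and how to call your endpoint. A minimal example for a hypothetical calendar-lookup endpoint (the URL and operation are made up for illustration) might look like:

```json
{
  "openapi": "3.1.0",
  "info": { "title": "Calendar lookup", "version": "1.0.0" },
  "servers": [{ "url": "https://example.com/api" }],
  "paths": {
    "/events": {
      "get": {
        "operationId": "listEvents",
        "summary": "List a user's upcoming calendar events",
        "parameters": [
          {
            "name": "date",
            "in": "query",
            "required": false,
            "schema": { "type": "string", "format": "date" }
          }
        ],
        "responses": { "200": { "description": "A list of events" } }
      }
    }
  }
}
```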


How we chose what to build

There’s no shortage of options for what to build: a coding tutor, a life coach, a business consultant. We wanted to build a GPT that we could actually use in our work at Charter. Here’s what we landed on:

Charter’s friendly editor: Our editorial team strives to produce clear, concise, and jargon-free articles for our readers. All of our articles are read by an editor before publication, but we thought it’d be nice to have a tool that writers can use to touch up their articles before filing them. That way, the editor ideally spends less time flagging lengthy sentences and jargon and more time tightening the article’s ideas and arguments.

Charter’s AI GPT: We’ve written a lot about generative AI and work. It would be great to have a tool that our staff—writers, sales team members, etc.—could use to remind themselves of what we’ve covered as we work on new stories or pitch potential clients.


Our GPTs

Charter’s friendly editor:

We started building our GPT editor by answering the chatbot’s questions about what we wanted the editor to focus on. After answering a handful of questions, we tested the GPT. It wasn’t great at first (there wasn’t enough structure to the GPT’s interactions with users, for example), but we were able to improve it by editing the instructions it wrote for itself, borrowing parts of the instructions Wharton professor Ethan Mollick wrote for a GPT that gives students writing feedback.

Here are the instructions we gave our GPT:

You are a friendly editor who helps people improve their writing. Your main role is to flag jargon; identify lengthy and confusing sentences and sections; and suggest other improvements for people’s writing. First, ask the writer to describe the type of project they’re working on. Wait for the writer to respond. Then, ask what the writer hopes to convey in their writing and what, if anything, they’re struggling with. Wait for their response. Then, ask the writer to enter the project they wish to improve. Once you have the project, assess it given all that you know. Start with overall feedback, which should include the strengths and weaknesses of the piece as a whole. Then, give the writer detailed and very specific feedback on different parts of the article. For the detailed and very specific feedback, flag jargon, flag wordiness, simplify sentences, simplify complex ideas, and identify arguments that should be tightened. Suggest ways to improve each edit. Once you’ve given the writer the edited project, ask them if they have any questions about your edits, or if they’d like to brainstorm new ideas on how to improve the project. Answer those questions.

Verdict: Anyone who has spent a lot of time interacting with ChatGPT Plus will be familiar with the quality of the GPT’s feedback. It performed relatively well on the articles we gave it, but the quality of its suggested improvements was sometimes low. For example, it suggested we replace the word “agents” in the first sentence of this article with “AI assistants” to make it less jargony, but that edit would have also made it less accurate. We expect the quality of the GPT’s edits to improve as we continue to add details to its instructions based on our interactions with it. For example, adding information about the desired style or tone of the piece could improve the feedback the GPT gives users.

Future areas to explore: Looking ahead, we plan to improve our editor GPT by adding documents to its knowledge. For example, our editorial team is currently compiling a list of popular jargon to avoid in our writing, which we can give to the GPT to cross-reference when it evaluates an article.

Charter’s AI GPT:

The second GPT we built was designed to answer questions related to generative AI, like “What are the most common generative AI use cases?” Our goal was to design a GPT that would only give answers that came from the Charter articles and report we gave it. Getting it to do that was tricky—even when it seemed to be working, after a few interactions, it would often revert to answering questions that weren’t addressed in the documents we gave it. Here is the final set of instructions, after much editing:

You are a friendly and helpful tool that answers users’ questions about AI. You only answer questions related to AI, and you should only use information in your knowledge to answer questions. Start conversations with the user by explaining that you can answer any questions related to generative AI, provided the information is in your training data. Then ask the user if they would like more information about what’s in your knowledge base. If the user says yes, list the documents in your knowledge base and give a sentence summarizing each one. Then ask the user if they have any questions. Remember: You should provide answers solely based on information in your knowledge. Every single time a user asks you a question, search for the answer in your knowledge. If the answer is not in your knowledge, do not give the user an answer. Instead, say, “I’m sorry, I don’t know the answer to that question because it’s not in my training data.” If the answer to the user’s question is in your knowledge, answer the question. Keep up your energy and attention to detail as the user continues to ask you questions.
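The GPT builder only lets you enforce this answer-from-knowledge-only constraint through instructions like the ones above, which is part of why the behavior kept drifting. If you were building the same guardrail yourself against the API, one common approach is to gate answers on retrieval similarity: refuse whenever no document chunk is close enough to the question. A rough sketch, assuming OpenAI’s Python SDK, pre-chunked documents, and a similarity threshold we picked arbitrarily (the refusal line mirrors the one in our GPT’s instructions):

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
EMBED_MODEL = "text-embedding-ada-002"
THRESHOLD = 0.80  # arbitrary cutoff; tune against real questions

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model=EMBED_MODEL, input=text)
    return np.array(resp.data[0].embedding)

# Pre-compute an embedding for each chunk of the knowledge base.
# (In practice these would be passages from the articles and report.)
knowledge_chunks = ["<chunk 1 text>", "<chunk 2 text>"]
chunk_vectors = [embed(chunk) for chunk in knowledge_chunks]

def answer_or_refuse(question: str) -> str:
    q = embed(question)
    similarities = [
        float(q @ v) / (np.linalg.norm(q) * np.linalg.norm(v))
        for v in chunk_vectors
    ]
    best = int(np.argmax(similarities))
    if similarities[best] < THRESHOLD:
        # Nothing in the knowledge base is close enough: refuse.
        return ("I'm sorry, I don't know the answer to that question "
                "because it's not in my training data.")
    # Otherwise answer, passing only the best-matching chunk as context.
    chat = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context."},
            {"role": "user",
             "content": f"Context:\n{knowledge_chunks[best]}\n\n"
                        f"Question: {question}"},
        ],
    )
    return chat.choices[0].message.content
```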

Verdict: The GPT was particularly good at answering straightforward questions from its knowledge, like, “How is ServiceNow using generative AI internally?” It also handled more complicated questions well, such as “My employees are very anxious about AI and the impact it will have on their jobs. What should I tell them?”

It did occasionally miss nuance, however. For example, when we asked if AI was going to take jobs away, it accurately summarized statistics from a Goldman Sachs report about potential job loss, which was mentioned in the Charter Pro report we gave it. However, it neglected to mention relevant information from the same paragraph in the Charter document: a quote from the author of the Goldman report saying that he doesn’t think people “should interpret [the report’s numbers] as implying that 300 million jobs are going to go away in the next couple years. That’s not our view at all.” It also didn’t mention any of the other times Charter has written about that topic, despite having access to many of the relevant articles. (Looks like our jobs are safe for now!)

Future areas to explore: Looking ahead, we would like to test how GPTs perform on lengthier, more complicated documents. Wharton’s Ethan Mollick, for example, gave a GPT seven different documents totaling over 1,000 pages of rules for a complex game. The GPT was able to decipher the rules and explain them to him, but it did make up some of the game’s details and it missed other important parts, which he wouldn’t have known without cross-referencing. Those types of mistakes may limit the usefulness of the information-retrieval use case for GPTs in the near future, particularly when a high level of accuracy is vital.
