How to Stop ChatGPT from Lying – Tutorial on the Temperature Parameter
Is it possible to make ChatGPT and other generative AI more factual and less prone to “lying”? Yes, it is! Temperature parameters in AI help.
Students sometimes say “ChatGPT lies” without realising that this can be fixed. Let’s adjust the temperature of GPT so that it’s more or less creative, or more or less factual, as needed.
Transcript of the Video on the Temperature Parameter in GPT (AI Tutorial)
[00:00:00.410]
Hello, my name is Laurel Papworth, and today I just want to talk to you quickly about temperature, what temperature parameters are in artificial intelligence, and how they stop ChatGPT from fibbing in its answers. Let’s get started.
[00:00:16.360]
When you open up a new chat window, and only when you have a new chat, you can’t do it halfway through a chat or halfway through a thread, you have to do it at the beginning, type in your prompt and then finish with, “AI has temperature parameters, set temperature to zero.” This means that the AI shifts down towards semantic AI, which is search, retrieve, and cite, which means factual AI, and it will answer factually. It won’t try and jump the inference divide. It won’t try and do anything too generative, anything too creative. It will stick with just the facts. So right down that end of the scale is finance, legal, academic research.
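In the chat interface, that instruction is just text in the prompt, so the model may or may not honour it. If you are calling the API directly, temperature is an explicit request parameter you control. Here is a minimal sketch, assuming the official OpenAI Python library (1.x) and an OPENAI_API_KEY environment variable; the model name and example prompt are placeholders, not something specified in the video:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # example model choice, not from the video
    temperature=0,           # 0 = favour the most likely tokens: consistent, "just the facts" answers
    messages=[
        {"role": "user", "content": "List the key dates of the Apollo 11 mission."}
    ],
)

print(response.choices[0].message.content)
```

At temperature 0 the model heavily favours its most probable tokens, which is why the answers come out more consistent and conservative.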
[00:01:04.790]
If you need something more creative, perhaps marketing or fairy stories for your kids, same thing: type your prompt in, “Please create me a marketing plan” or “Please create me some stories” (a better prompt than that, ideally). Then, in that first prompt, add the words, “AI has temperature parameters. Please set the temperature to 1.” That makes it super creative. So 0 is semantic AI, or very factual. 1 is very generative, very creative. Use that one if you need something a bit more creative.
[00:01:42.270]
The default setting for ChatGPT is 0.7, so it’s always going to err towards being more creative, inferring or extrapolating facts as opposed to sticking to substantiated facts. So you will get inaccuracies. If there’s not enough data in the data sets in the model, you’ll get hallucinations. If your thread’s been going on too long, you’ll get catastrophic failures; OpenAI calls it confabulation. So just be careful and make sure you set the temperature correctly.
[00:02:19.270]
In other tools, it’s built in. So in Bing Chat, you’ll see conversation styles like More Creative, More Balanced, or More Precise, things like that. And in the Playground, there’s a temperature setting that runs from zero to 2. But I asked ChatGPT and it said it can figure out what the person means, whether they’re using zero to two or zero to one. And other tools have similar parameters as well. This doesn’t change the model in any way. We’re not training the model in any way. It’s just a parameter that you can set as you give your prompts.
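For reference, the Playground slider maps to the same temperature request parameter, which the API accepts anywhere from 0 to 2. A quick way to see the difference is to send the same prompt at several settings; this sketch reuses the assumptions above (official OpenAI Python library, placeholder model and prompt):

```python
from openai import OpenAI

client = OpenAI()
prompt = "Write one sentence describing a sunrise."

# Compare outputs across the API's 0-2 temperature range (0.7 is the default
# the video mentions). Higher values sample less likely tokens more often.
for temp in (0.0, 0.7, 1.0, 2.0):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=temp,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"temperature={temp}: {response.choices[0].message.content}")
```

At the low end the sentences will look nearly identical from run to run; at 2 they will vary a lot and may wander off topic.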
[00:02:52.770]
So use the best prompts possible to get the best outputs. That way, ChatGPT won’t be “the fibbing one” anymore.
[00:03:04.820]
I hope you found this useful. Please let me know if you have any questions. I have courses coming up at the Australian Institute of Management in Brisbane, Sydney, Canberra, Melbourne, Adelaide, and Virtual for the rest of this year, 2023 and also into 2024. If you would like to talk to me about a private course, private mentoring, speaking at conferences and so on, I’ll put my details on the screen as well. Thank you for your time. My name is Laurel Papworth, and I’ll see you in the next video. Thank you.
Resources for the Temperature Tutorial Video
OpenAI on Temperature https://platform.openai.com/docs/api-reference/audio/createTranscription#temperature
Statsig on Temperature long link