In November I was on the opening plenary in the morning and then moderated a panel in the afternoon for the Financial Advisors of Australia Association annual conference. Later that day I was interviewed for the FAAA podcast.
Transcript of Artificial Intelligence and Financial Advisors Australia Association, Adelaide 2023 podcast.
[00:00:00.000] – Frank FAAA
Hello, and welcome back to the FAAA podcast, here in the beautiful countryside of Adelaide, actually in the convention centre here in Adelaide, Laurel Papworth, welcome to The Conversation.
[00:00:15.730] – Laurel Papworth
Thank you for having me.
[00:00:16.760] – Frank FAAA
Thank you for being here. Now, of course, you’ve been presenting today on a couple of sessions, mostly around the hottest topic on Earth at the moment, which is AI, a relatively new conversation, just over a year or so old, but been around for a long period of time. But you do all sorts of lecturing. You run courses on this. You’ve spoken today about it. You’ve shown up here to the podcast. And thank you so much.
[00:00:45.270] – Frank FAAA
Do you want to give the listeners a quick overview of you, your business, and what you provide so far?
[00:00:50.770] – Laurel Papworth
Okay, so artificial intelligence has been around for a long time, and I’ve been pretty committed to it, at least since 2008, if not before. But I do think that ChatGPT in particular, which I like to call ChatBro, has democratised AI, because up until last year, if I was teaching AI, I had to go through formulas and weights and biases and technical constructs in order for people to even start to play with AI. With ChatGPT, they suddenly get it, because that intelligent part of generative AI, the ability to jump the inference divide, the ability to be given a couple of data points and extrapolate out a whole strategy from that, is the thing that intrigues them, because it looks like it’s thinking. It’s not. It’s certainly not thinking heuristically like we do. It’s transformers and neural networks. But I can see how even my 78-year-old mother is like, Oh, I like this generative AI thing. And I’m like, Yeah, cool, you wouldn’t have liked it a year ago.
[00:02:04.960] – Frank FAAA
Yeah, there’s certainly been a lot of people getting involved because it’s a little bit of fun, a little bit interesting. It’s a bit of that variable reward of what’s going to pop up. And actually some of it makes sense, which is great. Let’s talk about some of the things that you’ve been talking about here at the conference. You and I were just talking about the industry as a whole. We might start with things like the ethics of AI, the big picture, before we get into the nitty gritty.
[00:02:33.590] – Laurel Papworth
But I think, given today, and this has only happened since the weekend, there’s been a break in all the major companies providing responsible AI direction. Microsoft and Apple folded their responsible AI ethics committees. OpenAI imploded a couple of days ago because their board, which is focused on humanity first, not on investors, really on what the ethics of AI are, hit a wall, basically. About 80% of the staff have talked about moving to Microsoft now, and then Facebook Meta decided this morning to dissolve their responsible AI committee. So my challenge going forward is: if the big companies with the big AIs, and Google as well, have folded their responsible AI teams and are no longer looking at ethics and responsible AI, then what do we know as consumers and end users about how ethically AI is going to behave going forward? And here’s the challenge. We’re not talking about a naughty child here. We’re talking about something that could decide to finish the human race, or that we should no longer have certain services, or might decide to dissolve certain things and only allow a small percentage of the population to have them, because AI will tend to follow what it’s learnt from us. And we’ve not always been the best teachers. Just put it that way.
[00:04:07.810] – Frank FAAA
Oooh. Big news that’s coming out. Those firms, of course, set those departments up a year or so ago before they launched.
[00:04:14.680] – Laurel Papworth
To be transparent. Now, there’s something in AI called System Cards, Model Cards and Datasheets for Datasets, which is… I know the audience likes to geek out.
[00:04:24.200] – Frank FAAA
I’m geeking out right now.
[00:04:25.380] – Laurel Papworth
You’re geeking out. They’re like nutrition cards or nutrition labels. When you buy a pack of Tim Tams, the label tells you the calories and the sugars and the carbs. You, of course, decide never to read those nutrition labels and eat them anyway, but that’s your choice. AI comes with these. For instance, when I’m working on social media newsfeeds, marketing people don’t always understand that if they go into the technical area of Instagram or Facebook, they can look at the system cards and the model cards on how the algorithms work. So the algorithms are documented, and the data sets, where the data came from, are documented. It’s one way to make sure that whatever system you’re putting in-house doesn’t have that hijacked, hacked Westpac data, and suddenly you’ve got unclean, what’s called grey or dark, AI data going in. We need to make sure that the data going in is clean and that it’s had all the approvals. One of the things system cards and model cards, those nutrition labels, do is say: this data is approved for this use. So the data is approved for my hospital and my GP to have access to. It’s not approved for finance companies or any old job application. They can’t come and have a look at my health records to determine whether I should get a job or not. I would suggest that at the moment we have very few regulations to protect our data at that level. Normally there’s something called clickwrap, and the courts do frown on it, where people are forced to click “I agree to the terms and conditions” to get past a wall, without reading the terms and conditions, without recognising or understanding what it is they’re giving away and what the implications would be. I would also suggest that as system cards and model cards for AI become normal, and we read them and check them like we do on packages of food, maybe we want to put that on-chain, meaning on the blockchain.
It just means in a digital vault with smart contracts, so that we can make sure it’s verified on-chain, make sure that somebody can’t come in and sneak additional material in or hide things, because AI on-chain can’t be taken out to a boozy lunch and convinced to do something with the data it shouldn’t do, like certain organisations and institutions in Australia have recently been caught doing and are now facing severe penalties for. AI just won’t allow it. You can’t convince AI on-chain to do something that you might be able to convince somebody in an organisation to do. It creates a verified ledger of what was set at the time, and if you make any changes, it’s going to be glaringly obvious.
[00:07:22.970] – Laurel Papworth
Yeah, and if you are a finance institution and you’re trying to look up somebody’s health records, the AI will say that information has not been released to you, end of. There’s no taking somebody out for a boozy PricewaterhouseCoopers lunch.
[00:07:37.620] – Frank FAAA
No names mentioned here, by the way.
[00:07:39.800] – Laurel Papworth
Yeah, you can edit that out. That’s fine. Not that it’s not. Anyway.
[00:07:44.340] – Frank FAAA
Not that it’s not known already. Okay, fantastic. So, yeah, lots of interesting points inside this conversation around where AI is heading in this space. But before we get into where it’s heading, because that’s probably the unknown, the dark hole, whatever it might be: how have you been working with businesses and with people, educating them on where AI has come from and where it is now?
[00:08:11.820] – Laurel Papworth
I’ve taken a multipronged approach. I had to look at the whole industry and then make some decisions. One area I focus on a lot is structured prompting, because it’s just a hot topic: how do I get better output by improving my input? I know we say a poor workwoman blames her tools, but obviously it’s usually the prompts. So even I sometimes prompt a ChatGPT, or any generative AI, it doesn’t have to be a GPT, and then go, Oh, that’s rubbish. And then I have to look at it and go, Actually, my prompt was rubbish. That’s why the output was rubbish. People want to know how to, for instance, roleplay or have the AI become a mentor for them. That takes more structuring than just going, Can you write me a finance plan? Can you write me an HR strategy? The second part I try to focus on is encouraging people to train their own AI, because it’s actually really easy to do now. I cannot believe I’m saying this, because a year ago it would not have been easy. It was, I’m sorry, but you need at least four engineering degrees and three knowledge acquisition psychology courses behind you. And now, of course, it’s like, yeah, download, install, start running. Fine-tuning an AI is still a skill, but if you’re willing to take the time to think it through, and if you are a subject matter expert, a passionate, unique, committed subject matter expert, fine-tuning an AI just takes thought. You think through your processes and then show the AI, because it recognises patterns: this is how I think. There’s something called few-shot prompting, which is where you say, Here’s an example.
[00:09:54.920] – Laurel Papworth
Actually, I have a bit of a joke. Students say to me, What’s few-shot prompting? And I say, Let me show you an example. And they go, No, what’s few-shot prompting? I go, Let me show you an example. It’s not a very good joke, but the idea is that if you show some examples to an AI, it’s able to say, “Yeah, I get it. I can do 500 of those now because I’ve seen what one example looked like, let me show you the rest”. Once they’ve trained wrappers or specific AIs, we also look at running a private model on your laptop, because people think it needs very powerful machines and high tech. But GPT4All, LLaMA, Stable Diffusion, all of these are free. They can be downloaded; they’re open source. LLaMA is actually from Facebook Meta, but it’s available for commercial use. And then you can start uploading your business documents, because it’s behind the firewall, right? So you’re not going to be thinking, Oh, it’s going to connect and start telling Facebook all my stuff. It’s behind the firewall, so you can upload your documents. Again, though, you do need to be careful, because if you’ve not been given permission to use that client’s information in your AI, you might be using it in ways that were not approved initially.
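To make the few-shot idea above concrete: a few-shot prompt is really just worked examples prepended to the new request, so the model can infer the pattern. This Python sketch is an illustration only (the client-goal examples are invented, not from the interview):

```python
def few_shot_prompt(examples, new_input):
    """Build a prompt from (input, output) example pairs plus a new input."""
    lines = []
    for inp, out in examples:
        lines.append(f"Input: {inp}\nOutput: {out}")
    # The final "Output:" is left blank for the model to complete.
    lines.append(f"Input: {new_input}\nOutput:")
    return "\n\n".join(lines)

# Two examples are usually enough for the model to pick up the pattern.
examples = [
    ("Client goal: retire at 60", "Category: retirement planning"),
    ("Client goal: save a house deposit", "Category: savings strategy"),
]
prompt = few_shot_prompt(examples, "Client goal: pay off the mortgage early")
print(prompt)
```

The resulting text would be sent to whatever model you are using; the model sees two completed pairs and completes the third in the same style.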
[00:11:14.940] – Frank FAAA
Which is exactly what we were talking about before: the terms and conditions, knowing your own terms and conditions, what permissions you’ve been given to use the data that you’re collecting.
[00:11:23.500] – Laurel Papworth
Yes, absolutely. You might think, initially, I just want to use it for something specific that the client agreed to: Yeah, that’s fine, I’ll give you that data and in return you’ll give me a financial plan. And then the next thing you know, you’re thinking, Oh, I’ll use this to personalise email marketing. But that personalised email marketing is based on data sets drawn out of the clients’ records, because that’s how you personalise it. So now that’s not something they agreed to. And the GDPR in Europe will be all over that, and other countries’ regulators as well. So yeah, there are some issues there.
[00:11:58.930] – Frank FAAA
And whatever happens with our Privacy Act next year, whenever they finish reviewing it and it comes out, that could actually be a problem as well.
[00:12:04.940] – Laurel Papworth
Yeah, absolutely. And this is one of the issues: once the AI has been trained, if it has grey data, dirty data, anything like synthetic data in it, you cannot clean it. Or you can, but it becomes extremely difficult. It’s much harder to go in afterwards. I guess it’s like going back to Web2 and thinking of an email list: you realise that out of your 5,000 names, 500 of them shouldn’t be on there, so you have to go through manually, find each of those, and verify that the other 4,500 are correct and appropriate. Who wants to do that? Not me.
[00:12:42.530] – Frank FAAA
That doesn’t sound like a fun way to spend the weekend.
[00:12:44.580] – Frank FAAA
Now, I want to go back to what you said before around the concept that AI will be, or can be, on your computer, disconnected from the internet. Can you talk a little bit more about how you might have it on your phone or your computer without that connection?
[00:13:01.940] – Laurel Papworth
I think there are two areas here. One is if you download it and put it on your laptop, or let’s say you put it on a server behind the firewall, and then you train the AI, the model that you’ve downloaded, on standard operating procedures or HR policies or anything. Staff could then query it without having to navigate through a taxonomic menu system: I need to go to the HR menu, then I need to go down to policies, and then they’re unreadable, right? If they could instead say, I don’t know, Am I allowed to date the cute guy in the cubicle next to me? it will go and check those HR policies. That’s a poor example. Can you think of another one? Let’s use another one.
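The “query your policies instead of navigating menus” idea rests on matching a plain-language question to the right document. As a toy illustration only (the policy text is invented), here is the matching step in Python; in a real setup the matched document would then be handed to a local model, such as a downloaded LLaMA, to phrase the answer:

```python
import re

def best_policy(question, policies):
    """Return the name of the policy whose text shares the most words with the question."""
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    def overlap(text):
        return len(q_words & set(re.findall(r"[a-z]+", text.lower())))
    # Pick the policy document with the highest word overlap.
    return max(policies, key=lambda name: overlap(policies[name]))

# Invented, behind-the-firewall policy snippets.
policies = {
    "annual leave": "Staff may take annual leave with two weeks notice to their manager.",
    "workplace relationships": "Staff must disclose any workplace relationship to HR.",
}
print(best_policy("Am I allowed a workplace relationship with a colleague?", policies))
# prints: workplace relationships
```

Production systems typically use embeddings rather than word overlap, but the shape of the pipeline is the same: retrieve the relevant text, then let the model answer from it.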
[00:13:44.270] – Frank FAAA
Well, I’m thinking of the concept of a client. Let’s say that you have provided some advice to… I know you don’t work in the financial advice space per se, but let’s say a financial advisor provides the client with some advice, and that might be around particular strategies or their fees and their product selection, etc, and there’s reasons why that relates back to their situation, puts them in a better position, etc, then if that client can log into the advisor’s GPT that’s just for them and say, Why was that recommended again? Would it be able to then pull up that information from that conversation and say, Oh, it was because blah, blah, blah?
[00:14:22.520] – Laurel Papworth
Yes. There is an issue with something called Chain of Thought, which is where the AI doesn’t always explain step by step why it made a recommendation, so you actually have to use a special prompt called Chain of Thought, which is just a fancy way of saying step by step. Otherwise you get: say I’m a finance advisor, should I be on Instagram, Facebook or LinkedIn? If you get the answer LinkedIn and then say, Why LinkedIn?, the AI goes, I don’t know, and it acts like a teenager: I don’t remember why I said that. But if you actually say, Should I be on Facebook, Instagram or LinkedIn? Tell me step by step your reasoning, it will say, You don’t have a visual product, therefore Instagram is not a good fit. Facebook is mostly blah, blah, whereas LinkedIn is B2B. You work very much in a B2B space, so that is why I recommended LinkedIn. So there is that issue with the exact example you’re giving, but there’s functionality within AI where the customer can log in and say, What products should I be using now? What should I be looking at, say, January next year? And if the AI is using a reasonably high, what’s it called, temperature, which means it’s able to extrapolate out, it can make a recommendation and then query you: Would you like me to rerun this in another couple of weeks to make sure that information is still correct? So it will continually update and make sure you’re getting the best advice. I’m actually doing some work with procurement at the moment, and one of the things we looked at was a case study where a factory in Taiwan burnt down and the only news clip was in Taiwanese. And the only person I know that knows Taiwanese is ChatGPT.
And it read that news clip and then understood the whole logistics: there’s a little chip made in that factory that goes into something larger, which then goes to China, which is put in something that then comes to Australia, where it’s put in farming machinery. And the AI was able to say, In six months, your warehouse will be empty of this particular device, therefore you should shift to this other one. When asked why, it didn’t really explain at first. But if you were able to dig in, it was very clear that it knew the factory fire was going to impact something six months down the track.
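The chain-of-thought prompt described above is, at its simplest, just a rephrasing that asks for the reasoning before the answer. A minimal Python sketch (the exact instruction wording is an invented example, not a fixed API):

```python
def with_chain_of_thought(question):
    """Wrap a question so the model is asked to show its reasoning step by step."""
    return (question + "\n"
            "Think step by step: list your reasoning first, "
            "then give your final recommendation on its own line.")

plain = "Should I be on Facebook, Instagram or LinkedIn as a finance advisor?"
print(with_chain_of_thought(plain))
```

The plain question and the wrapped one go to the same model; the only difference is that the wrapped version asks the model to produce its reasoning as part of the answer, which is what makes the recommendation auditable.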
[00:17:05.610] – Laurel Papworth
Even weather patterns can be brought in by an AI, because if you think about big data, or what we like to call too-much data, too much for a human being: its velocity, super fast; its volume, such a lot; its veracity, its truthiness, actually comes from the amount of data it has. So it’s not looking at outliers. “I once had a client that did this” is not really in its vocabulary. There are some really interesting things about mixing up real-time data sets with a GPT, which is pre-trained; the P in GPT means it’s already got a mass of data. Then you bring in live data, or live feeds, and it’s able to make incredible, many-data-point conclusions that our little brains can’t compete with. Or my little brain can’t; I don’t know about yours. I’m AI, average intelligence.
[00:18:09.080] – Frank FAAA
I’m not sure you can say that you’ve got a little brain. That doesn’t quite compute to me. Laurel, thank you so much for coming and chatting with us on the podcast. You do run courses, and as you said, you’ve been doing this a lot longer than most of the experts on LinkedIn who have been doing it for six months, since crypto died and they pivoted. But tell us a little bit about those courses, and how people can get involved if they want to get a bit more technical.
[00:18:34.930] – Laurel Papworth
Yeah, I have a course on artificial intelligence called AI Is My Copilot. Copilot means in Web3 what developer meant in Web2. So if you think about a Facebook developer or a web developer or an app developer in Web2, a copilot is the same kind of role around AI; it’s just another term. And it’s really a foundation course. I license that course to the Australian Institute of Management, and I run it several times a month in different locations: Sydney, Brisbane, Melbourne, occasionally Adelaide, occasionally Canberra, and quite often virtually as well. I’m also working with other organisations to provide industry-specific courses, and I’m looking to partner with people in different industries so that I bring AI to the table, they bring their subject matter to the table, and we find a way to ask, How is this going to work specifically in your industry? Because, as you would know, the finance industry is going to be very different from, say, hairdressing, or doctors and health care, or government.
[00:19:51.370] – Frank FAAA
Yes, 100%. If people want to jump onto your course, it’s the Australian Institute of Management, was that?
[00:19:56.530] – Laurel Papworth
Australian Institute of Management. I’m also talking to the FAAA about running a specific finance advice AI course with, actually, Jacqui.
[00:20:07.190] – Frank FAAA
Wonderful. Fantastic. Yeah, that’s good. That’s just amazing. Thank you so much for stopping by at the podcast booth and sharing your immense knowledge. I’m sure you’ve got plenty more that you didn’t share, but I really, really appreciate your time and thank you for stopping by.
[00:20:20.310] – Laurel Papworth
Thank you for having me.
The video of the plenary is available if you are a member of the FAAA. And if you are interested in marketing and AI, please go to my workshop.