005 Artificial Intelligence, Thought and Thinking
Artificial intelligence and human thought, the Cartesian theatre and Dennett, Socrates' elenchus and more
This episode is on the Cartesian theatre, Socrates' elenchus, GANs and tokens, alcohol and the Divine Masculine. Bit of a ramble really.
How artificial intelligence and human intelligence converge. Or do not. I wanted to start to tackle artificial intelligence, thought and thinking. We humans are not convinced we understand how we think, and have spent philosophical lifetimes debating and cogitating. (René Descartes' Cogito ergo sum, Dennett's multiple drafts model, or quantum consciousness?)
While AI may one day move from a sequential process to a concurrent one, the Divine Masculine will lurch toward a prompt-output approach while the Divine Feminine swims in deep waters. Both exist, both can be right.
GANs (generative adversarial networks) are similar to Socrates' elenchus – only by debating what we know and understand do we truly KNOW. In a GAN, the generator tries to fool the discriminator into thinking its image is not AI-generated. It builds muscle by going back to the drawing board and trying again. A pros and cons list.
Simone de Beauvoir and Jean-Paul Sartre, Camus and Baudelaire, Aldous Huxley, Carlos Castaneda and others took the Bacchanalian, Dionysian view that alcohol can clarify thinking. Or at least make for a good time while philosophising!
Resources are on my website laurelpapworth.com/alchemy, as well as my AI Is My CoPilot course (in Sydney, Brisbane, Melbourne, Canberra and virtual) – please go to the Australian Institute of Management website if interested.
This is a stream-of-consciousness recording. It has errors, as my brain attempts to compete with generative AI and suffers catastrophic failures (not enough tokens) and hallucinations (outside of ingested dataset). Hey ho.
Cover art by DALL-E for a change (steampunk clock).
Transcript of artificial intelligence and human intelligence:
[00:00:05.520]
2023 07 14 Alchemy of Innovation. So I’ve been thinking about the nature of thinking. And because you jump down a rabbit hole when you start to think about thinking, we end up in all sorts of problems – well, I did. So I’m going to take you through my process. We’re going to go on a journey together and see what we think about thinking. Oh, yes. And ChatGPT will come along as our companion.
[00:00:54.380]
So one of the questions I have, and I have many, because questions are thoughts, is: was René Descartes correct in saying Cogito ergo sum, “I think, therefore I am”? It’s never sat well with me. I understand the concept of having a manager, an executive in your brain that controls or responds to lower-level thoughts like, I’m hungry, I need a cup of tea, or I wonder if there’s any Tim Tams left. And then higher-level thoughts, such as: is a requirement for Tim Tams an addiction, or a prelude to an addiction? One can spend hours musing and pondering on the nature of Tim Tams. One could spend hours musing and pondering on many things. As I am AI, average intelligence, in a changing world of artificial intelligence, the nature of thought and of intelligence seems to be becoming more important.
[00:02:30.800]
If you don’t think that “I think, therefore I am”, Cogito ergo sum, is part of your path, then what is? Daniel Dennett, in “Consciousness Explained”, was not a fan of the Cartesian theatre: Descartes’ insistence on an overseer or manager controlling things. He talked about an infinite regress problem, something like our loop last week. If there’s a little person in your head watching your life, who is in their head watching their life? And so on. Now, strangely, I don’t have an issue with that. The idea that spirituality is a taxonomy, a hierarchy, is definitely not a new idea. Dante, Plato, so many philosophers have tried to explain the spheres. I like to think of it as the Lands at the Top of the Tree. Hat tip to Enid Blyton.
[00:03:58.080]
Dennett suggests that there are various aspects of cognitive processes that have a simultaneity and they don’t come together in a single central point. This makes for a very fractured consciousness that perhaps only those of us that swim regularly in the darkness of the Divine Feminine can comprehend. We work with multiple paths of consciousness. The model where all sensory inputs are processed in parallel and all paths are open to us at the same time.
[00:04:41.800]
This made me think of artificial intelligence and its insistence on the Divine Masculine of pushing forward, always forward, always in a direction to a goal. Never mind meandering, musings, scraps of poetry, and useless Post-it note thoughts that trip and fall into my brain and then wander out again in a drunken haze. By the way, I don’t drink, at least not alcohol, or not often. And yet there are philosophies that say alcohol is the way to think. Dionysian, Bacchanalian – the Existentialists liked to party, and party hard. Their thinking processes were clarified by alcohol, amphetamines, and orgies. Me, I stick with my cup of tea and my Tim Tams. They are vice enough. Was it the Persians who said one must debate everything twice, once sober and once drunk? Because in a closed loop, a closed environment, it’s extremely hard to ingest new data and to break our thinking algorithms in a new way. In the same way that artificial intelligence is limited by its very machine learning, we too are limited in our thinking.
[00:06:22.290]
If you’re unfamiliar with machine learning, it has to do with inference: the model must infer an answer from what it has learned. You’re probably already familiar with setting temperature in GPT. In ChatGPT, set the temperature to zero for semantic AI: search, retrieve, and cite. Suitable for lawyers, academics, accountants, and those with no imagination. I’m trolling you, by the way. A temperature of one – or, if you’re using the OpenAI API, up to two – is maximum creativity. If you set the temperature to zero, think of it as somebody in a suit, skirt and blazer, sensible court shoes, dark-rimmed glasses, and a briefcase with notes in it. That’s temperature-zero GPT. If you crank the temperature up to one, think of it as a girl on the beach in a floaty boho dress with flowers in her hair, singing songs, picking shells, and not focusing on anything other than the creative experience of where she’s at. GPT at temperature one is very woowoo, very creative, fabulous for limericks and children’s bedtime stories. Less good for legal cases. And our thoughts vary between zero and one as well. We all know that person who sets the temperature to one and then cranks it higher, and we say drama queen. They live into a deep emotional existence of their thoughts. Emotion and thought are not separate for them. They are combined. What they feel is their knowledge. What they know, they feel. We also know people who set the temperature to zero all the time. If you told them, “The end of the world is nigh, you’re going to die”, they would correct the spelling and the grammar of “you’re”, making sure the apostrophe was in the right place. I have a deep love for both types of thinking, both types of people. But it’s not how I think.
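The suit-and-briefcase versus flowers-in-her-hair contrast can be sketched with the softmax function that turns a model's scores into token probabilities. This is a toy illustration of the idea, not OpenAI's implementation (and since dividing by a temperature of exactly zero is undefined, the sketch uses a small value to stand in for "temperature zero"):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale scores by 1/temperature, then softmax into probabilities.
    Low temperature sharpens the distribution (suit and briefcase);
    high temperature flattens it (flowers in her hair)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.2]  # model scores for three candidate next tokens

cool = softmax_with_temperature(logits, 0.1)  # near-deterministic
warm = softmax_with_temperature(logits, 1.0)  # the default
hot = softmax_with_temperature(logits, 2.0)   # "maximum creativity"

print(cool)  # the top token takes nearly all the probability
print(hot)   # the probabilities flatten toward each other
```

At low temperature the same token wins almost every time; at high temperature the also-ran tokens get a real chance, which is where both the limericks and the fibs come from.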
[00:09:11.280]
And while we can frame it in the terms of Id and Ego, the idea that we step into our thoughts and step out of them seems the most useful analogy for me. There is a river of consciousness. I swim with it. I swim against it. I swim with it again. As AI surges ahead, taking the Divine Masculine path of getting to an outcome, its neural network occasionally makes an error, and then it must undertake backpropagation: a mathematical method for training artificial neural networks by calculating the gradient of the loss function with respect to the weights and biases of the network. It then updates those weights and gradually improves the network’s performance on a specific task.
[00:10:14.320]
But now, what does that mean in plain English? Let’s say that the network is being trained and it identifies, or infers, that it has reached an error. The outcome is incorrect. It must backpropagate. It must go back through its own thinking and ask every node: correct or incorrect? And each node, like an agent, like a bot, is responsible for one part of a bigger piece, of a bigger picture. Eventually, it finds the node that says, “Oops, I was wrong”. The AI has now identified the incorrect node; it changes the weight of that node towards a specific outcome, reruns the function, and hopefully achieves its goal of a correct outcome. Backpropagation needs very clear and specific centralised processing. It’s easy: you’re heading towards a goal, it was wrong, backtrack, fix, progress again. But that’s not how human consciousness works. We jump around, we change our goals. Sometimes we see a node is wrong and we decide to go with it anyway. So we change the outcome and decide what was wrong is now right. It must be. Machine learning, human learning, our acquisition of knowledge, how we think.
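That "find the wrong node and nudge its weight" loop can be sketched with the smallest possible network: a single weight trained by gradient descent. A toy sketch of the mechanism, not any production framework:

```python
# The smallest possible "network": predict y = w * x, and learn w by
# backpropagation. The gradient is the node being asked "correct or
# incorrect?" -- and by how much, and in which direction to move.

def train(x, y_true, w=0.0, lr=0.1, steps=50):
    for _ in range(steps):
        y_pred = w * x                     # forward pass: input flows to output
        loss_grad = 2 * (y_pred - y_true)  # d(loss)/d(prediction), squared error
        w_grad = loss_grad * x             # backpropagate via the chain rule
        w -= lr * w_grad                   # nudge the weight toward a correct outcome
    return w

w = train(x=2.0, y_true=6.0)  # the true relationship is y = 3x
print(round(w, 3))            # converges to 3.0
```

Data flows forward, the error flows backward, and the weight is corrected a little each pass: backtrack, fix, progress again.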
[00:12:15.380]
Many years ago, I worked as a barista in a restaurant. I actually very much enjoyed that job. I like System 2 thinking. I organised my little area to the nth degree. I’m a systems person, trust me. I like to crash timelines. If I make this coffee, and this tea, and this coffee, I can do three jobs. But if I do this tea and this coffee, I can only do two, and then wait to do the third one. So I’m continually processing what’s called a tree of thoughts. This is something that native ChatGPT does not do without very sophisticated prompting, although I suspect with LangChain we might reach that prompting sooner rather than later, as the templates become the norm. Let’s say you’re taught how to make a mochaccino. You’re taught how to make a decaf latte, and you’re taught how to make a long black with a dash of soy. That’s structured learning. Good job: you learned three drinks. And then someone asks for a soy decaf mochaccino. We use a tree of thoughts to consider the proposition. Soy comes as a dash, a latte has a lot of milk. A mochaccino has chocolate; a long black doesn’t. And we’re able to make an approximation. We’re able to infer, we’re able to be creative and make the drink, and then check with the coffee drinker, if that’s the correct term. Is this what you wanted? Is this okay? Does this taste how you expected it to taste? We inferred from our previous experiences, and this is what generative AI does.
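The barista's inference can be sketched as attribute composition: decompose the three learned drinks into parts, then assemble a never-made drink from parts seen before. The drink names come from the transcript; the helper function is purely illustrative:

```python
# The barista's three learned drinks, decomposed into attributes.
known_drinks = {
    "mochaccino": {"espresso", "milk", "chocolate"},
    "decaf latte": {"decaf espresso", "milk"},
    "long black with soy": {"espresso", "water", "dash of soy"},
}

def infer_drink(requested_parts):
    """Approximate a never-made drink from parts seen in known drinks."""
    seen = set().union(*known_drinks.values())
    recipe = {part for part in requested_parts if part in seen}
    unknown = set(requested_parts) - seen
    return recipe, unknown

# "Soy decaf mochaccino": every part was seen before, so we can infer a recipe.
recipe, unknown = infer_drink({"decaf espresso", "milk", "chocolate", "dash of soy"})
print(sorted(recipe))
print(unknown)  # empty set: nothing unseen, though we'd still check with the customer
```

No drink called "soy decaf mochaccino" was ever taught; the approximation comes entirely from recombining prior experience, which is the generative move.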
[00:14:26.670]
Not semantic AI, but generative AI. It generates answers based on tokens and inference. Of course, if it doesn’t taste how the customer was expecting, then we need to use backpropagation: go back through our neural network and find the one piece that was missing. Whoops, not enough chocolate. Uh oh, forgot the decaf. And then the customer’s happy. In a neural network, data flows in one direction, from input to output, and the only time it goes back is for backpropagation. Us humans, we jump around.
[00:15:08.620]
And if we have alcohol or drugs or some other stimuli, including books, movies, traditional media news, doomscrolling on Facebook, our thoughts and our patterns change. If we are learning from experience, both humans and artificial intelligence, then we can improve from experience. In humans, this is a natural cognitive process, while in AI, it’s a matter of refining algorithms based on new data. Our soy decaf mochaccino. I think one area where artificial intelligence excels and we struggle is pattern recognition. And there are many, many, many examples of AI recognising patterns that humans, even specialists in those fields, had not yet identified. Some AI systems, particularly those used in robotics, have been designed to mimic human senses. For instance, computer vision AI attempts to replicate human visual perception. Then there are AI systems designed to process auditory or tactile information. It’s a whole world of haptics out there. Let’s talk about computer vision for a moment. I think of generative adversarial networks – they’re called GANs, and they’re predominantly used in image identification, object identification and image generation – as being something like the Socratic elenchus. The generator becomes skilled at producing realistic data. And the discriminator, that’s the adversarial agent, becomes better at spotting fakes. So this is combative. The generator says, “Look at this”. And the discriminator says, “No, that’s a fake. You can do better”. Socrates’ elenchus used questioning and cross-examination to expose flaws and inconsistencies in the debater’s argument, forcing them to revise and refine their beliefs. Is this not how the generator in a GAN learns to create better and better samples, based on the feedback from the discriminator agent? Through the process of questioning and refutation, individuals might realise inconsistencies in their beliefs, or perhaps not.
And as a result, modify their viewpoints and arrive at a more refined understanding. It is the very process of the debate, elenchus, that allows philosophical truths to become known. It’s also the reason why the esoteric nature seems so esoteric. To be TOLD a truth is not to KNOW the truth. To discover the thought for yourself is to know it. There’s an aha moment – the GAN moment. By pursuing a tree of thoughts, role playing – yes, I should do this; no, I shouldn’t; yes, this is right; no, it’s not – by fighting with ourselves, using an adversarial neural network, we can sometimes reach an aha moment. I’m not convinced this is what happened to Archimedes, who supposedly recognised displacement in water while he was in the bath, leapt out and ran his soapy naked body through the streets of Syracuse, screaming “Eureka”. By the way, I think that might be an urban myth. What is that gap, though? The gap of debating with yourself, fighting with yourself, and then going for a walk, doing some exercise, playing with the dog, playing with the children, making dinner. And then you have a Eureka moment. You weren’t thinking about it.
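The generator-versus-discriminator debate can be sketched with a deliberately tiny GAN: "real" data is just numbers near 5, the generator is a single learnable number, and the discriminator is a one-dimensional logistic classifier. A toy under those assumptions, not a real image GAN:

```python
import math
import random

random.seed(0)  # deterministic toy run

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy GAN: "real" data clusters around 5.0. The generator is a single
# learnable number theta; the discriminator is D(x) = sigmoid(a * x + b).
theta, a, b = 0.0, 0.0, 0.0
lr_d, lr_g = 0.05, 0.5

for _ in range(3000):
    real = 5.0 + random.gauss(0.0, 0.1)  # a sample of real data
    fake = theta                          # the generator's current attempt

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        err = sigmoid(a * x + b) - label  # cross-entropy gradient w.r.t. the logit
        a -= lr_d * err * x
        b -= lr_d * err

    # Generator step: move theta so the discriminator calls it real.
    # The discriminator's refutation is exactly what teaches the generator.
    err = sigmoid(a * theta + b) - 1.0    # the generator wants the label 1
    theta -= lr_g * err * a               # chain rule through D into theta

print(round(theta, 2))  # drifts from 0 toward the real data around 5
```

The generator never sees the real data directly; it learns only from the discriminator's "no, that's a fake" signal, which is the elenchus in miniature.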
[00:19:13.100]
You weren’t in an adversarial position. You weren’t even in the Cartesian theatre of thinking about thinking, about thinking, about that thought. It just came to you. AI doesn’t have that creativity in that way. Remember, it’s following a single path, and it’s important to understand the difference between knowledge and intelligence and spending tokens. For example, if I say to you that blah is 1, blah blah is 2, and oh blah is 3, and then ask you, what is oh blah, blah blah, blah blah? You would respond, after a moment’s thought, with 3, 2, 2. But if I asked you what oh blah, blah blah, blah blah means, you don’t know. In many ways, that’s how artificial intelligence works. It spends tokens to calculate the pattern. A token is more or less a word, unless it’s a foreign word or an unusual word, in which case it could be two tokens. But it spends those tokens working out what oh blah, blah blah, blah blah means. In numbers, not in knowledge.
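The blah-counting game is literally a lookup table, which makes the "numbers, not knowledge" point concrete:

```python
# The transcript's made-up language: tokens map to numbers, no meaning attached.
vocab = {"blah": 1, "blah blah": 2, "oh blah": 3}

def answer(phrase_tokens):
    """Pattern-match each token to its number: calculation without comprehension."""
    return [vocab[token] for token in phrase_tokens]

print(answer(["oh blah", "blah blah", "blah blah"]))  # [3, 2, 2]
```

The lookup produces the right answer every time, yet nothing in it could ever say what "oh blah, blah blah, blah blah" means.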
[00:20:41.060]
When I’m constructing prompts for ChatGPT, I start with a base prompt, also known as a system prompt. This is the deeper contextual knowledge on which I’m going to build the premise of the prompt and the output of the query and answer. So I may say: I want to build the context to discuss Apple Computer. This is their revenue. These are their products. This is the number of employees they have. These are the court cases against them. These are their patents. These are their locations – all the contextual information. That’s my base prompt. The context is broad and yet specific enough to set the scene, like setting a stage for a play. I don’t assume that my thinking has been accurate, so I must check the base prompt. But once I’ve done that, I can use it for every query for that company going forwards. I ask ChatGPT to summarise it or reframe it or something.
[00:21:44.070]
My second prompt is the immediate context. I’m running an Apple event. I’m considering investing in Apple. Today, I’ve seen a job advertised at Apple Computer. I want to discuss it with you, GPT. So the base or system prompt is the big picture; context is getting closer to what I actually want to talk about. I call this two steps back, one step forward. It’s like the tree of thoughts, when you make a column of pros and cons – why would I take a job at Apple, and why wouldn’t I? – before you ask ChatGPT to help you with it.
[00:22:25.430]
Now, I can upload the job description and ask ChatGPT to tell me the best person for that job. Then I can upload my CV and ask ChatGPT: what are my strengths and what are my weaknesses for the job we just discussed? Then I can ask ChatGPT to rewrite my CV, staying factual – we don’t want the temperature to be too high for the job at Apple. In between each prompt, all on the same chat thread (don’t create new threads, or you’ll have to do the whole thing all over again), I check: is this correct? Are we on the right track? So once ChatGPT understands what Apple Computer is, sees the current context that there is a job available, reads the job description, and then rewrites my CV appropriately for Apple, the next thing I can do is ask it to role-play three candidates in an interview. This is tree of thoughts. I’m not just asking for some questions for the interview. I’m actually asking for a role play so that I can see the kind of candidates that the HR or recruitment manager would be considering for that role. Perhaps ChatGPT will choose someone with a broad knowledge of all of the requirements. Then another candidate has in-depth knowledge in specific areas, but not everything. And then another one is the surprise, the unique. You can always ask ChatGPT to regenerate with more unique, more unusual answers. When you’re copiloting thought processes with artificial intelligence, you must remain aware of how it’s different from you, how its neurodiversity matches your neurodiversity.
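The whole prompt-stacking workflow, from base prompt to role-play, can be laid out in the OpenAI-style chat message format. The Apple facts, job details and CV here are placeholders, not real data:

```python
# The episode's workflow as one chat thread (OpenAI-style message format).
# Every <...> is a placeholder the reader would fill in themselves.
thread = [
    # 1. Base / system prompt: the broad contextual stage.
    {"role": "system", "content": "Context: Apple Computer. Revenue: <...>. "
                                  "Products: <...>. Employees: <...>. Patents: <...>."},
    # 2. Immediate context: two steps back, one step forward.
    {"role": "user", "content": "I've seen a job advertised at Apple. "
                                "Here is the job description: <...>"},
    {"role": "user", "content": "Here is my CV: <...>. What are my strengths and "
                                "weaknesses for the job we just discussed?"},
    # 3. Check between each step: are we on the right track?
    {"role": "user", "content": "Is this correct so far?"},
    {"role": "user", "content": "Rewrite my CV for this job. Stay factual; "
                                "keep the temperature low."},
    # 4. Tree of thoughts: role-play, not just a question list.
    {"role": "user", "content": "Role-play three candidates being interviewed: "
                                "a generalist, a specialist, and a surprise pick."},
]

for msg in thread:
    print(f"[{msg['role']}] {msg['content'][:60]}")
```

The point of keeping it all in one list (one thread) is that each message builds on everything before it, so the CV rewrite and the role-play inherit the base context for free.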
[00:24:32.740]
Here’s the final thing I want to point out: that perhaps thoughts come from outside of our mind, in the same way that prompts come from outside of AI. And yes, I’m aware that a super AI or AGI is able to create agents or bots to prompt it back. And maybe that’s what we are. Our relationship with other people, our relationship with the so-called real world – it exists to prompt us, to make us think. Not just base-level thinking of I’m hungry or I’m tired, but the deeper thinking. Not the emotional thinking of I’m angry, I’m upset. Deeper.
[00:25:14.210]
I’m a big fan of Sir Roger Penrose, the physicist, who I think is in his 90s now and still going strong. I love his videos on YouTube. He talks about quantum consciousness, or Orch OR (orchestrated objective reduction), which involves quantum processes occurring within the brain’s neurons. Maybe our minds are inextricably linked to the fundamental workings of the Universe. Quantum entanglement means a connection between me and a mote of light somewhere far across the universe. Does it control me? Do I control it? Are we connected in a way that is meaningful? I don’t know, but I like the idea of it. Thoughts that come from outside of our consciousness we have no control over – or eventually they’ll give up. I want a cigarette. I want a whiskey. I want to chase dreams that hurt me eventually. At some point, those thoughts background themselves; I don’t think they ever go completely.
[00:26:20.620]
If we don’t have control over our thoughts, what do we have control over? I don’t know. I’m learning, struggling, I guess. I would love to be as confident as ChatGPT, even when it’s fibbing like a five-year-old because the temperature is set too high – especially when it’s fibbing like a five-year-old. And yet that confidence is so often humanity’s undoing.
[00:26:47.130]
We must think about thinking. We must examine those thoughts. If you ever find yourself overthinking, join in with humanity. Coming back to alcohol: Simone de Beauvoir said, “in songs, laughter, dances, eroticism, and drunkenness, one seeks both an exaltation of the moment and a complicity with other men”. The freedom and joy and sense of shared connection with others come from that existentialist belief that we must grasp our humanness and own the agency within our own being. I’m not an existentialist by any means, but I do believe in co-creation, co-responsibility. Charles Baudelaire, the renowned French poet, said, “One should always be drunk. That’s all that matters. But with what? With wine, with poetry, or with virtue? As you choose, but get drunk”.
[00:27:53.480]
What’s he talking about? Probably that our thoughts have a transformative power. If we grasp them, we own them, and we experience them. Do not run away from your thoughts. Dive deep. Go into the dirt. Experience them. I’ll have more to say on this, particularly around Maryam de Magdalena and her gospel that walks us through the afterlife and makes us face those thoughts which we ran away from in life.
[00:28:26.910]
But not today, as I have spoken too long already, on thinking, creativity, inference, alcohol, existentialism, and other matters that are crowding my brain. Thank you for joining me today. And remember, stay human.
List of Resources: (coming soon)
- Socrates' elenchus
- Dennett's Consciousness Explained
- René Descartes link
- Simone de Beauvoir quote
- Spirituality and alcohol articles