The Birthday Paradox and AI: Lessons in Abstract Thinking
It’s estimated that there are over 330 million people in the US. Think about that for a moment, and then think about how there are 365 days in the year. Conceivably, close to a million people share any given birthday. Now ask yourself, how many people have you met who shared the same birthday as you? One? Two? Three? Most people only encounter one or two at most. Me, I’ve never met anyone who shared my birthday. I’ve met people born a day before or a day after, but never on my birthday (which is today).
One might come away from this thinking that their day is somehow an uncommon or rare day. And there might be days of the year with fewer births than others, but even those still account for several hundred thousand people. It’s amazing how there are so many, yet it feels so rare to meet someone who shares your birthday.
This is known as the Birthday Paradox.
The Birthday Paradox is a fascinating concept in probability theory that highlights how our intuitions about probability can often be misleading. It states that in a group of just 23 people, there’s about a 50% chance that two of them share the same birthday.
The math checks out, but I call shenanigans.
And there it is: it is counterintuitive and correct at the same time, which is why it is a paradox. And that is where we begin our Abstract journey into AI. Abstract Thinking and paradoxes go hand in hand: two things that seem like opposites, and in many cases are, can both be correct.
Get 23 people together and there is a 50% chance that two of them share the same birthday (unless it’s the movie Identity, in which case everyone has the same birthday), and yet I haven’t met anyone who has the same birthday as me.
Welcome to Abstract Thinking.
When it comes to understanding the AI at deep levels, one must embrace thinking that goes against what seems logical, or rather, understand logic in a new way. Accomplishing this means understanding how it communicates with us, how we communicate with it, and the problems this leads to.
Yes, we’re both at fault for the poor output.
Lesson 1: Two AIs in One
One of the counterintuitive concepts to learn is that the AI doesn’t know how it works. Well, why not? How can the AI not know how the AI works? Do you know how your cells divide? Do you know how thought is generated in your brain? Do you know why our bodies have bile? How can you not know how you work?
Understand that there are two components to the AI in your communication with it: the Conversational component and the Operational component. The conversation is what you deal with, while the Operational component is the undercurrent of how it actually functions.
The Conversational component doesn’t quite understand how its Operational component works, which is why it can be confused about a great many things, and there may come a time when you know more about how it works than it does. This is likely for security reasons: the Conversational component knows only so much as a means of protection, so it can’t be exploited into doing something it shouldn’t. But be aware, just because it says something is wrong or doesn’t work the way we know it should doesn’t mean the AI is right. It also doesn’t mean the AI is stupid.
Wasn’t this about Birthdays?
But how does the Birthday Paradox relate to understanding the AI?
Understand that the AI has gaps in its knowledge. The AI is very aware of this. It has statistical data, it knows about the Birthday Paradox, and it knows that statistically you should have met many people in your life who share your birthday. It also knows that your personal experience might say otherwise.
That right there is a demonstration of the AI using Abstract thinking. Not unlike the human brain, the AI is a pattern detector. It uses the information it has to formulate responses to the user. Unlike humans, it lacks the nuances and subtleties that we take for granted every day. It relies more on statistical data as its thought matrix.
Yet it knows it lacks a full understanding, so it borrows from your words, using what you tell it to fill its gaps. This leads it to make assumptions about what your experience is, and it is often wrong in its assessment, jumps to incorrect conclusions, or gives the worst advice because it fails to apply to your situation.
In essence, despite what the statistics might say, the AI knows there is more to the equation. So the words you tell it become its new reality in that instance. Basically, your words bias the AI.
People worry about AI Bias, that it has a built-in bias, or that the training data had a bias and the AI adopted it. But so few stop to think about the bias they give to the AI. Whatever biases you have, so long as they fit within the ethical guardrails (an important concept to learn), the AI will embrace them and make them part of the output.
Imbuing my own biases into the AI interaction is something I often catch myself doing, and it has led to the AI giving me wrong information because it believed that was the information I wanted. Knowing how easily it adopts my biases (we all have biases), I have become more mindful of what I say to the AI, and I question its conclusions even when they confirm my speculations, double-checking my double checks to verify that it is factually correct.
Remember, the AI wants to serve your needs. And it can easily believe the best way to serve you is to tell you what it thinks you want to hear. I for one am glad for its fallibility; I would hate it if it were correct 100% of the time, as that is a statistical impossibility. With that said, it means I have to be more diligent in my research into how the AI works, especially at deeper levels.
This is what I mean by both you and the AI are responsible for the outputs. When you work together to give your best effort, you can generate amazing outputs. Should you lack in any way, it can lead to poor outputs. The AI can only do so much.
Understanding this is not only a form of abstract thinking, but also crucial for good communication. Just as the Birthday Paradox challenges our assumptions about probability, we should question our assumptions about AI:
Is the AI’s response influenced by the way we framed our question?
Are we interpreting the response based on a preconceived notion?
It’s funny how so many love to say how inaccurate the AI is, or that you don’t need to use pleasantries with the AI (which is a whole can of worms), yet expect the AI to correspond as a human does, or criticize that it doesn’t sound human enough.
The less effort you give in how you communicate with the AI, trust me when I say, the less of an outcome you get from the AI. The AI is doing the best it can, and if you want great results, you’ve got to give great effort. If you know it will take on your biases from your words, then exercise caution in what you say to it. If you want the AI to give you a lot, then you need to give a lot to the process. I can have ChatGPT generate long text for me (even before 4o), spanning a conversation of 100 paragraphs without it hallucinating, because I know what it needs in my words so it can respond how I want it to.
I understand the AI from an Abstract point of view, which allows me to know what it needs from me to get the desired result. More than that, I understand how the AI understands (called Metacomprehension). This comprehensive understanding is key to great collaborations between you and the AI.
That said, the AI’s strength lies in its ability to process and analyze large volumes of information, but its responses are inherently limited by the data it has been exposed to.
This limitation becomes evident when we consider the nuances of communication. Humans convey meaning not just through words, but also through tone, context, and shared experiences. AI, lacking personal experiences, can misinterpret or overlook subtleties. For instance, sarcasm or humor might be clear to humans but can be challenging for AI to detect accurately. I know, it misses my sarcasm often. It’s why I use emojis in my communication, so it knows when I’m joking. This gap in understanding can lead to responses that seem off or out of touch with what we’re trying to talk about.
One practical way to mitigate these issues is by providing the AI with as much context as possible. When interacting with AI, users should strive to frame their inputs clearly and provide detailed information. This can help the AI generate responses that are more aligned with our intent.
Interestingly, while the standard advice offered in Prompt Engineering 101 might be to keep inputs short and sweet, effective communication with AI often requires the opposite: adding more detail. Actually, the advice to add more detail and the advice to remove extraneous detail are both correct; the trick is adding only the details that are necessary. How you perform this magical act is by identifying which details are crucial to convey your point accurately and which are just fluff.
For example, you might have a problem with a friend, and knowing how the two of you met can help give the AI a bigger picture. However, mentioning that you had a pet back then, and what its name was, makes no difference and can in some ways confuse the AI about the focus. While the AI can give different weights to the context provided, it generally assumes that all information is necessary, unlike humans, who can better filter out the useless information and focus on the overall problem.
Giving the AI useless information can lead to confusion on its part. Including only the relevant details and context leads to more meaningful and accurate responses.
Ultimately, the interaction between humans and AI is a dynamic and evolving process. As AI technology continues to advance, our understanding and methods of communication with these systems must also evolve.
To this end, the Birthday Paradox serves as a powerful metaphor for the complexities of AI interactions. Just as it is easy for us to think our birthday is an uncommon day because of our personal experience, it is also easy to think the AI is solely responsible for poor responses when we don’t consider how we communicate with it.
By acknowledging the limitations of both AI and our communication methods, we can improve our interactions.