What do we value? The answer depends on the person you ask.
First, a little story. When I was 17 I was driving in the snow with friends. We were no more than 100 yards from our house when my friend lost control of the car and we smashed
into a tree. This was the only tree on the road for at least a mile. But we hit it. Head on.
More on that later.
I'm thinking more and more about Crowd Wisdom these days. Groupthink lets like-minded people come together in an environment of shared values, strengthening their beliefs and resolve as they find agreement around those beliefs.
We also know from extreme examples that groupthink can lead to disaster (Jonestown being one). And some fear that humans are becoming more siloed these days, huddling in their corners, fed only information they already agree with.
The notion of Crowd Wisdom then becomes an idea that feels more like Agreement, a concept I first encountered in Landmark Education's weekend seminar, The Forum.
Agreement is the concept that truths are created out of mass belief in the same idea, "The World is Round" being one of the strongest examples. 500 years ago you could barely find anyone who thought this. Today most of us know that the world is indeed a globe, though, thanks to the internet, fewer of us believe it than once did.
I say all this to make the point that the strongest tool humans have is the idea that we can work together to solve big problems. Ideally, this is the way we should live. But increasingly we're separated by ideologies that keep us from working together.
Meanwhile, Artificial General Intelligence (AGI), still conceptual yet seemingly around the corner, is on the way. AGI is the idea that AI can become 'human' by understanding the environment around it and behaving more like us.
The threat here is that AI could do great harm to Humanity given the opportunity.
Enter Crowd Wisdom. Again.
The idea that groupthink can benefit Humanity, especially in staving off the threat of AI taking over, rests on a core idea - our values. I'm not talking about religious or moral values; those may never come to define our humanity again. I'm talking about we as humans valuing Humanity.
Imagine applying the collective wisdom of a group all dedicated to the same outcome: the betterment of our lives. This is tricky, because even we humans can have very different ideas about how life can be better.
But imagine a common foe - is that even possible anymore? I think so. Until just a week or so ago I really didn't see AI as a threat. That changed when I realized that the way we worry about AI could itself create the threat. It's like driving into that tree. There were no other trees around, but somehow we steered right into it. Why? Because it was exactly what we were focusing on not hitting.
If we can align our values toward understanding the true threat of what AI could become, there's a chance we can use our collective wisdom to solve the issue before it ever occurs - avoiding the tree altogether.
How can we do this? Here are four points I think are worth examining:

1. Define our purpose as Humans. By stepping back from individual biases, needs and fears to a more general idea of being human, surviving as a species, we can all agree that survival is the No. 1 goal we all share.

2. Examine real-world scenarios where possible AI actions, deviating from our collective value of Humanity's core purpose, could cause harm.

3. Evaluate insights from the group and agree on a set of guidelines that must be followed in the development of future AI programs.

4. Developers adopt the guidelines, and ongoing refinement keeps AI systems aligned with human values as they evolve.
By leveraging Crowd Wisdom in its purest form, the idea is to tamp down the threat of misaligned AI values.
We may never agree on much as a species; however, we can all agree that being alive and having a fulfilling life here on the planet is something we all enjoy.
There may be a storm coming, but let's drive smart and safe, keep our eyes on the road, and avoid ever hitting that very avoidable tree.