The A.I. Dilemma— a force for good, or catastrophe?
50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control A.I. Sobering thought I know, but what can we do about it?
I have been watching a thought-provoking video presentation given by Tristan Harris & Aza Raskin, filmed at a private gathering in San Francisco on March 9, 2023. The presentation was so compelling that I have already listened to it three times— I have embedded it at the bottom of this page for your reference.
Tristan Harris was a Google Design Ethicist, and is now the Co-founder & Director of the Center for Humane Technology (CHT). You may be familiar with his work through The Social Dilemma, or may have viewed his TED Talks. Let’s just say he is well versed in technology and the importance of consciously and ethically shaping the human spirit and human potential.
Now back to that video…
Tristan & Aza demonstrate how existing A.I. capabilities already pose catastrophic risks to a functional society. At present there are no safeguards in place to curtail tragic outcomes for society— instead there is a global arms race for domination, no matter the potential cost!
Both presenters were mindfully aware that not everything A.I. is bad, and in fact there are miraculous positives and generous outcomes that can improve human life on Earth. But the deep concerns raised focussed on A.I. not being deployed responsibly, and as Tristan & Aza note— 50% of A.I. researchers believe there's a 10% or greater chance that humans go extinct from our inability to control A.I.
The Call for a Moratorium
On March 29, 2023, Elon Musk and a group of A.I. experts signed an open letter calling for a moratorium on developing artificial intelligence (AI) systems more powerful than OpenAI’s recently launched large language model (LLM), GPT-4. The letter called on all AI labs to immediately pause, for at least six months, the training of any systems more powerful than GPT-4.
[If ChatGPT is new for you, you may wish to check out this video to come up to speed].
What’s the concern, you may ask? LLMs are self-learning; they develop their own theory of mind— becoming more intelligent by the minute. However, there is a grey area in how to ‘train’ A.I. to evolve its intelligence from both a moral & ethical standpoint. In fact, new capabilities are emerging with A.I. every day, yet there are few compelling explanations for why such abilities emerge. A.I.’s intelligence is scaling at such velocity that it has become impossible to predict or understand the future of these emerging capabilities.
Even as I write this article today, Geoffrey Hinton, the man widely considered the “godfather” of artificial intelligence, has chosen to leave Google— with a message sharing his concerns about potential dangers stemming from the same technology he helped build. Resigning from his role has enabled Geoffrey to speak more openly about the risks.
Where to from here?
As the self-learning capabilities of A.I. are currently an unknown quantity, we must presume their public deployment unsafe until proven otherwise.
However, believing that global players will take the moral high road and slow down the public deployment of LLM A.I.s of their own volition is most probably a pipe dream. There is too much money, power, control & the tantalising lure of world domination at stake— all in the hands of a few.
So how do we close the gap between what is happening & what needs to happen? I agree with the sentiments of Tristan & Aza— it is up to we the people to be the drivers of change.
We need to be the instigators of conversations within our communities, and create tsunamis of global attention discussing the risks and potential deleterious fate of the Human race.
Through a groundswell of awareness & understanding, collectively we can find solutions and safeguards that dramatically reduce the risk, whilst flourishing the potential benefits. But this conversation will be driven by we the people— you, me, friends, family, community— until a critical mass of millions & billions of souls choose a more compassionate reality.
Within the spectrum of our world we have on one side the Artificial Intelligence of the computing mind, and at the other end of the spectrum the infinite, enduring & expansive intelligence of Divine Heart-Mind. We have within us the capacity to bridge the opposition of the two forces and flourish our Human Radiance in co-creative capacities with the field. The question is, what do we choose?
Just like the unfolding lotus at the top of this page, emerging out of the murky waters and blooming in vibrant synergy with nature’s wisdom— we too have the capacity to rise above the ugliness of this entropically destructive moment, and syntropically catapult a more awakened potential.
As I shared within this month’s Flourish Women’s Circle— “When the Grandmothers speak, the Earth will Heal”—Hopi prophecy. And within that vein of feeling, when we speak through the intelligence of our Hearts, we transcend the confines of materialistic science & space/time. We have the capacity to make quantum leaps in our collective consciousness.
As they say, necessity is the mother of invention— may the Great Mother hear our voices and guide the way.
Important Note: The above video was created before the release of GPT-4— hence the call to action is greater now than ever.
For an expanded version of this article please visit my Journal Post.