ChatGPT's Makers Say AI Could Surpass Humanity In Next 10 Years

The creators of #ChatGPT say #AI could surpass humanity in most domains within the next 10 years as 'superintelligence' becomes more powerful than any technology the world has seen.

This seems to be a shared sentiment around AI right now: that at some point in the not-too-distant future, AI could pose a great threat to our way of life.

But is that future much closer than some folks suggest?

With that question as context, these two scenarios seem more likely to me:

  • The threats and risks AI poses to many facets of human life are already here, not waiting in some distant future;

  • AI challenges were already in play before now; it's just that more people outside Trust & Safety (T&S) are starting to wake up to them.

The focus right now, in terms of ethics and policy, seems to be on AI's safety guardrails and regulation.

While guardrails and regulation are key, they're far from enough, and they'll take time to develop, debate, and implement.

Not to mention that regulation does not necessarily equal enforcement. You can have all the regulations you want, but who's going to enforce them? So enforcement of AI regulation will also need to be debated, determined, and developed.

Things like regulation, let alone enforcement, tend to take a very long time to manifest. Case in point: social media regulation in the US is *still* hotly debated with little regulation to show for it, and social media has been with us for 15 to 20 years or more, depending on the platform.

A call for AI literacy

In addition to guardrails and regulation, I'm an advocate for expanded AI literacy across the board; not just in commercial and technology settings but well beyond. Otherwise, how can we "get there" if people don't understand the nature of AI's risks to individuals, societies, economies, nation-states, and yes ... even humanity?

The protecting humanity challenge

Though protecting humanity might sound like fluff or fear-mongering, it is a necessary focus for many reasons beyond the scope of this post.

But I concede that such a goal, however noble, is difficult to pursue in commercial environments and socio-economic settings that prioritize (and reward) machine-based predictive outputs and efficient, quantifiable processes over nuanced human outputs and organic processes.

More balance is needed

Though we can and do benefit enormously from cybertechnologies like AI, we need a more balanced approach to identifying and explaining AI's risks and challenges as we move forward.

But creating more balanced approaches in the landscape of AI and digital policy starts with developing ethical and policy frameworks that don't overvalue AI and technology at the expense of organic human capability.

Fostering policies that nurture and cultivate ethically sound human-machine collaborations -- governed by human-centered values -- is the key to that balance.

Until next time,



--
Mayra Ruiz-McPherson, PhD(c), MA, MFA
Advancing Humanity In An AI-Enabled & Media-Laden World
Cyber/Media Psychologist (dissertation in progress)
