ChatGPT's Makers Say AI Could Surpass Humanity In Next 10 Years
The creators of #ChatGPT say #AI could surpass humanity in most domains within the next 10 years, as "superintelligence" becomes more powerful than any #technology the world has seen.
Though I'm a 100% wired tech lover and power user like most anyone, this article in particular spotlights the exact nature of my passion and focus: ** working to ensure our humanity is not overridden by AI tech **. That override is already underway to varying degrees, though some engineers, developers, computer scientists, researchers, the media, and others in the community confuse, ignore, ridicule, minimize, or quickly dismiss these concerns, labeling them "existential," "alarmist," and what not.
This dismissiveness speaks to an arrogance that's soooo damned thick *** BUT NOT *** impenetrable; at least, that's my stance.
As Maya Angelou is renowned for having said, when people show you who they are, BELIEVE THEM. Thus, when AI-unleashing CEOs tell you, word for word, that AI poses great risk to humanity, BELIEVE THEM.
There are way too many AI CEOs and AI prophets today saying the exact same thing in different words: AI could pose a great threat to our way of life.
I'm arguing, however, that it's not about "could," as in some far-off possibility; the unvarnished reality is that these threats and risks are already happening. They're already here, and they've been here for quite some time. It's only now that people are (sort of) waking up to the severity of the situation.
The focus now, however, is mostly on safety guardrails and regulation. These are a necessary start, but they're far from enough, and they'll take quite some time to develop, agree on, and approve.
I'm also arguing that we need AI literacy *in parallel* and across the board. Otherwise, how can we "get there" if people don't understand the true nature of the macro risks AI poses to our humanity, well beyond the more obvious, surface-level risks to governments, economies, and societies? Those frameworks, btw, are all created, administered/patrolled, and experienced/lived by humans :) so protecting humanity truly deserves as much attention as all the other discussions of guardrails and so on.
The challenge is that "protecting humanity" sounds like fluff or bullshit fear-mongering, especially across societies and industries that PRIORITIZE and PRIZE machine-based predictive outputs and efficient, quantifiable processes. These are values far more applicable to machines than to subjective, nuanced human beings.
We all can benefit exponentially from technology and AI, but we need a much more balanced approach as we move forward, because there's no turning back: the AI genie is out of the bottle, and the sky's the AI limit.
But creating more balanced approaches in the realm of AI starts with developing frameworks that don't over-emphasize the value of AI/tech ABOVE human capability, as much of our "superintelligence" lexicon does today.