R.U. Sirius argues that people are exaggerating the risks of AI. He suggests a polarizing panic might cause more harm than the AIs themselves!
The Scary Future According to AGI Safety Researcher Eliezer Yudkowsky
"AGI may very likely be just a few years off [but] there are currents of love and wisdom in our world that he is not considering and seems to be mostly unaware of" — Dr. Ben Goertzel"
Microsoft is using ChatGPT to teach robots to follow commands in plain English. But what could go wrong?
The United Nations and our uncertain future: breakdown or breakthrough?
In this article, David Wood calls for, and outlines, better steps in anticipation of future developments. Can the UN step up?
A Taxonomy of Chaos
Chaos is the new norm! In this article, Jamais Cascio outlines his taxonomy of chaos: Brittle, Anxious, Nonlinear, and Incomprehensible.
The search for a habitable Earth 2.0
Don’t Shut Down AI Development — Open It Up For Real
Goertzel reflects on the Future of Life Institute's proposal to pause development of AI systems more powerful than GPT-4. He argues the proposal is biased toward AI's risks and overlooks its benefits. Currently, 7 countries have banned ChatGPT. To ban or not to ban?
Should we pause development of AI systems smarter than GPT-4 for six months?
Should we pause development of AI systems because of the risks to society and humanity? Experts disagree.