Documenting the Coming Singularity

Wednesday, February 17, 2010

The Future of Futurism (Guest Post)

So you've read all of Ray Kurzweil's books. You've tracked the progression of the technological singularity, and you can feel it coming. You excitedly tell all your friends about the emergence of superhuman artificial intelligence, but they all write you off. Skepticism isn't new, of course. Darwin's theories on evolution were met with the same disbelief, which eventually morphed into contempt, a sentiment that persists today even though evolutionary theory has long since proved sound.

Roko Mijic, a mathematics and AI researcher, writes about futurism and transhumanism both for his personal blog and for GOOD, a blog dedicated to innovative ideas. In one article, Mijic explains the problems that plague futurism. He contends that rejecting absurd-sounding ideas, even when there is evidence in their favor, is a survival skill left over from earlier stages of our evolutionary development, one that we haven't completely discarded.

He argues that futurism is also hampered by the media, which proffers a badly misconceived version of the field in which evil robots run the world. Mijic reasons that futurism is grounded in probability theory, the most rigorous mathematical framework available for analyzing uncertainty. As long as we see futurism in this light, as a set of forecasts supported by analysis of current trends and data, it won't seem as far-fetched as the media and futurism's critics make it out to be.

Another area that Mijic predicts will be at the heart of futurist studies is ethics. In another article, Mijic focuses on the question on everyone's mind concerning the technological singularity: will an artificial intelligence that surpasses our own capabilities want to do good? Will AI be able to empathize, to love? Or will the media be right about "evil robots"?

Mijic takes these questions very seriously, and he supports a recent proposed solution to the problem of unrestrained, self-improving AI: Coherent Extrapolated Volition, known as the CEV algorithm. Instead of programming human values directly into an AI, the CEV approach, developed by Eliezer Yudkowsky, a researcher at the Singularity Institute for Artificial Intelligence (SIAI), would create a procedure that allows an AI to infer human values from sources such as behavioral studies and brain scans. To read Yudkowsky's writing on CEV, visit the SIAI's publications web page.


This guest post is contributed by Pamelia Brown, who writes on the topic of associate's degrees. She welcomes your comments.



Robin said...

Whether or not we achieve the ultimate goal, a lot of things useful to humans have been done along the way.