Documenting the Coming Singularity

Saturday, June 23, 2007

A Better Understanding of Singularity: Aubrey de Grey

Building recursively self-improving machines appears to be the only plausible route to smarter-than-human intelligence, and hence to the singularity. Put simply, reaching that level of machine intelligence requires machines that understand their own workings well enough to improve themselves. Once that threshold is crossed, the idea goes, they will improve themselves at an ever-accelerating rate and quickly surpass human intelligence.

The really important question is whether such a strong AI will be friendly to humans. The only way to ensure that it is friendly is to carefully design that constraint into the recursively self-improving machines. That is, make them so that they have enough freedom to implement improvements, but not enough to abandon their friendliness. Something like Asimov's three laws of robotics.

Clearly, we do not want to create unfriendly AI, and that urgency informs the people at the Singularity Institute. Their goal is to build friendly AI before someone else, whether accidentally or on purpose, creates unfriendly AI.

Aubrey de Grey explains all this in a very easily understood way, something he has a great talent for. He points out, additionally, that the public will begin to care about the singularity only when progress in the labs convinces people that these developments are imminent.

You are sure to enjoy this interview. More importantly, you will begin, if you haven't already, to appreciate the importance of the work under way at the Singularity Institute: the responsible development of friendly AI before the careless development of unfriendly AI is upon us.



Anonymous said...

You do not need a machine intelligence able to understand its own workings and how to improve them.

All you need is one able to judge its own abilities, and so make random changes, choose the best results, repeat.

This is, after all, how human intelligence was developed. And it was done with selection based on maximizing offspring, which is only loosely correlated with intelligence at best.
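The loop the commenter describes is essentially random-mutation hill climbing: mutate, judge, keep the best, repeat. A minimal sketch in Python (the bit-string task, the `fitness` and `mutate` functions, and the step count are all illustrative assumptions, not anything proposed in the post):

```python
import random

def evolve(candidate, fitness, mutate, steps=2000):
    """Random-mutation hill climbing: judge, mutate, keep the best, repeat."""
    best_score = fitness(candidate)
    for _ in range(steps):
        variant = mutate(candidate)
        score = fitness(variant)
        if score >= best_score:  # keep variants that are no worse
            candidate, best_score = variant, score
    return candidate

# Toy task: evolve a bit string toward all ones.
TARGET_LEN = 16

def mutate(bits):
    # Flip one randomly chosen bit.
    flipped = list(bits)
    i = random.randrange(len(flipped))
    flipped[i] ^= 1
    return flipped

result = evolve([0] * TARGET_LEN, fitness=sum, mutate=mutate)
```

Note that the loop never needs to understand *why* a variant scores better, which is the commenter's point: selection pressure alone, with no insight into the mechanism, is enough to climb the fitness landscape, however slowly.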