About this Episode
On Episode 103 of Voices in AI, Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the ideas of a master algorithm and AGIs.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University. And he's talking to us from Hong Kong right now, where he lives. Welcome to the show, Ben!
Ben Goertzel: Hey, thanks for having me. I'm looking forward to our conversation.
The first question I always throw at people is: "What is intelligence?" And interestingly, you have a definition of intelligence in your Wikipedia entry. That's a first, but why don't we just start with that: what is intelligence?
I actually spent a lot of time working on the mathematical formalization of a definition of intelligence early in my career, and came up with something fairly crude which, to be honest, at this stage I'm not as enthused about as I was before. But I do think that that question opens up a lot of other interesting issues.
The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI. I introduced the whole notion of AGI and that term in 2004 or so. It has to do with an AGI being able to achieve a variety of different, complex goals in a variety of different kinds of situations, unlike the narrow AIs that we have all around us, which basically do one kind of thing in one sort of context.
I still think that is a very worthwhile way to look at things, but I've drifted more into a systems theory perspective. I've been working with a guy named David (Weaver) Weinbaum, who recently did a piece of work at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence more as a process of exploration and information creation than as goal-achievement in interaction with an environment. And in this open-ended intelligence view, you're really looking at intelligent systems as complex, self-organizing systems; the creation of goals to be pursued is part of what an intelligent system does, but isn't necessarily the crux of it.
So I’d say understanding what intelligence is, is an ongoing pursuit. And I believe that’s okay. Like in biology the purpose is to outline what life is in ‘the as soon as and for all’ formal sense, earlier than you are able to do biology or an artwork, the purpose isn’t to outline what magnificence is earlier than you’ll be able to proceed. These are kind of umbrella ideas which may then result in quite a lot of totally different specific improvements and formalizations of what you do.
And but I’m wondering, since you’re proper, biologists don’t have a consensus definition for what life is and even loss of life for that matter, you marvel at some degree if possibly there’s no such factor as life. I imply like possibly it isn’t actually… and so possibly you say that’s not likely even a factor.
Well, this is one of my favorite quotes of all time, [from] former President Bill Clinton, which is: "That all depends on what the meaning of IS is."
There you go. Well, let me ask you a question about goals, which you just brought up. I guess when we're talking about machine intelligence or mechanical intelligence, let me ask point blank: is a compass's goal to point North? Or does it just happen to point North? And if it isn't its goal to point North, what's the difference between what it does and what it wants to do?
The standard example used in systems theory is the thermostat. The thermostat's goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have—you know, it has a sensor, it has an actuation mechanism, and a very local control system connecting the two. So from the outside, it's quite hard not to call the thermostat a goal-achieving system: a sensor, an actuator, and a decision-making process in between.
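The sensor, decision rule, and actuator structure described for the thermostat can be sketched as a minimal control loop. This is an illustrative toy, not anything from the conversation; the function name, thresholds, and command strings are all invented for the sketch:

```python
def thermostat_step(temperature, low=18.0, high=22.0):
    """One control cycle of a simple bang-bang thermostat.

    A sensor reading comes in, a purely local decision rule maps it to
    an actuator command, and nothing in the system models itself as a
    goal-pursuing agent.
    """
    if temperature < low:
        return "heat_on"   # below the target range
    if temperature > high:
        return "heat_off"  # above the target range
    return "hold"          # inside the target range

print(thermostat_step(15.0))  # heat_on
print(thermostat_step(25.0))  # heat_off
print(thermostat_step(20.0))  # hold
```

From the outside this behaves exactly like goal pursuit; the point of the example is that there is nowhere in the code where the system represents that goal to itself.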
Again, the word "goal" is a natural-language concept that can be used for a lot of different things. I guess some people have the idea that there are natural definitions of concepts which have profound and unique meaning. I sort of think that only exists in the mathematics domain, where you can say the definition of a real number is something natural and perfect because of the most beautiful theorems you can prove around it; but in the real world things are messy, and there is room for different flavors of a concept.
I think from the view of the outside observer, the thermostat is pursuing a certain goal. And the compass may be too, if you go down into the microphysics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal: the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. To an outside observer, the thermostat is pursuing a goal.
Now for a human being, once you're past the age of six or nine months or something, you're pursuing your goal relative to an observer that is yourself. But you're pursuing that goal—you have a sense of it, and I think this gets at the critical connection between reflection and meta-thinking, self-observation and general intelligence, because it's the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt those goals as we grow and learn, in a broadly purposeful and meaningful way. If a thermostat breaks, it's not going to correct itself and go back to its original goal, right? It's just going to break, and it doesn't even make a halting and flawed attempt to understand what it's doing and why, like we humans do.
So let’s imagine that one thing has a purpose if there’s some operate which it’s systematically maximizing, wherein case you’ll be able to say of a heating or compass system that they do have a purpose. You may say that it has a objective whether it is representing itself because the purpose maximizing system and may manipulate its illustration one way or the other. And that’s a bit bit totally different, after which additionally we get to the distinction between slender AIs and AGIs. I imply AlphaGo has a purpose of profitable at Go, but it surely doesn’t know that Go is a recreation. It doesn’t know what profitable is in any broad sense. So in the event you gave it a model of Go together with like a hexagonal board and three totally different gamers or one thing, it doesn’t have the idea to adapt behaviors on this bizarre new context and like determine what’s the objective of doing stuff on this bizarre new context as a result of it’s not representing itself in relation to the Go recreation and the reward operate in the best way the particular person enjoying Go does.
If I’m enjoying Go, I’m a lot worse than AlphaGo, I’m even worse than say my oldest son who’s like a ‘one and completed’ kind of Go participant. I’m method down on the hierarchy and I do know that it’s a recreation manipulating little stones on the board by analogy to human warfare. I understand how to look at the sport between two individuals and that profitable is finished by counting stones and so forth. So having the ability to conceptualize my purpose as a Go participant within the broader context of my interplay with the world is de facto useful when issues go loopy and the world modifications and the unique detailed objectives didn’t make any sense anymore, which has occurred all through my life as a human with astonishing regularity.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.