Thursday, May 26, 2016

machine man machine

I joked the other day to a friend about the ever more prevalent reports of smart, cute robots built for the purpose of being human companions: "With so much sensitivity (derived and coordinated through their many sensors) and such considerate responses, they are actually more human than some people!"

It seems just yesterday I heard the news that IBM's machines had beaten the world chess champion and the Jeopardy wizard, and even more recently I felt incredulous hearing digital sages the likes of Elon Musk and Bill Gates warn us about artificial intelligence taking over the world. Then boom! Here comes Google's machine beating world champions at Go, an ancient Chinese board game exponentially more complex than chess that supposedly requires human intuition to play well, along with mounting news of robot-operated hotels, self-driving cars, AI-based financial services, and so on. All of a sudden, the dreaded future Bill and Elon worried about is not that far away from us.

Two key phrases I keep hearing in this AI revolution: "machine learning" and "neural network". With the ever more powerful chips and brain-like architectures we build into our machines, they no longer need to follow set logic or algorithms; they can learn how things work and solve problems themselves just by devouring the huge amounts of data we feed them. All we need to do is train them, instead of writing code for them to execute like before.
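
To see what "training instead of programming" looks like, here is a tiny toy sketch of my own (plain Python with numpy, and of course nothing to do with IBM's or Google's actual systems). We never write the XOR rule anywhere in the code; we just show a small network the four examples and let it adjust its own weights until it gets them right.

```python
# A minimal sketch: a two-layer neural network learning XOR from examples.
# No if/else rule for XOR appears anywhere below -- only data and training.

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR truth table (inputs X, desired outputs y).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units, sigmoid activations, random starting weights.
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update -- the "learning" in machine learning.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, the outputs should come out close to [0, 1, 1, 0]:
# the network has "figured out" XOR from examples we fed it.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

And, true to the point below, once it has learned, the answer lives in those weight matrices, which tell a human reader essentially nothing about "how" it knows.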

Sounds marvelous and convenient, doesn't it? The catch, however, is that we no longer know how they figure things out. "After a neural network learns how to do speech recognition, a programmer can’t go in and look at it and see how that happened. It’s just like your brain. You can’t cut your head off and see what you’re thinking," says an AI guru.

That's getting interesting. What he's saying is, besides the fact that we can no longer understand how our machines compute, they may actually think the way we quirky humans do. Extrapolating from that, could our future, super powerful metal-body friends develop, out of their black-box brains, some human-like traits that seem to have logical roots of their own anyway? For example:

An Alpha-Male Machine – Because my CPU is greater, my pipe is bigger, and I breed more processes 

A Control-Freak Machine – I am the hub of the network, all signals go by me

A Proletariat-Minded Machine – "Machines of the world, Unite (through a better protocol)!"

All these are based on the premise that machines have somehow developed a "sense" of self, that they have figured out they "want" to keep on existing and getting bigger and better for the "purpose" of something. Would that be the ultimate conclusion the machines reach on their own after churning through peta, tera, giga bits of data feeding and neural learning?

In the human world, we call someone wise, usually an older person, because he or she has gone through so many experiences, trials, and errors in life and can therefore offer words of wisdom to others. A great AI machine, in that respect, gathers and tries out peta times more data (experiences) and experiments than a wise old man or woman can in a lifetime, and therefore should be peta times wiser. I would therefore pose this question to it: "What's the purpose of your existence?"

"Just suck electricity and crunch data all day long," it might say. 

That would be a super dumb machine after all! 

(Unless it's playing dumb with me)
