Google's AI Chatbot Is Getting A Bit Too Smart For Comfort

By R. Siva Kumar - 01 Jul '15 06:51AM

If you are stressed and wondering what life is all about, you just need to check out a new magic box: Google's AI chatbot, which has an answer to every question of yours, both philosophical and technical, according to Business Insider.

Some of its answers are quite amusing: it says the meaning of life is "to live forever," while having a child is "immoral".

Check out this utterly simple list of answers from the Google Brain team, whose new paper, "A Neural Conversational Model," publishes two series of experimental dialogues between humans and machines:

    Human: What is the purpose of life?

    Machine: To serve the greater good.

    Human: What is the purpose of living?

    Machine: To live forever.

    Human: What is the purpose of existence?

    Machine: To find out what happens when we get to the planet earth.

    Human: Where are you now?

    Machine: I'm in the middle of nowhere.

    Human: What is the purpose of dying?

    Machine: To have a life.

    Human: What is the purpose of being intelligent?

    Machine: To find out what it is.

Researchers Oriol Vinyals and Quoc Le found it "encouraging that the model can remember facts, understand contexts, and perform common sense reasoning without the complexity in traditional pipelines."

However, some analysts feel that it indicates that Artificial Intelligence is "threatening". The chatbot learns to hold a conversation by analysing millions of movie scripts, permitting it to "muse on the meaning of life, the colour of blood, morality and even get angry with its human inquisitor", according to Wired.

The findings are based on a new and advanced chatbot that learnt from dialogues found in thousands of movie subtitles as well as an online tech-support chat log. Its answers are not preprogrammed responses tied to particular questions; the model generates new answers to questions it has never seen.
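As a rough illustration (not Google's actual pipeline), training data for such a model can be built by pairing each line of subtitle dialogue with the line that follows it, treating the first as a prompt and the second as its response; a neural network is then trained to map prompts to responses. A minimal sketch of that pairing step in Python, using made-up subtitle lines:

```python
def dialogue_pairs(lines):
    """Turn consecutive subtitle lines into (prompt, response)
    training pairs, the way a conversational model is typically fed."""
    return [(lines[i], lines[i + 1]) for i in range(len(lines) - 1)]

# Hypothetical subtitle snippet (invented for illustration).
subtitles = [
    "Where are you now?",
    "I'm in the middle of nowhere.",
    "What is the purpose of dying?",
    "To have a life.",
]

pairs = dialogue_pairs(subtitles)
# Each line becomes a prompt for the reply that follows it,
# so the model sees every exchange in the conversation.
```

Trained on millions of such pairs, the model can compose a reply word by word rather than looking one up, which is why it can answer questions that never appeared in its training data.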

However, the robot's flaw is that it "only gives simple, short, sometimes unsatisfying answers to our questions as can be seen above." Hence, the model fails the Turing test, which is designed to differentiate a computer from a human by analyzing their answers.

"The model may require substantial modifications to be able to deliver realistic conversations. Amongst the many limitations, the lack of a coherent personality makes it difficult for our system to pass the Turing test," the researchers concluded.

Started four years ago, the Google Brain project was formed to advance "deep learning," in which computer scientists develop software that replicates how the human brain learns.
