#Google DeepMind


In this world of future technology and gadgets, artificial intelligence is one of the most controversial and noticeable topics these days. Recently Lenovo introduced its AI to the market, while Microsoft, Google, IBM and many other tech giants are already present with their own unique ideas. But out of all of them, Google's AI "DeepMind" is one of the major AI programs, attracting developers and tech lovers like a powerful magnet attracting a piece of metal.
Google's DeepMind is smart enough to learn from the activities happening around it. According to a recent report, DeepMind learned how to walk like a human and created animations on its own. Read more about DeepMind below...

Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving".

As machines become increasingly capable, mental faculties once thought to require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of "artificial intelligence", having become a routine technology. Capabilities currently classified as AI include successfully understanding human speech, competing at a high level in strategic game systems (such as chess and Go), autonomous cars, intelligent routing in content delivery networks, military simulations, and interpreting complex data.

  • In October 2015, a computer Go program called AlphaGo, powered by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of a possible 9 dan) professional, five to zero. This was the first time an artificial intelligence defeated a professional Go player.
  • In March 2016 it beat Lee Sedol, a 9 dan Go player and one of the highest-ranked players in the world, 4–1 in a five-game match.
  • At the 2017 Future of Go Summit, AlphaGo won a three-game match against Ke Jie, who at the time had continuously held the world No. 1 ranking for two years.
  • After that victory over the world's top Go player, AlphaGo retired; DeepMind disbanded the team that worked on the game while continuing AI research in other areas.
DeepMind :

DeepMind Technologies Limited is a British artificial intelligence company founded in September 2010.
It was acquired by Google in 2014. The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a Neural Turing Machine, or a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.
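The "external memory" idea behind the Neural Turing Machine can be illustrated with a small sketch. This is not DeepMind's implementation; it is a simplified, hypothetical illustration of content-based addressing: a query key is compared against every memory row, the similarities are sharpened into attention weights, and a soft (differentiable) blend of rows is read back, rather than a hard lookup as in a conventional Turing machine.

```python
import numpy as np

def cosine_similarity(key, memory):
    # Similarity between the query key and each row of memory.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    return memory @ key / norms

def content_read(memory, key, beta=10.0):
    # Sharpen similarities into attention weights (a softmax with
    # "key strength" beta), then read a weighted blend of memory rows.
    # The read is differentiable, so it can be trained end to end.
    scores = beta * cosine_similarity(key, memory)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ memory

memory = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
# A noisy key still retrieves the memory row it most resembles.
read = content_read(memory, key=np.array([0.9, 0.1, 0.0]))
```

With a high key strength the read vector is almost exactly the first memory row, which is how the network can recall stored information from a partial cue.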
The company made headlines in 2016 after its AlphaGo program beat a human professional Go player for the first time.

History :

The start-up was founded by Demis Hassabis, Shane Legg and Mustafa Suleyman in 2010. Hassabis and Legg first met at University College London's Gatsby Computational Neuroscience Unit. On 26 January 2014, Google announced that it had agreed to acquire DeepMind for $500 million. That same year, DeepMind received the "Company of the Year" award from the Cambridge Computer Laboratory.
After Google's acquisition, the company established an artificial intelligence ethics board. The board remains a mystery, with both Google and DeepMind declining to reveal who sits on it. DeepMind, together with Amazon, Google, Facebook, IBM and Microsoft, is a founding member of the Partnership on AI, an organization devoted to the interface between society and AI.

Aim & Progress :

DeepMind Technologies' goal is to "solve intelligence", which they aim to achieve by combining "the best techniques from machine learning and systems neuroscience to build powerful general-purpose learning algorithms". They are trying to formalize intelligence not only to implement it in machines, but also to understand the human brain, as Demis Hassabis explains.

To date, the company has published research on computer systems that are able to play games, ranging from strategy games such as Go to arcade games, and on developing these systems. According to Shane Legg, human-level machine intelligence can be achieved "when a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games". Attempting to distill intelligence into an algorithmic construct may prove to be the best path to understanding some of the enduring mysteries of our minds.
Research describing an AI playing seven different Atari 2600 video games reportedly led to Google's acquisition of the company.

Unlike other AIs, such as IBM's Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind claims that its system is not pre-programmed: it learns from experience, using only raw pixels as input. Technically it uses deep learning on a convolutional neural network, with a novel form of Q-learning, a form of model-free reinforcement learning. They test the system on video games, notably early arcade games such as Space Invaders and Breakout. Without any change to its code, the AI begins to understand how to play the game, and after some time plays a few games (most notably Breakout) more efficiently than any human ever has.
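The Q-learning idea mentioned above can be shown on a toy problem. This is a minimal sketch, not DeepMind's system: the toy "game" is a made-up 1-D corridor where only the rightmost state pays off, and the Q-values live in a plain table. DeepMind's contribution was to replace this table with a deep convolutional network over raw pixels, but the model-free update rule is the same: nudge Q(s, a) toward the reward plus the discounted value of the best next action.

```python
import random

# Hypothetical toy environment: states 0..4, reward only at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

random.seed(0)
alpha, gamma, epsilon = 0.5, 0.9, 0.2
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current Q-values,
        # occasionally explore a random action.
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r, done = step(s, a)
        # The model-free Q-learning update: no model of the game needed,
        # only the observed transition (s, a, r, nxt).
        best_next = 0.0 if done else max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# After training, the greedy policy should move right everywhere.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)}
```

Exactly as the post describes for Breakout, nothing in the code says "move toward the goal"; the behaviour emerges from reward alone.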

For some AI problems, such as playing Atari or Go, the goal is easy to define: winning. But how do you describe the process of performing a backflip? Or even just a jump? The difficulty of accurately describing a complex behaviour is a common problem when teaching motor skills to an artificial system. In this work, DeepMind's researchers explore how sophisticated behaviours can emerge from scratch, from a body interacting with its environment, using only simple high-level objectives such as moving forward without falling. Specifically, they trained agents with a variety of simulated bodies to make progress across diverse terrains that require jumping, turning and crouching. The results show the agents develop these complex skills without receiving specific instructions, an approach that can be applied to training systems for multiple, distinct simulated bodies.
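A "simple high-level objective" of this kind is easy to express in code. The sketch below is a hypothetical reward function in the spirit of the work, not DeepMind's actual reward: pay the agent for forward progress each step, add a small bonus for staying alive, and end the episode when the torso drops below a height threshold (the names, threshold, and bonus values are all illustrative assumptions).

```python
def locomotion_reward(x_before, x_after, torso_height,
                      min_height=0.8, alive_bonus=0.05):
    # Falling below the height threshold ends the episode with no reward,
    # which is the only notion of "falling" the agent is ever given.
    if torso_height < min_height:
        return 0.0, True
    # Otherwise reward equals distance covered this step plus a small
    # bonus for remaining upright; jumping, turning and crouching are
    # never mentioned -- they emerge because they help make progress.
    progress = x_after - x_before
    return progress + alive_bonus, False
```

Nothing in this objective describes *how* to move, which is the point: the complex gaits come from optimizing a trivially simple signal.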

A simulated 'planar' walker makes repeated attempts to climb over a wall.

This is not all there is to DeepMind; it is just an introduction. For more, please wait until we update.
#UpdatingSoon.

Thanks for spending your precious time reading this blog; please leave your valuable review in the comment box.

Source: Internet & DeepMind Blog
