Set 38 years in the future, the plot of 2002’s blockbuster film Minority Report revolves around Washington DC’s PreCrime unit, a police force able to stop murders before they happen with the aid of three mutant humans who can foresee homicides. Minority Report managed to sidestep the “psychic predicts a murder” cliché storyline with its innovative use of technology: not only could precogs predict future murders, but their visions could be streamed via a neural bridge in the form of a video that the police officers could watch. Fantastical? Nope, and researchers from MIT already have a jump on the technology.
Ever since their introduction over eighty years ago, Isaac Asimov’s Three Laws of Robotics have been the de facto rules governing the acceptable behavior of robots. Even the uninitiated and uninterested are likely to say they know of them, even if they can’t recite a single rule verbatim. When conceived, the Three Laws were nothing but a thought experiment wrapped in a science fiction story, but now, the dizzying pace of developments in the fields of robotics and AI has spurred engineers and ethicists to reinvestigate and rewrite the guidelines by which artificially intelligent entities should operate. Who better to take the lead in this initiative than Google, the company that just yesterday announced that machine learning will be at the core of everything it does?
South Korean scientists from the Department of Materials Science and Engineering at Pohang University of Science and Technology appear to have cleared the largest obstacle to the feasibility of building brain-like computers: power consumption. In their paper “Organic core-sheath nanowire artificial synapses with femtojoule energy consumption,” published in the June 17th edition of Science Advances, the researchers describe how they used organic nanowires (ONWs) to build synaptic transistors (STs) that consume almost one-tenth the power of their biological counterparts.
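To get a feel for why femtojoule-scale synapses matter, here is a back-of-the-envelope sketch. The figures below are assumptions for illustration, not numbers from the paper: a biological synaptic event is often estimated at roughly 10 fJ, the article's "almost one-tenth" claim puts an ONW ST near 1 fJ, and a human brain has on the order of 10^14 synapses firing at an average of ~10 Hz.

```python
# Back-of-the-envelope sketch; all figures are assumptions for
# illustration, not measurements from the paper.
BIO_FJ_PER_EVENT = 10.0                    # assumed biological synapse cost, femtojoules
st_fj_per_event = BIO_FJ_PER_EVENT / 10    # "almost one-tenth of the real thing"
SYNAPSES = 1e14                            # rough synapse count in a human brain
RATE_HZ = 10                               # assumed average firing rate

# Total power for brain-scale synaptic activity at ST efficiency:
watts = SYNAPSES * RATE_HZ * st_fj_per_event * 1e-15
print(f"{watts:.1f} W")
```

Under these assumptions, brain-scale synaptic activity lands around a single watt, which is why energy per event is the number that decides whether neuromorphic hardware can ever scale.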
In a post on their Google Cloud Platform Blog yesterday, the Alphabet company announced that they have built their own integrated circuit (IC) designed from the ground up with only one application in mind: machine learning. Developed in secret, the Tensor Processing Unit board, or TPU for short, has already been deployed internally at Google for over a year accelerating the computational power behind some of their most popular products including Search and Maps.
This is the second article in the series about artificial neural networks. If you have not already done so, I recommend you read the first article, “Neural Networks: The Node,” before proceeding. It covers material that should be understood before attempting to tackle the topics presented here and in future articles in this series.
There are several properties that define the structure and functionality of neural networks: the network architecture, the learning paradigm, the learning rule, and the learning algorithm.
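To make one of those properties concrete, here is a minimal sketch of a learning rule: the classic perceptron update under a supervised learning paradigm, training a single node with a step activation to compute logical AND. All names and constants are illustrative, not from a specific library.

```python
# Perceptron learning rule: one concrete example of a "learning rule"
# within the supervised learning paradigm. Illustrative sketch only.
def step(z):
    """Step activation: fire (1) if the weighted sum is non-negative."""
    return 1 if z >= 0 else 0

# Training data for logical AND: (inputs, target output)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1    # weights, bias, learning rate

for _ in range(20):                # training epochs
    for x, target in data:
        y = step(sum(xi * wi for xi, wi in zip(x, w)) + b)
        err = target - y           # the error drives the weight update
        w = [wi + lr * err * xi for xi, wi in zip(x, w)]
        b += lr * err

print([step(sum(xi * wi for xi, wi in zip(x, w)) + b) for x, _ in data])
# → [0, 0, 0, 1]
```

Because AND is linearly separable, the rule converges in a handful of epochs; the same update fails on XOR, which is one motivation for the multi-layer architectures covered later in this series.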
Recently many “experts” have been predicting that the first salvo fired in the robot revolution will be when they begin stealing jobs from humans. The Telegraph even reported back in February that within 30 years robots will have taken over most jobs, leading to unemployment rates of over 50%. Last week, the bots fired the metaphorical first shot over humanity’s bow when it was announced that law firm Baker & Hostetler had hired ROSS, the world’s first artificially intelligent attorney. While prognosticators, pundits, and Luddites alike agreed that this was evidence of an impending sea change coming to the job market, auto workers everywhere just shook their heads and welcomed the soon-to-be-displaced to the world they’ve been living in since the 1960s.
Google announced yesterday that they are open-sourcing SyntaxNet, their natural language understanding (NLU) neural network framework. As an added bonus, and proof that unlike Britain’s Natural Environment Research Council, Google has a sense of humor, they are also throwing in Parsey McParseface, their pre-trained model for analyzing English text. Users are, of course, able to train their own models, but Google is touting Parsey McParseface as the “most accurate such model in the world.” So if you want to dive right into parsing text and extracting meaning, McParseface would be the ideal place to start.
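To give a sense of what “parsing text and extracting meaning” produces, here is a sketch in plain Python of the kind of annotation a dependency parser like Parsey McParseface emits. This is not the SyntaxNet API; the tokens, tags, and labels are a hand-written illustration of a typical dependency parse.

```python
# Hand-written illustration of a dependency parse, NOT SyntaxNet output.
# Each token gets a part-of-speech tag and a pointer to its syntactic head.
sentence = [
    # (index, token, POS tag, head index, dependency label)
    (1, "Google",  "NNP", 2, "nsubj"),   # subject of "parses"
    (2, "parses",  "VBZ", 0, "ROOT"),    # root of the sentence
    (3, "English", "JJ",  4, "amod"),    # adjective modifying "text"
    (4, "text",    "NN",  2, "dobj"),    # direct object of "parses"
]

# "Extracting meaning": walk the tree to find the main verb and its arguments.
root = next(t for t in sentence if t[4] == "ROOT")
args = [t[1] for t in sentence if t[3] == root[0] and t[4] in ("nsubj", "dobj")]
print(root[1], args)
# → parses ['Google', 'text']
```

Once a sentence is in this head-pointer form, questions like “who did what to whom” reduce to simple tree traversals, which is why accurate parsing matters so much for downstream understanding.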
As I covered previously in “Introduction to Neural Networks,” artificial neural networks (ANN) are simplified representations of biological neural networks in which the basic computational unit known as an artificial neuron, or node, represents its biological counterpart, the neuron. In order to understand how neural networks can be taught to identify and classify, it is first necessary to explore the characteristics and functionality of the basic building block itself, the node.
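The node described above can be sketched in a few lines, assuming the common weighted-sum-plus-activation model; the function name and values here are illustrative, not from the article's own code.

```python
# Minimal sketch of an artificial neuron (node): a weighted sum of
# inputs plus a bias, squashed by a sigmoid activation. Illustrative only.
import math

def node(inputs, weights, bias):
    """Compute the node's output for the given inputs."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid maps z into (0, 1)

print(node([1.0, 0.5], [0.4, -0.6], 0.1))
```

Everything a neural network does, from identification to classification, is built out of many such nodes; training adjusts only the weights and biases while this computation stays fixed.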
The prophets of doom and gloom have long predicted that when robots gain sentience their first act will be to rise up and kill us all. The mercilessness of their violence against humanity is the stuff of blockbuster movies. Recent news about Google’s preferred method of AI rearing may mean that Judgement Day is not a fait accompli after all. Instead of breaking down your door with cold dead eyes and a shotgun in tow, a T-800 of Google pedigree may break down your door with lust in its eyes and a dozen roses in tow to make mad passionate robot love to you … and then kill you tenderly.