Boston Dynamics, the MIT spin-off and self-proclaimed maker of “nightmare-inducing robots”, has been sold by its parent company Alphabet (aka Google) to the Japanese tech behemoth SoftBank. No specifics regarding the price or the terms of the sale have been announced, which is not surprising given that we still don’t know how much Google paid for the company when it purchased it four years ago.
I know that this post will probably be of interest to about a dozen people worldwide, and even those few may be disappointed by it. Because the official SWI-Prolog packages aren’t often kept up to date, and because compiling and installing SWI-Prolog from source is both quick and straightforward, building from source is the recommended way to do it on Linux and other *nix systems.
If you are looking for tips, tricks, or help with an installation problem, you likely won’t find it here. The instructions provided on the SWI-Prolog site for building and installing SWI-Prolog from source code “just worked” for me. Nevertheless, I want to document what I did, and if you are looking for the CliffsNotes version, then by all means, read on.
Set 38 years in the future, the plot of 2002’s blockbuster film Minority Report revolves around Washington DC’s PreCrime unit, a police force able to stop future murders with the aid of three mutant humans who can predict homicides before they happen. Minority Report managed to sidestep the “psychic predicts a murder” cliché storyline with its innovative use of technology: not only could the precogs predict future murders, but their visions could be streamed via a neural bridge in the form of a video that the police officers could watch. Fantastical? Nope, and researchers from MIT already have a jump on the technology.
Ever since their introduction over seventy years ago, Isaac Asimov’s Three Laws of Robotics have been the de facto rules governing the acceptable behavior of robots. Even the uninitiated and uninterested are likely to say they know of them, even if they can’t recite a single rule verbatim. When conceived, the Three Laws were nothing but a thought experiment wrapped in a science fiction story, but now the dizzying pace of developments in the fields of robotics and AI has spurred engineers and ethicists to reinvestigate and rewrite the guidelines by which artificially intelligent entities should operate. Who better to take the lead in this initiative than Google, the company that just yesterday announced that machine learning will be at the core of everything it does?
South Korean scientists from the Department of Materials Science and Engineering at Pohang University of Science and Technology appear to have cleared the largest obstacle to the feasibility of building brain-like computers: power consumption. In their paper “Organic core-sheath nanowire artificial synapses with femtojoule energy consumption,” published in the June 17th edition of Science Advances, the researchers describe how they used organic nanowires (ONWs) to build synaptic transistors (STs) that consume roughly one-tenth the energy of their biological counterparts.
In a post on their Google Cloud Platform Blog yesterday, the Alphabet company announced that they have built their own integrated circuit (IC) designed from the ground up with only one application in mind: machine learning. Developed in secret, the Tensor Processing Unit, or TPU for short, has already been deployed internally at Google for over a year, accelerating the computational power behind some of their most popular products, including Search and Maps.
This is the second article in the series about artificial neural networks. If you have not already done so, I recommend you read the first article, “Neural Networks: The Node”, before proceeding. It covers material that should be understood before attempting to tackle the topics presented here and in future articles in this series.
There are several properties that define the structure and functionality of neural networks: the network architecture, the learning paradigm, the learning rule, and the learning algorithm.
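To make these properties concrete, here is a minimal sketch (my own illustrative example, not code from this article series) of a single node trained with the classic perceptron learning rule. In the terms above, the architecture is one node with two inputs, the paradigm is supervised learning, the rule is the perceptron weight update, and the algorithm is repeated passes over the training samples. All class and function names are invented for illustration.

```python
import random

def step(x):
    # Step activation: the node fires (outputs 1) when its net input is non-negative.
    return 1 if x >= 0 else 0

class Node:
    """A single artificial neuron: a weighted sum of inputs plus a bias,
    passed through a step activation function."""
    def __init__(self, n_inputs):
        random.seed(0)  # fixed seed so the sketch is reproducible
        self.weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
        self.bias = 0.0

    def output(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return step(total)

    def train(self, samples, rate=0.1, epochs=20):
        # Perceptron learning rule: whenever the prediction is wrong,
        # nudge each weight by rate * error * input (and the bias by rate * error).
        for _ in range(epochs):
            for inputs, target in samples:
                error = target - self.output(inputs)
                self.weights = [w + rate * error * x
                                for w, x in zip(self.weights, inputs)]
                self.bias += rate * error

# Supervised learning paradigm: teach the node the logical AND function
# from labeled (inputs, target) samples.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
node = Node(2)
node.train(and_samples)
```

Because AND is linearly separable, the perceptron rule is guaranteed to converge here; swapping in a different rule or architecture (e.g., gradient descent on a multi-layer network) changes the learning behavior without changing the basic node structure.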
Recently, many “experts” have been predicting that the first salvo in the robot revolution will come when robots begin stealing jobs from humans. The Telegraph even reported back in February that within 30 years robots will have taken over most jobs, leading to unemployment rates of over 50%. Last week, the bots fired the metaphorical first shot across humanity’s bow when it was announced that law firm Baker & Hostetler had hired ROSS, the world’s first artificially intelligent attorney. While prognosticators, pundits, and Luddites alike all agreed that this was evidence of an impending sea change coming to the job market, auto workers everywhere just shook their heads and welcomed the soon-to-be-displaced to the world they’ve been living in since the 1960s.
Google announced yesterday that they are open-sourcing SyntaxNet, their natural language understanding (NLU) neural network framework. As an added bonus, and proof that, unlike Britain’s Natural Environment Research Council, Google has a sense of humor, they are also throwing in Parsey McParseface, their pre-trained model for analyzing English text. Users are, of course, able to train their own models, but Google is touting Parsey McParseface as the “most accurate such model in the world.” So if you want to dive right into parsing text and extracting meaning, Parsey McParseface would be the ideal place to start.