Nov 17, 2017
 
Stephen Kleene


Stephen Cole Kleene was an American mathematician whose groundbreaking work in the sub-field of logic known as recursion theory laid the groundwork for modern computing. While most computer programmers might not know his name or the significance of his work on computable functions, I am willing to bet that anyone who has ever dealt with regular expressions is intimately familiar with an indispensable operator that resulted directly from his work and even bears his name: the *, or as it is formally known, the Kleene star.

While his contributions to computer science in general cannot be overstated, Kleene also authored a theorem that plays an important role in artificial intelligence, specifically the branch known as natural language processing, or NLP for short. Kleene’s Theorem relates regular languages, regular expressions, and finite state automata (FSAs). In short, he proved that regular expressions and finite state automata are equivalent: they are just two different representations of any given regular language.
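The equivalence is easy to see in miniature. Here is a small Python sketch (my own illustration, not code from the post) that checks a regular expression and a hand-built finite state automaton against each other for the language (ab)*:

```python
import re

# A regular language: zero or more repetitions of "ab".
# Representation 1: a regular expression (note the Kleene star).
pattern = re.compile(r"^(ab)*$")

# Representation 2: an equivalent finite state automaton, written as a
# transition table. State 0 is the start state and the only accepting
# state. A missing entry means the machine rejects the input.
transitions = {
    (0, "a"): 1,
    (1, "b"): 0,
}

def fsa_accepts(s):
    """Run the FSA over s; accept iff we halt in state 0."""
    state = 0
    for ch in s:
        state = transitions.get((state, ch))
        if state is None:
            return False
    return state == 0

# Kleene's Theorem guarantees the two representations agree.
for s in ["", "ab", "abab", "ababab", "a", "ba", "abb", "aab"]:
    assert (pattern.match(s) is not None) == fsa_accepts(s)
```

Kleene's proof is constructive in both directions: any regular expression can be compiled into such a transition table, and any finite automaton can be summarized as a regular expression.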
Continue reading »

Nov 09, 2017
 

Strings

As a computer programmer for more than a quarter of a century, I don’t think I have ever thought much about strings. I knew the basics. In every language I’d worked with, strings were a data type unto themselves. Superficially they are a sequence of characters, but behind the scenes, computers store and manipulate them as arrays of one or more binary bytes. In programs, they can be stored in variables or constants, and often show up in source code as literals, i.e., fixed, quoted values like “salary” or “bumfuzzle.” (That is my new favorite word, btw.) Outside of occasionally navigating the subtleties of encoding and decoding them, I never gave strings a second thought.
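A quick Python sketch (mine, not the post’s) makes the character/byte distinction concrete:

```python
# A string is superficially a sequence of characters; encoding makes
# the underlying bytes explicit.
word = "bumfuzzle"

# Every character of this ASCII word maps to exactly one UTF-8 byte.
encoded = word.encode("utf-8")
assert len(encoded) == len(word)

# Non-ASCII characters may need more than one byte per character,
# which is where the subtleties of encoding and decoding creep in.
accented = "café"
assert len(accented) == 4                    # four characters...
assert len(accented.encode("utf-8")) == 5    # ...but five bytes

# Decoding reverses the process and recovers the original string.
assert encoded.decode("utf-8") == word
```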

Even when I first dipped my toe into the waters of natural language processing, aka NLP (not to be confused with the quasi-scientific neuro-linguistic programming, which unfortunately shares the same acronym), I still really only worked with strings as whole entities: words or affixes. As I made my way through familiarizing myself with existing NLP tools, I didn’t have to dive any deeper than that. It was only when I started programming my own tools from the ground up that I learned about the very formal mathematics behind strings and their relationship to sets and set theory. This post will be an attempt to explain what I learned.

Continue reading »

Sep 27, 2017
 

Robot holding human skull

From the Vienna Biennale 2017, taking place this week in Austria, comes a new take on Isaac Asimov’s Three Laws of Robotics. The head of the project, Christoph Thun-Hohenstein, says the update was necessitated by:

…the need for benign intelligent robots and the necessity of cultivating a culture of quality committed to serving the common good!

That sounds a lot like Asimov’s reasoning, but the new laws are certainly worthy of consideration and debate.

Continue reading »

Jun 09, 2017
 

Boston Dynamics

Boston Dynamics, the MIT spin-off and self-proclaimed maker of “nightmare-inducing robots”, has been sold by its parent company Alphabet (aka Google) to the Japanese tech behemoth SoftBank. No specifics regarding the price or the terms of the sale have been announced, which is not surprising given that we still don’t know how much Google paid for the company when it purchased it four years ago.

Continue reading »

Feb 15, 2017
 

SWI-Prolog Logo

I know that this post will probably be of interest to about a dozen people worldwide, and even those few may be disappointed by it. Because the official SWI-Prolog packages are not always kept up to date, and because compiling and installing SWI-Prolog from source is quick and straightforward, building from source is the recommended way to do it on Linux and other *nix systems.

If you are looking for tips, tricks, or assistance with an installation problem, you likely won’t find it here. The instructions provided on the SWI-Prolog site for building and installing SWI-Prolog from source code “just worked” for me. Nevertheless, I want to document what I did, and if you are looking for the CliffsNotes version, then by all means, read on.

Continue reading »

Nov 28, 2016
 

Minority Report

Set 38 years in the future, the plot of 2002’s blockbuster film Minority Report revolves around Washington DC’s PreCrime unit, a police force that is able to stop future murders with the aid of three mutant humans who can predict homicides before they happen. Minority Report managed to sidestep the “psychic predicts a murder” cliché storyline with its innovative use of technology: not only could the precogs predict future murders, but their visions could be streamed via a neural bridge in the form of a video that the police officers could watch. Fantastical? Nope, and researchers from MIT already have a jump on the technology.

Continue reading »

Jun 27, 2016
 

Research at Google

Ever since their introduction nearly seventy-five years ago, Isaac Asimov‘s Three Laws of Robotics have been the de facto rules governing the acceptable behavior of robots. Even the uninitiated and uninterested are likely to say they know of them, even if they can’t recite a single rule verbatim. When conceived, the Three Laws were nothing but a thought experiment wrapped in a science fiction story, but now the dizzying pace of developments in the fields of robotics and AI has spurred engineers and ethicists to reinvestigate and rewrite the guidelines by which artificially intelligent entities should operate. Who better to take the lead in this initiative than Google, the company that just yesterday announced that machine learning will be at the core of everything it does?

Continue reading »

Jun 23, 2016
 

South Korean scientists from the Department of Materials Science and Engineering at Pohang University of Science and Technology appear to have cleared the largest obstacle to the feasibility of building brain-like computers: power consumption. In their paper “Organic core-sheath nanowire artificial synapses with femtojoule energy consumption,” published in the June 17th edition of Science Advances, the researchers describe how they used organic nanowires (ONWs) to build synaptic transistors (STs) whose power consumption is almost one-tenth that of the real thing.

Continue reading »

May 19, 2016
 

Google’s Tensor Processing Unit board
source: Google

In a post on their Google Cloud Platform Blog yesterday, the Alphabet company announced that it has built its own integrated circuit (IC), designed from the ground up with only one application in mind: machine learning. Developed in secret, the Tensor Processing Unit board, or TPU for short, has already been deployed internally at Google for over a year, accelerating the computational power behind some of their most popular products, including Search and Maps.

Continue reading »