Oct 12, 2017
 

Boolean functions, sometimes also called switching functions, are functions that take as their input zero or more Boolean values (1 or 0, true or false, etc.) and output a single Boolean value. The number of inputs to the function is called the arity of the function and is denoted k. Every k-ary Boolean function can be written as a propositional formula, a sentence in propositional logic. A binary Boolean function, a Boolean function with two arguments, can be described by one of sixteen canonical formulas.
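To make the sixteen concrete, here is a quick Python sketch (my own illustration, not code from the post) that enumerates every binary Boolean function as a four-row truth table:

```python
from itertools import product

# A binary Boolean function is fully determined by its output on the four
# possible input pairs, so there are exactly 2**4 = 16 such functions.
pairs = list(product([0, 1], repeat=2))

for n in range(16):
    # Read the four output bits of function number n off the bits of n.
    outputs = [(n >> i) & 1 for i in range(4)]
    row = ", ".join(f"f({a},{b})={out}" for (a, b), out in zip(pairs, outputs))
    print(f"function {n:2}: {row}")
```

Familiar functions show up in the list: under this (arbitrary) numbering, function 8 is AND, function 14 is OR, and function 6 is XOR.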

Continue reading »

Jul 16, 2017
 

An arithmetic sequence, sometimes alternatively called an arithmetic progression, is a sequence of numbers in which the difference between every pair of consecutive numbers is constant. A very simple arithmetic sequence is the natural numbers: 1, 2, 3, 4, …, where the difference between any number and the number before it is just one. 3, 7, 11, 15, 19, … is another arithmetic sequence, but in this case the constant difference between elements is four.

A finite portion of an arithmetic sequence like 2, 3, 4 or 7, 11, 15 is called a finite arithmetic progression. To confuse matters, sometimes a finite arithmetic progression, like an arithmetic sequence, is also called an arithmetic progression. To be safe, when a progression is finite, I always say as much.

An arithmetic series is the sum of a finite arithmetic progression.  An arithmetic series consisting of the first four natural numbers is 1 + 2 + 3 + 4. The sum, 10, is trivial to compute via simple addition, but for a longer series with larger numbers, having a formula to calculate the sum is indispensable.
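As a preview of that formula, here is a minimal Python sketch (my own, assuming the standard closed form S = n(a_1 + a_n)/2):

```python
def arithmetic_series_sum(first, last, n):
    """Sum of a finite arithmetic progression of n terms, using the
    closed form S = n * (first + last) / 2."""
    return n * (first + last) // 2   # n * (first + last) is always even here

# The example series above, 1 + 2 + 3 + 4:
print(arithmetic_series_sum(1, 4, 4))      # 10
# A longer series: 3 + 7 + 11 + ... + 399 (100 terms, common difference 4)
print(arithmetic_series_sum(3, 399, 100))  # 20100
```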

Continue reading »

Jan 29, 2017
 

Abraham Lincoln once famously said, “Everybody loves a compliment.” I suspect that if he had been a mathematician he would have loved complements, too. We’ve already seen what complements are and talked about the two most common: the radix complement and the diminished radix complement. Now it’s time to explore how we can leverage complements to do some really interesting integer arithmetic. Using complements, we can subtract one positive integer from another, or add a negative integer to a positive one, by simply performing addition with two positive integers. The algorithm behind this black magic is called the Method of Complements.
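As a taste of what's coming, here is a minimal Python sketch of my own (not the post's code) of subtracting by adding, using the ten's complement in base 10:

```python
def subtract_via_complement(a, b, digits=4):
    """Compute a - b (for 0 <= b <= a < 10**digits) using only addition:
    add the ten's complement of b to a, then drop the carry digit."""
    # Nines' complement: replace each digit d of b with 9 - d
    # (a simple digit-by-digit lookup, no borrowing required).
    nines = int("".join(str(9 - int(d)) for d in f"{b:0{digits}d}"))
    tens_complement = nines + 1
    total = a + tens_complement       # addition only
    return total % (10 ** digits)     # dropping the carry out of the top digit

print(subtract_via_complement(1234, 456))  # 778, the same as 1234 - 456
```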

Continue reading »

Jan 10, 2017
 

In my last post about binary signed integers, I introduced the ones complement representation. At the time, I said that the ones complement is found by taking the bitwise complement of the number. My explanation of how to do this was simple: invert each bit, flipping 1 to 0 and vice versa. While it’s true that this is all you need to know in order to determine the ones complement of a binary number, if you want to understand how computers do arithmetic with signed integers, and why they represent them the way they do, then you need to understand what complements are and how the method of complements allows computers to subtract one integer from another, or add a positive and a negative integer, using nothing but addition of positive integers.
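For reference, here is the one-liner at the heart of it, as a Python sketch of my own (the 8-bit width is assumed purely for illustration):

```python
def ones_complement(x, bits=8):
    """Invert every bit of x within a fixed width -- equivalent to
    XOR against a mask of all ones."""
    mask = (1 << bits) - 1
    return x ^ mask

x = 0b01011010                          # +90 as an 8-bit pattern
print(f"{ones_complement(x):08b}")      # 10100101, -90 in ones complement
```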

Continue reading »

Jul 26, 2016
 

In the last post, we saw that one of the major failings of the signed magnitude representation was that addition and subtraction could not be performed on the same hardware used for unsigned integers. As I pointed out, the reason is that negating a number in signed magnitude does not yield the additive inverse of that number. The ones complement representation eliminates this issue, although it introduces new, subtle issues of its own, and [spoiler] doesn’t address the problem of having two representations for zero.
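To see the fix in action, here is a small Python sketch of my own (a 4-bit width, chosen for illustration) of ones complement addition, including the end-around carry that lets it run on nearly the same adder hardware:

```python
def ones_complement_add(a, b, bits=4):
    """Add two ones complement bit patterns: do a plain unsigned add,
    then fold any carry out of the top bit back in (end-around carry)."""
    mask = (1 << bits) - 1
    total = a + b
    if total > mask:                   # carry out of the top bit?
        total = (total & mask) + 1     # end-around carry
    return total & mask

plus7, minus3 = 0b0111, 0b1100         # 4-bit ones complement patterns
print(f"{ones_complement_add(plus7, minus3):04b}")  # 0100 == +4
```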

Continue reading »

May 02, 2016
 


From the orbits of celestial bodies, to cars hurtling around a racetrack, to electrons zipping around the nuclei of atoms, examples of objects in circular motion can be found in a wide variety of scales and speeds. This post is about the generic case: a point particle moving at a constant speed along the circumference of a circle. This is known as uniform circular motion.
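As a small illustration (my own Python sketch; the radius and angular speed are made up), the particle's position at time t follows directly from the angle θ = ωt:

```python
import math

def position(radius, angular_speed, t):
    """Position of a particle in uniform circular motion at time t,
    starting on the positive x-axis; the angle swept is theta = omega * t."""
    theta = angular_speed * t
    return radius * math.cos(theta), radius * math.sin(theta)

r, omega = 2.0, math.pi / 4        # radius 2, one revolution every 8 seconds
for t in range(0, 9, 2):
    x, y = position(r, omega, t)
    print(f"t={t}s  x={x:+.2f}  y={y:+.2f}")
```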

Continue reading »

Mar 09, 2016
 

This is going to be another one of my “selfish” posts – written primarily for me to refer back to in the future and not because I believe it will benefit anyone other than me. The idea is one that I always took for granted but had a hard time proving to myself once I decided to try.

Theorem: Suppose we have an M-bit unsigned binary integer with value A. Consider its N least significant bits, taken on their own as a value B (N ≤ M). Then:

A \equiv B \pmod{2^N}

Put another way, arithmetic with unsigned binary integers of a fixed length N is always performed modulo 2^N.
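A quick numeric check (a throwaway Python sketch of my own) makes the claim concrete: keeping only the N low-order bits of a value is exactly reduction modulo 2^N:

```python
N = 4
modulus = 1 << N          # 2**N
mask = modulus - 1        # N low-order one bits

# Masking off all but the N least significant bits agrees with mod 2^N
# for every value tested:
assert all((a & mask) == (a % modulus) for a in range(100_000))
print("low", N, "bits == value mod", modulus)
```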
Continue reading »

Feb 01, 2016
 

I previously discussed the signed magnitude solution to representing signed integers as binary strings and pointed out that while it has the advantage of being simple, it also has some disadvantages. For starters, N-bit signed magnitude integers have two representations for zero: positive zero (a bitstring of N zeros) and negative zero (a bitstring with a one followed by N-1 zeros).

There is another significant disadvantage that isn’t obvious until you try to implement the signed magnitude representation in silicon. Specifically, you can’t do arithmetic with signed magnitude integers on the same hardware that’s used for unsigned integers.
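To make that concrete, here is a small Python sketch of my own (4-bit patterns, chosen for illustration) showing an unsigned adder mangling signed magnitude operands:

```python
def sm_decode(pattern, bits=4):
    """Interpret a bit pattern as a signed magnitude integer:
    the top bit is the sign, the remaining bits are the magnitude."""
    sign = (pattern >> (bits - 1)) & 1
    magnitude = pattern & ((1 << (bits - 1)) - 1)
    return -magnitude if sign else magnitude

plus3, minus3 = 0b0011, 0b1011          # +3 and -3 in signed magnitude
raw_sum = (plus3 + minus3) & 0b1111     # what an unsigned adder produces
print(sm_decode(raw_sum))               # -6, not the 0 we wanted
```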

Continue reading »

Jan 14, 2016
 

If you’ve got the word “power” in your name, you’d better believe expectations are going to be sky high for what you can do. The Power Rule in calculus brings it and then some.

The Power Rule, probably the most used rule when differentiating, gives us a drop-dead simple way to differentiate polynomials. Specifically, it says that for any polynomial term x raised to the power n, with coefficient a:

\frac{d}{dx}ax^n = nax^{n-1}, \qquad n \neq 0

Apply this to every term in your polynomial, and you’ve got its derivative! Easy peasy. Let’s prove it.
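Before the proof, here is a tiny Python sketch of my own showing the rule applied term by term (polynomials represented as an {exponent: coefficient} dict, purely for illustration):

```python
def differentiate(poly):
    """Differentiate a polynomial given as {exponent: coefficient},
    applying d/dx a*x^n = n*a*x^(n-1) to each term."""
    return {n - 1: n * a for n, a in poly.items() if n != 0}

# p(x) = 3x^4 + 2x^2 + 5  ->  p'(x) = 12x^3 + 4x
p = {4: 3, 2: 2, 0: 5}
print(differentiate(p))   # {3: 12, 1: 4}
```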

Continue reading »

Jan 12, 2016
 

It is no big secret that exponentiation is just multiplication in disguise. It is a shorthand way to write an integer multiplied by itself some number of times, and the space savings grow as the exponent gets larger. In the same vein, a serious problem with calculating numbers raised to exponents is that the results very quickly become extremely large as the exponent increases in value. The following rule provides a great computational advantage when doing modular exponentiation.

The rule for doing exponentiation in modular arithmetic is:

a^b \bmod{c} = ((a \bmod{c})^b) \bmod{c}

This states that if we take an integer a, raise it to an integer power b, and calculate the result modulo c, we will get the same result as if we had taken a modulo c first, raised that to b, and calculated the value modulo c.
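A quick check in Python (my own sketch) shows the rule holding, and hints at why it matters computationally; the built-in three-argument pow exploits exactly this idea:

```python
a, b, c = 12345, 678, 97

lhs = (a ** b) % c          # reduce only at the end: huge intermediate value
rhs = ((a % c) ** b) % c    # reduce the base first: much smaller intermediate
print(lhs == rhs)           # True

# pow(a, b, c) applies the reduction after every multiplication,
# so intermediate values never grow beyond about c**2.
print(pow(a, b, c) == lhs)  # True
```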

Continue reading »