Abraham Lincoln once famously said, “Everybody loves a compliment.”  I suspect that if he had been a mathematician he would have loved complements, too. We’ve already seen what complements are and talked about the two most prevalent: the radix complement and the diminished radix complement. Now it’s time to explore how we can leverage complements to do some really interesting integer arithmetic. Using complements we can subtract one positive integer from another, or add a negative integer to a positive one, by simply performing addition with two positive integers. The algorithm behind this black magic is called the Method of Complements.
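To make that concrete, here is a minimal sketch of the method of complements in base ten (the function names are my own, not from the post): to compute a − b with four-digit numbers, we add a to the ten's complement of b and drop the carry out of the top digit.

```python
# Sketch: subtraction via the method of complements, base 10, 4 digits.
DIGITS = 4
MODULUS = 10 ** DIGITS  # 10000

def tens_complement(x):
    """Radix (ten's) complement of x for a fixed number of digits."""
    return MODULUS - x

def subtract_by_adding(a, b):
    """Compute a - b (for a >= b) using only addition plus a dropped carry."""
    return (a + tens_complement(b)) % MODULUS

print(subtract_by_adding(8293, 1547))  # 6746, same as 8293 - 1547
```

Dropping the carry is just reduction modulo 10000, which is exactly what fixed-width hardware does for free.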

In my last post about binary signed integers, I introduced the ones complement representation. At the time, I said that the ones complement is found by taking the bitwise complement of the number: invert each bit, flipping 1 to 0 and vice versa. That is all you need to know to determine the ones complement of a binary number. But if you want to understand how computers do arithmetic with signed integers, and why they represent them the way they do, you need to understand what complements are and how the method of complements allows computers to subtract one integer from another, or add a positive and a negative integer, by doing addition with only positive integers.
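Bit inversion for a fixed width can be written as a single XOR against a mask of ones; a quick sketch (my own illustration, not from the post):

```python
# Sketch: the ones complement of an N-bit number is found by flipping
# every bit, which is the same as XOR-ing with a mask of N ones.
N = 8

def ones_complement(x, n=N):
    return x ^ ((1 << n) - 1)  # flip all n bits

x = 0b01011010                      # 90
print(bin(ones_complement(x)))      # 0b10100101, every bit inverted
```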

In the last post, we saw that one of the major failings of the signed magnitude representation was that addition and subtraction could not be performed on the same hardware as for unsigned integers. As I pointed out, the reason is that negating a number in signed magnitude does not yield the additive inverse of that number. The ones complement representation eliminates this issue, although it does introduce new, subtle issues, and [spoiler] it doesn’t address the problem of having two representations for zero.
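Why ones complement behaves like an additive inverse can be checked directly: for any N-bit x, adding x to its bitwise complement always gives a string of N ones, i.e. 2^N − 1, which is congruent to zero modulo 2^N − 1. A quick check (my own sketch):

```python
# Sketch: x plus its ones complement is always 0b11111111 for 8-bit x,
# i.e. 2**N - 1, which is "zero" modulo 2**N - 1.
N = 8
for x in range(2 ** N):
    flipped = x ^ ((1 << N) - 1)
    assert x + flipped == 2 ** N - 1
print("x + ~x == 2**N - 1 for every 8-bit x")
```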

From the orbits of celestial bodies, to cars hurtling around a racetrack, to electrons zipping around the nuclei of atoms, examples of objects in circular motion can be found in a wide variety of scales and speeds. This post is about the generic case: a point particle moving at a constant speed along the circumference of a circle. This is known as uniform circular motion.
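As a brief sketch of the standard kinematics (textbook results, not specific to this post): a particle in uniform circular motion of radius $r$ and angular speed $\omega$ can be described by

```latex
\begin{aligned}
\vec{r}(t) &= \bigl(r\cos\omega t,\; r\sin\omega t\bigr) \\
|\vec{v}(t)| &= r\omega \quad \text{(constant speed)} \\
|\vec{a}(t)| &= r\omega^2 \quad \text{(directed toward the center)}
\end{aligned}
```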

This is going to be another one of my “selfish” posts – written primarily for me to refer back to in the future and not because I believe it will benefit anyone other than me. The idea is one that I always took for granted but had a hard time proving to myself once I decided to try.

Theorem: Suppose we have an M-bit unsigned binary integer with value A. Consider its first (least significant) N bits, with value B. Then:

B = A mod 2^N

Put another way, arithmetic with unsigned binary integers of a fixed length N is always performed modulo 2^N.
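The theorem can be checked in a couple of lines: masking off the low N bits and reducing modulo 2^N give the same value (my own sketch, using a 12-bit example):

```python
# Sketch: keeping only the least significant N bits of a wider value
# is the same as reducing it modulo 2**N.
N = 8
A = 0b1101_0111_0011        # a 12-bit value, 3443 in decimal
B = A & ((1 << N) - 1)      # keep just the low 8 bits

assert B == A % (2 ** N)
print(B)  # 115, i.e. 0b0111_0011
```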

I previously discussed the signed magnitude solution to representing signed integers as binary strings and pointed out that while it had the advantage of being simple, it also has some disadvantages. For starters, N-bit signed magnitude integers have two representations for zero: positive zero (a bitstring with N zeros) and negative zero (a bitstring with a one followed by N-1 zeros).

There is another significant disadvantage that isn’t obvious until you try to implement signed magnitude representation in silicon. Specifically, you can’t do mathematics with signed magnitude integers using the same hardware as is used for unsigned integers.
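To see the hardware problem, imagine feeding signed magnitude bit patterns through a plain unsigned adder. A hypothetical 4-bit illustration (encoding helper is my own):

```python
# Sketch: 4-bit signed magnitude patterns run through an ordinary
# unsigned adder give the wrong answer.
BITS = 4

def encode_sm(v):
    """Signed magnitude: sign bit followed by the 3-bit magnitude."""
    sign = 0b1000 if v < 0 else 0
    return sign | abs(v)

# We want 3 + (-3) == 0, but the adder just sums the bit patterns:
a, b = encode_sm(3), encode_sm(-3)   # 0b0011 and 0b1011
total = (a + b) % (1 << BITS)        # what unsigned hardware produces
print(bin(total))  # 0b1110, which decodes to -6, not 0
```

The adder produces a sign bit of 1 with magnitude 6, so signed magnitude needs separate circuitry that inspects the sign bits first.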

If you’ve got the word “power” in your name, you’d better believe expectations are going to be sky high for what you can do. The Power Rule in calculus brings it and then some.

The Power Rule, probably the most used rule when differentiating, gives us a drop-dead simple way to differentiate polynomials. Specifically, it says that for any polynomial term x raised to the power n with coefficient a:

(1)   d/dx (a·x^n) = a·n·x^(n-1)

Apply this to every term in your polynomial, and you’ve got its derivative! Easy peasy. Let’s prove it.
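A sketch of the standard limit-definition proof for a positive integer $n$ (the binomial theorem does the heavy lifting; the constant coefficient $a$ rides along by linearity):

```latex
\begin{aligned}
\frac{d}{dx}\,x^n
  &= \lim_{h\to 0}\frac{(x+h)^n - x^n}{h} \\
  &= \lim_{h\to 0}\frac{n x^{n-1} h + \binom{n}{2} x^{n-2} h^2 + \cdots + h^n}{h} \\
  &= \lim_{h\to 0}\left( n x^{n-1} + \binom{n}{2} x^{n-2} h + \cdots + h^{n-1} \right)
   = n x^{n-1}
\end{aligned}
```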

It is no big secret that exponentiation is just multiplication in disguise. It is a shorthand way to write an integer multiplied by itself repeatedly, and it saves more space the larger the exponent becomes. In the same vein, a serious problem with calculating numbers raised to exponents is that they very quickly become extremely large as the exponent increases in value. The following rule provides a great computational advantage when doing modular exponentiation.

The rule for doing exponentiation in modular arithmetic is:

(a^b) mod n = ((a mod n)^b) mod n

This states that if we take an integer a, raise it to an integer power b, and calculate the result modulo n, we will get the same result as if we had taken a modulo n first, raised it to b, and calculated that result modulo n.
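A quick sketch of the rule in action (my own numbers): reducing the base modulo n before exponentiating keeps the intermediate values small but gives the same answer.

```python
# Sketch of the modular exponentiation rule: reduce the base first.
a, b, n = 12345, 6, 7

big = (a ** b) % n            # exponentiate first, then reduce once
small = ((a % n) ** b) % n    # reduce first, then exponentiate
assert big == small
print(big)  # 1
```

Python's built-in `pow(a, b, n)` applies this kind of reduction at every step, which is why it can handle enormous exponents without ever building the full power.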

The Complete Idiot's Guide to Calculus
Author: W. Michael Kelley
Category: Mathematics
Publisher: Penguin
Year: 2006
Pages: 336

Let's face it: most students don't take calculus because they find it intellectually stimulating. It's not . . . at least for those who come up on the wrong side of the bell curve! There they are, minding their own business, working toward some non-science related degree, when . . . BLAM! They get next semester's course schedule in the mail, and first on the list is the mother of all loathed college courses . . . CALCULUS! Not to fear: The Complete Idiot's Guide to Calculus, Second Edition, like its predecessor, is a curriculum-based companion book created with this audience in mind. This new edition continues the tradition of taking the sting out of calculus by adding more explanatory graphs and illustrations and doubling the number of practice problems! By the time readers are finished, they will have a solid understanding (maybe even a newfound appreciation) for this useful form of math. And with any luck, they may even be able to make sense of their textbooks and teachers.

I must stay focused. I must stay focused. I must stay … I wonder what’s new on Facebook.

I don’t really feel like writing this post, mostly because I know it will be very similar to the other two I have already done: the modular addition rule proof and the modular subtraction rule proof. But my New Year’s resolution is to follow things through to completion. Well, that would’ve been my New Year’s resolution if I had made one. Either way, it’s back to modular arithmetic.

The rule for doing multiplication in modular arithmetic is:

(a · b) mod n = ((a mod n) · (b mod n)) mod n

This says that if we multiply integer a by integer b and take the product modulo n, we get the same answer as if we had first taken a modulo n, multiplied it by b modulo n, and taken that product modulo n.
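A short sketch of the rule with concrete numbers (my own example): reducing each factor first gives the same result as reducing the full product.

```python
# Sketch of the modular multiplication rule: reduce each factor first.
a, b, n = 9876, 5432, 13

direct = (a * b) % n                    # multiply first, reduce once
reduced = ((a % n) * (b % n)) % n       # reduce each factor, then multiply
assert direct == reduced
print(direct)  # 8
```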