Binary Signed Integers – Signed Magnitude

We humans and our meat computers don’t have any trouble recognizing the sign of a number. If there is a minus sign, “-,” in front of a number, that number is negative. If a number is prefixed by a plus sign, “+,” or, the more likely case, has no prefix at all, then the number is positive.

Computers of the silicon kind don’t have it so easy. They don’t have the luxury of pluses and minuses to tell them the sign of a number. All they have are zeros and ones, the alphabet of binary systems. So what is their solution?

Since the introduction of digital computing, researchers, mathematicians, and nerds with nothing better to do have debated the best way to represent negative numbers within the limitations imposed by binary encoding. For signed integers, three representations rose above the din, and have since established themselves as the de facto encodings. None of them is perfect. Each has its own strengths and weaknesses. By examining each representation, we can get a feel for the scenarios in which using one may be preferential to the others. More importantly, understanding how signed numbers are encoded gives us invaluable insight into how microprocessors work.

Signed Magnitude Representation

This is also sometimes known as the “sign and magnitude” representation. It is the simplest of the representations, and not to belittle those who came up with it, but this one seems like an obvious solution.

Ordinary unsigned integers are represented by a string of ones and zeros. The smallest unit of addressable memory is traditionally the byte, or 8 bits. The unsigned integers a single byte can represent range from 00000000 to 11111111.

00000000 is obviously decimal 0 or 0_{10}.

11111111 is 2^7 + 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = decimal 255, or 255_{10}.

So a single byte can encode 256 decimal integers: 0_{10} to 255_{10}.
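As a quick illustration, here is a small C sketch (the function name is just for this example) that interprets an 8-bit pattern as an unsigned integer by summing the powers of two for each bit that is set:

```c
#include <stdio.h>
#include <stdint.h>

/* Interpret an 8-bit pattern as an unsigned integer by summing
 * the powers of two for each set bit. */
unsigned unsigned_value(uint8_t bits)
{
    unsigned value = 0;
    for (int i = 7; i >= 0; i--) {
        if (bits & (1u << i)) {
            value += 1u << i;   /* add 2^i */
        }
    }
    return value;
}

int main(void)
{
    printf("%u\n", unsigned_value(0x00)); /* 00000000 -> 0   */
    printf("%u\n", unsigned_value(0xFF)); /* 11111111 -> 255 */
    return 0;
}
```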

Figuring that the sign of a number was itself just a binary state, i.e., either positive or negative, the creators of the signed magnitude representation went with what they knew: they chose to use one bit, in this case the leftmost, most significant bit (MSB), to denote the sign of the number, and the remaining bits continued to represent the magnitude of the number as before. Just to shake things up a little, they picked 1 to represent a negative sign and 0 to represent a positive sign.

With this representation, a single byte can encode values from 11111111 (the most negative) to 01111111 (the most positive). With only 7 bits left for the magnitude, the largest magnitude that can be represented is 127_{10}, so the range of integers covered by a single byte is -127_{10} to +127_{10}.
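A minimal sketch of decoding a signed-magnitude byte (the function name and approach here are illustrative, not how any particular processor implements it): the MSB gives the sign, and the lower 7 bits give the magnitude.

```c
#include <stdio.h>
#include <stdint.h>

/* Decode an 8-bit signed-magnitude pattern:
 * bit 7 is the sign (1 = negative), bits 6..0 are the magnitude. */
int sm_decode(uint8_t bits)
{
    int magnitude = bits & 0x7F;        /* lower 7 bits: 0..127 */
    return (bits & 0x80) ? -magnitude : magnitude;
}

int main(void)
{
    printf("%d\n", sm_decode(0xFF)); /* 11111111 -> -127 */
    printf("%d\n", sm_decode(0x7F)); /* 01111111 -> +127 */
    return 0;
}
```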

A more subtle, and slightly insidious, side effect of this representation is that there are now two ways to encode the number 0: 10000000 and 00000000. We now have both positive and negative zero.
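To see why this is insidious, here is a short check (reusing the illustrative decoder from above so the snippet stands alone): the two zero patterns differ as raw bytes even though they represent the same number, so comparing bit patterns directly is not enough and zero has to be special-cased.

```c
#include <stdio.h>
#include <stdint.h>

/* Same illustrative signed-magnitude decoder as before. */
int sm_decode(uint8_t bits)
{
    int magnitude = bits & 0x7F;
    return (bits & 0x80) ? -magnitude : magnitude;
}

int main(void)
{
    uint8_t pos_zero = 0x00;  /* 00000000 -> +0 */
    uint8_t neg_zero = 0x80;  /* 10000000 -> -0 */

    /* The raw bit patterns are different... */
    printf("patterns equal? %s\n", pos_zero == neg_zero ? "yes" : "no"); /* no */

    /* ...yet both decode to the same number, zero. */
    printf("values equal?   %s\n",
           sm_decode(pos_zero) == sm_decode(neg_zero) ? "yes" : "no");   /* yes */
    return 0;
}
```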

Early computers used this representation for integers, but most have moved on to those that follow. Signed magnitude is still often used for floating point numbers, however.
