Talk:Two's complement


Division

This page covers addition, subtraction and multiplication. But what about division? (restoring/non-restoring) —Preceding unsigned comment added by 128.54.69.212 (talkcontribs) 18:50, 7 February 2007

It's in the to-do list. Have at it. Dicklyon 01:08, 8 February 2007 (UTC)[reply]

I suppose it should be here, but all that I know of don't do it in two's complement. It is usual to divide the magnitude and apply the appropriate sign to the result. Partly that is because that is what has been expected since the days of sign magnitude machines. Fortran, for example, originated on the sign magnitude 704. Gah4 (talk) 20:59, 7 May 2015 (UTC)[reply]

More specifically, the sign of the remainder depends on which way you do it. Gah4 (talk) 20:03, 15 February 2017 (UTC)[reply]
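
Not about restoring vs. non-restoring as such, but the remainder-sign point above can be illustrated with a minimal C sketch (C99 semantics, which truncate the quotient toward zero; the example operands are arbitrary):

#include <stdio.h>

int main(void) {
    /* C99 (and most two's-complement divide instructions) truncate the
       quotient toward zero, so the remainder takes the sign of the
       dividend; a floor-style division would instead give the remainder
       the sign of the divisor. */
    printf("%d %d\n", -7 / 2, -7 % 2);   /* prints -3 -1 */
    printf("%d %d\n", 7 / -2, 7 % -2);   /* prints -3  1 */
    return 0;
}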

Weighting of LSB is assumed to be 1 for this article.

The weight of the least significant bit (LSB) is often assumed in this article to be 1, which is not a problem for addition, subtraction, and multiplication. However, the value 0111 in a two's complement system can represent 7, 7/8, or 7/4, for LSB weights of 1, 1/8, and 1/4 respectively. This *usually* does not matter for many operators; however, it does in fact matter for some operators. E.g., the square root operator used on 0100 would produce 0010 if the weight of the LSB is 1, but would produce approximately 0101 (truncation) or 0110 (rounding) if the weight of the LSB is 1/8. Tata2007 18:11, 3 October 2007 (UTC)[reply]
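
For illustration, the square-root example above can be checked numerically; a minimal C sketch, using the 4-bit pattern and the 1/8 LSB weight from the comment:

#include <math.h>
#include <stdio.h>

int main(void) {
    /* The 4-bit pattern 0100 read with LSB weight 1 is the integer 4. */
    printf("LSB weight 1  : sqrt(4)   = %d -> 0010\n", (int)sqrt(4.0));
    /* With LSB weight 1/8 the same pattern means 4/8 = 0.5, and
       sqrt(0.5) is about 0.707, i.e. about 5.66 eighths. */
    double eighths = sqrt(4.0 / 8.0) * 8.0;
    printf("LSB weight 1/8: sqrt(0.5) = %.3f eighths -> 0101 truncated, 0110 rounded\n",
           eighths);
    return 0;
}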

It is always much easier to discuss the two's-complement representation of integers, but you're right that a note somewhere about how fixed-point fractions and floating-point mantissas use these integers with a power-of-two multiplier, whether implicit or explicit, would be worthwhile. Dicklyon 21:11, 3 October 2007 (UTC)[reply]
While many people understand decimal fractions, not so many understand such fractions in other radices. As with decimal, it can be generalized once the integer representation is understood. There could be a whole article on scaled binary (and other radix) arithmetic. PL/I is one of the few languages that can do this. Gah4 (talk) 21:32, 14 September 2018 (UTC)[reply]
Recent edits remind me of this. I believe that the IBM descriptions for S/360 and successors don't restrict them to integers, but generically fixed point values, where one must keep track of the radix point. Note that most processors generate a double length product, and accept a double length dividend, both necessary when the radix point isn't immediately to the right of the LSD. Few computer languages (PL/I being one that I have used) directly support fixed point with a moved radix point. (PL/I allows it to move either direction.) Since INTEGER in Fortran, and later int in C, have reinforced this distinction, it is not commonly taught. It might be that this article would be a good place to do it. Gah4 (talk) 23:27, 2 March 2019 (UTC)[reply]
I agree that this article should mention fixed-point fractions, but perhaps the fixed-point arithmetic article is a better place for the details. --DavidCary (talk) 08:29, 3 March 2022 (UTC)[reply]

Multiplication for two's complement

This edit re-added multiplication to the list of operations which are identical to those for unsigned binary numbers, which is not true: if you feed two's complement numbers into a circuit or a function which computes the product of two unsigned binary numbers, you'll get wrong results if one of the numbers is negative. If you think otherwise, please provide references. – Adrian Willenbücher (talk) 14:57, 15 September 2012 (UTC)[reply]

I did not have any references in mind when I made the change. This morning I did a quick look for references and I could not find any supporting my change, but I didn’t really notice any mentioning addition and subtraction without multiplication as an advantage either. What I did notice is that while most sources tend to do binary addition and subtraction by wrapping rather than expanding the result (N bits ± N bits → N bits), they tend to ignore the possibility of doing multiplication that way and instead expand the result so that overflow is impossible (N bits × N bits → 2N bits). What do you think about changing the wording along the lines of

The usual technique used for unsigned binary addition and subtraction with a fixed word size can also be used for two's complement numbers. The usual technique for unsigned multiplication with a double word size result needs to be modified to account for negative values.

 BTW I do believe multiplication works if overflow is handled by wrapping:

(2^N − x)·y = 2^N·y − x·y

which wraps to −xy. Vadmium (talk, contribs) 02:40, 16 September 2012 (UTC).[reply]
Yes, it seems you're right about signed multiplication giving the same result as unsigned multiplication if the result is truncated to N bits – I actually didn't know that (since I always considered 2N-bit results only, for which this is not the case), so thanks for the insight! With that in mind, I think we can keep the current wording, but I still think that we need a citation, since this issue is neither trivial nor obvious. Unfortunately, I couldn't find any references either (most textbooks only treat the N x N -> 2N case). – Adrian Willenbücher (talk) 10:31, 17 September 2012 (UTC)[reply]

You need the 2N bit product for the case of fixed-point non-integer (binary point other than to the right of the LSB) multiplication, and for some other calculations. Most high-level languages don't provide for a 2N bit product, though. There are processors that supply one (N bit), the other (2N bit), or both. Gah4 (talk) 11:06, 18 September 2012 (UTC)[reply]

Yes, so? – Adrian Willenbücher (talk) 13:57, 18 September 2012 (UTC)[reply]
The low half of the product is the same for unsigned and two's complement. Gah4 (talk) 02:09, 10 May 2023 (UTC)[reply]
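
A quick C sketch of that claim (the operand values are arbitrary; only the low 8 bits of each product are kept):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int8_t  sa = -3,   sb = 5;      /* signed view of the operands      */
    uint8_t ua = 0xFD, ub = 0x05;   /* same bit patterns, unsigned view */
    uint8_t low_signed   = (uint8_t)(sa * sb);   /* low 8 bits of -15   */
    uint8_t low_unsigned = (uint8_t)(ua * ub);   /* low 8 bits of 1265  */
    printf("%02X %02X\n", (unsigned)low_signed, (unsigned)low_unsigned);
    /* both print F1 */
    return 0;
}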

Fractional 2's complement?

I wonder if it might be useful to explain 2's complement of fractional numbers like 1.5 or -1.5. I'm an electrical engineering student, and the computer science professors want us to know how to represent a number like -6.5625 in 2's complement. I wasn't able to find an explanation anywhere on the web. The explanation should be easy as the digit to have 1 added to it is still the LSB (as opposed to the '1's' digit). Another thought would be a clarification that there are no 'negative binary numbers' until a coding scheme is used to encode them. 71.139.179.199 (talk) 01:59, 6 February 2013 (UTC)[reply]

Once you come up with the binary representation, you can move the binary point around all you want. It works the same as moving the decimal point around in base 10, but the changes are in factors of two instead of 10. Gah4 (talk) 21:11, 7 May 2015 (UTC)[reply]
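
As a worked sketch of the -6.5625 example asked about above, assuming an 8-bit field with the binary point four places in (LSB weight 1/16; that layout is just an assumption for the example):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    double  x     = -6.5625;
    int8_t  fixed = (int8_t)(x * 16.0);   /* scale by 16: -105        */
    uint8_t bits  = (uint8_t)fixed;       /* 0x97, i.e. 1001 0111     */
    printf("%d -> 0x%02X -> %.4f\n", fixed, (unsigned)bits, fixed / 16.0);
    /* prints: -105 -> 0x97 -> -6.5625 */
    return 0;
}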

Not the complement of 2^N

2's complement of an N-bit number (sign bit included) can be calculated by 1's complement + 1 (LSB), but that is not the same as the N-bit number subtracted from 2^N, as stated in the original article. It is the same as the N-bit number subtracted from 2^(N+1) or from 0 (zero), with the most significant bit (MSB) ignored after the subtraction in both cases. The reason is that the original N-bit number plus its negative must equal zero. 137.158.153.203 (talk) 10:53, 19 February 2013 (UTC)[reply]

Yes, the current formulation is not tidy enough. It would be correct to say: two's complement of a positive number x is the same bit vector as unsigned binary of 2^N − x. Objections against this rewording? Incnis Mrsi (talk) 11:32, 19 February 2013 (UTC)[reply]

My comment above regarding 2^(N+1) is wrong, the article is correct. One must just realize that with a N-bit number it is meant that the sign bit is included in the N bits. Henniedmouton (talk) 13:49, 19 February 2013 (UTC)[reply]

The problem exists, although possibly not the same one that the IP said (and thought) about. x* = 2^N − x may not be posed as a definition or an unconditional rule for computing x*, since it only makes sense for positive x (unless we consider all things modulo 2^N, in which case the 2^N additive term can be omitted and the formula looks tautological). Incnis Mrsi (talk) 18:18, 19 February 2013 (UTC)[reply]
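
For what it's worth, the different readings agree once everything is kept to N bits; a minimal C check for N = 8 and the arbitrary value x = 5:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t x = 5;
    uint8_t a = (uint8_t)(256u - x);   /* 2^N - x, reduced mod 2^N    */
    uint8_t b = (uint8_t)(~x + 1u);    /* ones' complement plus one   */
    uint8_t c = (uint8_t)-x;           /* wrapped (mod 2^N) negation  */
    printf("%02X %02X %02X\n", (unsigned)a, (unsigned)b, (unsigned)c);
    /* all three print FB */
    return 0;
}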

Range of the two's complement system.

In the first part of the article, the formula −(2^(N−1)) to +(2^(N−1) − 1) is given to get the highest and lowest number in a two's complement system. The problem is, this is only true if you use a binary system. The article is about binary systems, but shouldn't it be changed? You can use it with a decimal system and any system with a different radix as well. The formula would change from
−(2^(N−1)) to +(2^(N−1) − 1) to
−(R^N)/2 to +((R^N)/2 − 1)
Where R is the radix of the system and N the amount of digits. — Preceding unsigned comment added by 89.98.82.197 (talk) 17:55, 11 September 2013
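
A small sketch that spot-checks the proposed formula for a few (radix, digit-count) pairs; R = 2, N = 4 should give the familiar -8..7:

#include <stdio.h>

int main(void) {
    int cases[][2] = { {2, 4}, {2, 8}, {10, 2}, {10, 4} };
    for (int i = 0; i < 4; i++) {
        int R = cases[i][0], N = cases[i][1];
        long states = 1;
        for (int j = 0; j < N; j++)
            states *= R;                        /* R^N */
        printf("R=%2d N=%d : %ld .. %ld\n", R, N, -(states / 2), states / 2 - 1);
    }
    return 0;
}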

The two's complement by definition is used only for binary numbers (since the term is derived from ones' complement), and only for integers. Certainly you can generalize it for other positional numeral systems, you can do it in several ways, and not necessarily for even radices. But is any of such generalizations mentioned in WP:reliable sources? Incnis Mrsi (talk) 12:26, 12 September 2013 (UTC)[reply]
The more general names, I believe from Knuth, are digit complement and radix complement. That said, the radix complement system for base three should be called twos' complement, (if I understand, again, the Knuth notation regarding the apostrophe). In decimal, you have 10's complement and nines' complement. Gah4 (talk) 19:28, 26 August 2015 (UTC)[reply]
I agree that Wikipedia should describe ten's complement, nines' complement, and other generalized systems. But rather than put every detail in this two's complement article, wouldn't it be better to put those details about non-binary systems in the general method of complements article? --DavidCary (talk) 15:30, 25 February 2016 (UTC)[reply]

notation versus operation

The article suggests the following.

  1. 1001, 11001, 111001 are two's complement representations of negative seven. [a]
  2. 0111, 00111, 000111 are two's complement representations of seven. [b]
  3. 1001, 11001, 111001 are two's complements of seven. [c]
  4. 0111, 00111, 000111 are two's complements of negative seven. [c]

Statements 2 and 3 seem awkwardly at odds. Likewise for statements 1 and 4.

Rather than using headings "converting from..." and "converting to..." it might be helpful to expand the headings to say "converting from x to y". Then provide algorithms that take an x as input and produce a y as output. One does not convert a number to some representation. Rather, one converts a representation of a number into another representation of that same number. I imagine language like, "To convert a number in notation A into notation B use operation C."

[a] from Converting to two's complement representation: "negative numbers are represented by the two's complement of the absolute value"

[b] from Converting to two's complement representation: "In two's complement notation, a non-negative number is represented by its ordinary binary representation"

[c] from From the ones' complement: "To get the two's complement of a binary number, the bits are inverted...the value of 1 is then added"

IOLJeff (talk) 20:33, 30 May 2015 (UTC)[reply]

most arithmetic

A recent edit has the summary "misleading and incorrect". I suppose it is a little misleading, but not so incorrect. But now I see: "The two's complement of a number behaves like the negative of the original number in most arithmetic". This leaves the question of what "most arithmetic" is. Two's complement doesn't behave like the negative in decimal arithmetic, which many people might consider most. There is ten's complement for that, but maybe not an article on it. Is most here misleading and/or incorrect? Gah4 (talk) 19:56, 15 February 2017 (UTC)[reply]

Explanation in terms of an other definition.

"The two's complement [is] equivalent to taking the ones' complement and then adding one"

Amusing for kids wanting to play tag through Wikipedia (Can we create a loop? Let's take ones' complement and define it by twos' complement!)

Correct from a mathematical point of view, but absolutely wrong, unhelpful, and un-encyclopedic for an encyclopedia.

And it's in the introduction. — Preceding unsigned comment added by 203.206.162.148 (talk) 08:36, 18 July 2017 (UTC)[reply]

When I learned, when I was fairly young, this was the definition of twos complement. That is, one+1=two. Gah4 (talk) 20:30, 18 July 2017 (UTC)[reply]
FIXED. Thank you.

"Sum of a number and its two's complement will always equal –1"??

@46.62.177.48: Two weeks ago, you made an edit to a page with the following note:

(I changed -0 to -1. Because in Two's complement the result of addition of a number and its complement is -1.).
— User:46.62.177.48 20:52, 23 November 2017

I have recently removed this text (along with a few other things) both in the interest of brevity, and because I don't believe it's correct. The article itself states:

The two's complement of an N-bit number is defined as the complement with respect to 2^N.

From this and my own understanding (and the additions I've made to the introduction), the sum of a number and its two's complement is in fact 0. Would you care to clarify? —Pennbrook (talk) 10:20, 2 December 2017 (UTC)[reply]

Your comment that a positive integer plus the two's complement of that number is 0 is correct. Instead of subtracting, you add the first number and the two's comp of the second number. To get the two's comp of a number you do the following two operations. First change all the zeros to ones and all the ones to zeros. Then add a 1 to the LSB (Least Significant Bit) of the one's comp to get the two's comp. To get the two's comp of the equivalent of two decimal (using a nibble, which is just 4 bits or half a byte):
0010 (2 in binary)
1101 (one's comp)
1110 (two's comp)
0111 (7 in binary)
Now add the two's comp of 2 to the 7 with both in binary. Just remember that for each bit
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 0 carry 1 to the next bit
1 + 1 + 0 from no carry = 0 carry 1
1 + 1 + 1 from carry = 1 carry 1
0111 + 1110 = 0101 with carry 1 out of range which is ignored if you have a two's comp number
0101 = 5 decimal which means that yes, we just subtracted 2 from 7.
What if we add the two's comp of 7 to 7 in binary?
0111 (the decimal number 7 in binary which is the largest positive number)
1000 (one's comp)
1001 (two's comp)
0111 + 1001 = 0000 carry 1
What happens if you add just 1 to the largest integer?
0111 + 0001 = 1000
1000 is the equivalent of -8 in decimal using two's comp arithmetic. For the following number of bits we have the following range for our integers if we use two's comp arithmetic using base 10 to show the maximums:
nibble: -8 ... 7
byte: -128 ... 127
two bytes: -32768 .. 32767
four bytes: -2,147,483,648 ... 2,147,483,647
Don't complain about me using base 10. I could use the sexagesimal (base 60) system instead. Base 60 is even used on cuneiform tablets. The base 10 system wasn't even complete until the adoption of the Arabic numerals, the adding of zero, and the brilliance to see that you can indicate fractional parts of the whole using negative powers of 10 just like you used them with base 60. That was finished in the 15th and 16th centuries AD. Now will somebody correct all of this? I am still recovering from my C3 / C4 surgery. My 100+ WPM in English with almost no mistakes is down to less than 15 WPM in English with too many mistakes. hhhobbit (talk) — Preceding undated comment added 03:13, 24 November 2020 (UTC)[reply]
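
The nibble examples above can be checked mechanically; a minimal C sketch in which masking to 4 bits plays the role of ignoring the carry out of range:

#include <stdio.h>

int main(void) {
    unsigned mask = 0xF, two = 0x2, seven = 0x7;
    unsigned neg2 = (~two + 1) & mask;    /* 1110 */
    unsigned neg7 = (~seven + 1) & mask;  /* 1001 */
    printf("7 + (-2) = %X\n", (seven + neg2) & mask);   /* 5           */
    printf("7 + (-7) = %X\n", (seven + neg7) & mask);   /* 0           */
    printf("7 +   1  = %X\n", (seven + 1) & mask);      /* 8, i.e. -8  */
    return 0;
}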

Recent changes to the introductory table

@213.207.243.108: @2001:738:2001:c00f:bbef:39c3:eefe:dba8: @168.159.213.214: You made some changes to my recent revision to this page, accompanied by the following note:

There was a mistake in binary and 2's complement. basically, there is no binary form of negative numbers, instead, there are 2's complement and 1's complement forms of showing negative. plus, the 2's complement of negative numbers were all wrong.
— User:213.207.243.108 15:51, 4 December 2017‎

I see now that the tables, as I rewrote them, were not as clear as I had intended. The columns represent: a binary (two's-complement) representation of a number, the corresponding decimal number, and its two's complement. I labeled the first column "binary" in the interest of brevity and on the assumption that "two's-complement notation" would be safely implied, but in retrospect, it opened the table to misinterpretation.

To be clear, the instructive purpose of the table is for readers to be able to connect the right- and left-hand values of a given decimal number and its negation (e.g., 010 represents 2 which has two's complement 110 which represents -2 which has two's complement 010). As you've edited them, the lefthand columns are identical to the righthand columns, except that they are blank for negative values. The instructive purpose of this presentation is not clear to me.

I have reverted the changes and reorganized and relabelled the table. What are your thoughts?

Pennbrook (talk) 06:24, 6 December 2017 (UTC)[reply]

At this point, the numeral is the ones' complement of the decimal value −5. < < Should this be a positive 5??

From the ones' complement: To get the two's complement of a binary number, the bits are inverted, or "flipped", by using the bitwise NOT operation; the value of 1 is then added to the resulting value, ignoring the overflow which occurs when taking the two's complement of 0.

For example, using 1 byte, the decimal number 5 is represented by

0000 0101₂ The most significant bit is 0, so the pattern represents a non-negative value. To convert to −5 in two's-complement notation, the bits are inverted; 0 becomes 1, and 1 becomes 0:

1111 1010₂ At this point, the numeral is the ones' complement of the decimal value −5. To obtain the two's complement, 1 is added to the result, giving:

1111 1011₂

Dotarpa (talk) 23:05, 7 April 2018 (UTC)[reply]

Yes, it is confusing. The subsection is supposed to describe converting ones' complement to two's complement representation. It actually described converting +5 (in any binary representation) to -5 in two's complement. But also, as you are noticing, has the ambiguity described in the article: that the terms ones' complement and two's complement apply to both the representation, and to the negation operation. You have to figure out from context which one is meant. Gah4 (talk) 00:06, 8 April 2018 (UTC)[reply]
It's not as simple as changing the sentence to read "At this point, the numeral is the ones' complement with the decimal value −5"?? Perhaps it is just going over my head. Is not "1111 1010" the ones' complement of +5? Dotarpa (talk) 16:21, 8 April 2018 (UTC)[reply]
1111 1010, representing -5 in ones complement representation, is the ones' complement of 0000 0101, representing +5. Gah4 (talk) 18:46, 8 April 2018 (UTC)[reply]
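
The byte example from that subsection, written out as a minimal C check (5 → 1111 1010 → 1111 1011, which reads back as −5):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t five = 0x05;                 /* 0000 0101, i.e. +5          */
    uint8_t ones = (uint8_t)~five;       /* 1111 1010, ones' complement */
    uint8_t twos = (uint8_t)(ones + 1);  /* 1111 1011, two's complement */
    printf("%02X %02X %02X %d\n",
           (unsigned)five, (unsigned)ones, (unsigned)twos, (int8_t)twos);
    /* prints: 05 FA FB -5 */
    return 0;
}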

radix complement

Should there be an article on the general, radix independent method, called radix complement? Specifically, ten's complement is convenient for working with decimal numbers. (Though many computers that do decimal arithmetic, do it sign magnitude.) Gah4 (talk) 21:36, 14 September 2018 (UTC)[reply]

Yes. I see radix complement is currently a redirect to method of complements. Is that the article you wanted, Gah4? --DavidCary (talk) 04:03, 1 September 2020 (UTC)[reply]
I suppose so. It seems to be mostly decimal, and doesn't make it quite obvious that it generalizes. It also doesn't keep nines' and ten's complement quite separate enough. But it is an article, so that is good. Gah4 (talk) 01:51, 10 September 2020 (UTC)[reply]

Add a section on Computational effects of Big Endian vs Little Endian or Middle Endian ie melong types

I get confused over converting outputs in linux... there is no binary dump and the hexdump and hd or od octal dumps act strange by swapping nibbles of data for big endian vs little endian. So most code I read seems to limit transmission of data to 1 character at a time instead of in words to avoid this problem.

In octal I am especially confused because the bits will not align unless 3 bytes (or a multiple of 3 bytes) are used; every 2 octal digits is only 6 bits, leaving 2 bits unused from a byte. So to use up every bit, the number of bits needs to be divisible by 3. This occurs at 24 bits, which is every 3 bytes. Every octal digit is three bits. I am confused about the end-of-string marker \0: is this octal zero, which is 3 bits, or is this two ASCII chars, which is 2 bytes? Also, would \0x1b be equal to \033? Is there leading zero padding?? — Preceding unsigned comment added by 2600:6C51:7001:200:9856:7AA4:80B6:A255 (talk) 00:13, 10 September 2019 (UTC)[reply]

000 + 000 = 1000 ?

The current intro text says

The two's complement of an N-bit number is defined as its complement with respect to 2^N. For instance, for the three-bit number 010, the two's complement is 110, because 010 + 110 = 1000. The two's complement is calculated by inverting the digits and adding one.

This sounds a bit odd since the table lists the two's complement of 000 as 000, and by the same pattern we would have 000 + 000 = 1000. Clearly wrong, so what is the best way to rephrase this? Does the true definition of two's complement include an expanded bit range in the output (so that the table is wrong, and the two's complement of 000 is actually 1000), or ...? --Nanite (talk) 17:13, 1 December 2019 (UTC)[reply]

Wraparound, not truncation

The intro to the article previously said that you can do addition and subtraction on bit patterns without knowing if they are unsigned or two's complement, provided truncation is used on overflow. I changed this to wraparound, because, for example with 3-bit numbers with no wraparound or truncation:

111 + 010 = 1001

What we want here is 001. 'Truncation' is a confusing term for this because in other contexts 'truncation' is used to mean throwing away the LEAST-significant digits, whereas here we are throwing away the MOST-significant digit. I changed it to wraparound. Bayle Shanks (talk) 15:58, 29 June 2020 (UTC)[reply]

Yes, truncation is commonly used for discarding low bits. I am not sure how often it is used in the case of high bits. But wrap-around is the logical (arithmetic) result, not the physical process (throwing away bits). Gah4 (talk) 20:37, 29 June 2020 (UTC)[reply]
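
For illustration, the 3-bit example above with the wrap-around done by masking to the low three bits (just a sketch, not tied to any particular hardware):

#include <stdio.h>

int main(void) {
    unsigned a = 07, b = 02;        /* octal literals: 111 and 010      */
    unsigned full = a + b;          /* 1001 in binary, i.e. 9           */
    unsigned wrapped = full & 07;   /* keep the low 3 bits: 001, i.e. 1 */
    printf("full = %u, wrapped = %u\n", full, wrapped);   /* 9 and 1 */
    return 0;
}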

The "Three-bit signed integers" and "Eight-bit signed integers" tables are confusing and don't make sense

In the current version, it reads as if "001" is the two's complement of the decimal value "1"! I'm not an expert on this, but 1 + 001 =/= 8.

I suggest the below. Or have I got this completely wrong?

Three-bit signed integers
Decimal encoding Binary encoding Two's complement
-3 101 011
-2 110 010
-1 111 001
0 000 -
1 001 111
2 010 110
3 011 101

Possible diagram

Saw this figure on TeXample.net: https://texample.net/tikz/examples/complement/ I'm not sure what the caption would be, or where it would fit in the article. But if someone who knows more about this subject can add it, I think it's under a Creative Commons license with attribution. — Preceding unsigned comment added by 2601:1C2:680:DDE0:C904:B420:F87B:3F4E (talk) 18:30, 25 January 2021 (UTC)[reply]

Little Endian/Big Endian as it relates to positive/negative bit

The article states that the Most significant bit indicates sign for big endian, and Least significant bit indicates sign for little endian. My understanding is that this is not correct as Little Endian/Big Endian are related to BYTE by BYTE order and not bit by bit order. For example if you have 0xAB (10101011) this would not be 0xD5 (11010101) in the alternate endianness. It would be the same as it is a single byte. 68.38.218.65 (talk) 15:06, 29 July 2022 (UTC)[reply]

for two's complement, the sign is always the most significant bit. Which end its byte is on, depends on endianness. For sign magnitude, the sign could be on either end. IBM S/360 (and still supported in current z/ systems) fixed point BCD puts the sign nibble with the least significant digit. That allows one to truncate a value, just using the low digits and sign. Sometimes bit-endianness is important. For one, bit serial transmission, such as Ethernet, chooses a bit order. Ethernet transmits LSB first. Most often that doesn't matter, but the unicast/multicast bit is transmitted first. In usual notation, that is the rightmost bit of the leftmost byte. But Ethernet MAC addresses are not numbers, so they don't have byte significance. There are machines with bit addressing instructions, which give them bit endianness. And even without that, documentation can number the bits either way. Many IBM machines number the MSB as bit 0, consistent with big endian addressing. Most little endian machines number the LSB as bit 0, consistent with little endian addressing. Gah4 (talk) 20:43, 29 July 2022 (UTC)[reply]

The definition of two's complement

Right now, the definition of two's complement is

> Two's complement is a mathematical operation to reversibly convert a positive binary number into a negative binary number with equivalent (but negative) value, using the binary digit with the greatest place value (the leftmost bit in big-endian numbers, rightmost bit in little-endian numbers) to indicate whether the binary number is positive or negative (the sign).

But I don't think it is an "operation". Maybe "converting to two's complement" is an operation, but two's complement is officially a representation method, or a representation of positive and negative numbers using binary digits. — Preceding unsigned comment added by Winterheat (talkcontribs) 16:18, 13 August 2022 (UTC)[reply]

You can "two's complement" a number by flipping all the bits and adding 1, which negates a number that's in two's complement representation. Ones' complement is also written with the first paragraph talking about the operation of "ones' complementing" a number, but it also talks about "ones' complement arithmetic" and "ones' complement binary". This article, however, doesn't, but it probably should. Guy Harris (talk) 19:39, 13 August 2022 (UTC)[reply]

origin of the name

I put a {{cn}} on the explanation for the name, as I especially never heard it before. When I learned it when I was about 10, it was that two's complement is one more than one's complement. (The way they spelled it 55 years ago.) I never searched for an explanation of the name before, but it should trace back close to when the name was first used. Gah4 (talk) 02:15, 10 May 2023 (UTC)[reply]

Apollo's computer struggle

If 2s complement were discovered in 1945, then why did the Apollo mission adopted the 1's complement and then only after a long time, they "re-discovered" 2's complement ? ref: youtube / J-5aT2zSfSA?t=617 — Preceding unsigned comment added by Drout 0 (talkcontribs) 13:56, 28 August 2023 (UTC)[reply]

"why did the Apollo mission adopted the 1's complement" To quote R-700 MIT's Role in Project Apollo, Final Report on Contracts NAS 9-163 and NAS 94065, Volume III, Computer Subsystem, section 2.3.2 "Number Representation":

The addition of numbers of opposite sign requires either one's or two's complementation or comparison of magnitude, and sometimes both may be used. The one's complement notation has the advantage of having easy sign reversal, this being equivalent to Boolean complementation. Hence a single machine instruction performs both functions. Zero is ambiguously represented by all zero's and by all one's, so that the number of numerical states in an n-bit word is 2^n - 1. Two's complement arithmetic is advantageous where end around carry is difficult to mechanize, as is particularly true in serial computers. An n-bit word has 2^n states, which is desirable for input conversions from such devices as pattern generators, geared encoders, or binary scalers, Sign reversal is awkward, however, since a full adddition is required in the process. These considerations led to the use of a one's complement number system in the AGC.

"and then only after a long time, they "re-discovered" 2's complement ?" They "re-discovered" nothing; as the discussion above indicates, they knew about ones complement, two's complement, and sign/magnitude format, and deliberately chose ones complement for the reason given. They were also aware of the advantages of two's complement when dealing with input from external devices. The discussion that begins at around 10 minutes, 17 seconds into the video speaks of that. Guy Harris (talk) 22:21, 28 August 2023 (UTC)[reply]

Suggested revision of introduction

This article goes into a great deal of detail, but I think the fundamentals of two's complement need some work to make the article useful to the less committed reader (like me). I also think that one part of it is very misleading.


Firstly, the misleading bit: under Procedure it says "Two's complement is achieved by...". But shouldn't this say "The two's complement representation of a negative number is obtained by..."? Between that and the intro paragraph, I was left with the impression that the bit-flipping operation is applied to all numbers, positive or negative, to obtain the two's complement representation. It was only by reading elsewhere that I discovered that positive numbers are left as they are (with the leftmost bit being 0 to indicate positive sign).


Secondly, the intro is rather unbalanced: the only aspect of the design of the two's complement representation that it mentions is that the bit with the greatest place value is used as a sign bit, as if that was the unique distinguishing feature of TC (which, as far as I understand, isn't the case). I would include the complete procedure in the introduction. I don't think that this is too much detail for the introduction: the method is very short, and easy to understand for the likely reader of the page, ie someone who's heard of two's complement and would like to get an idea of what it's all about.


Thirdly, I think that the reasons why two's complement is useful could be expressed more clearly.


Time to put my money where my mouth is. I don't feel qualified to alter the article, but here's what I think would be a better introduction:


"Two's complement is the most common method of representing signed (positive, negative, and zero) integers on computers, and more generally, fixed point binary values.

Two's complement uses the binary digit with the greatest place value (ie the most significant bit) as the sign to indicate whether the binary number is positive or negative. When the most significant bit is 1, the number is signed as negative; and when the most significant bit is 0 the number is signed as positive.

If the number is positive, the magnitude of the number is represented in the remaining bits in the normal binary representation.

If the number is negative, the two's complement representation is obtained by:

  • starting with the corresponding positive number
  • inverting (or flipping) all bits – changing every 0 to 1, and every 1 to 0
  • adding 1 to the result (ignoring any overflow)


The two's complement representation has the advantage over other representations of negative numbers that exactly the same procedure can be used for adding two numbers, regardless of whether either of them is negative. Also, it has only one representation of zero, whereas other representations, such as one's complement, can represent zero in two ways." Macboff (talk) 10:24, 11 October 2023 (UTC)[reply]

I'm fine with adding more details to the lead. The confusion here comes from the fact that two's complement is both a "mathematical operation on binary numbers, and a number representation based on this operation" (as the short description puts it). —Dexxor (talk) 07:36, 12 October 2023 (UTC)[reply]
I was confused when I saw the operations and it didn't say it was only if the decimal number was negative. I agree with adding that precision. Recedeno (talk) 14:38, 27 October 2023 (UTC)[reply]
If you want decimal, then it would be ten's complement. — Preceding unsigned comment added by Gah4 (talkcontribs) 05:53, 28 October 2023 (UTC)[reply]

relation to two's complement?

The statement "The relationship to two's complement is realised by noting that 256 = 255 + 1, and (255 − x) is the ones' complement of x." is confusing. The article is already about two's complement. Chris2crawford (talk) 03:17, 14 June 2024 (UTC)[reply]

Confusing for programmers

This article is very confusing for one of the major audiences that will want to read it: Programmers.

Often a datasheet or protocol spec will say something like "this field is in two's complement notation". To an experienced programmer, that's fine. But a beginner programmer will not understand what that means, and will look at this article for help. This article makes it look massively complicated. Then the beginner programmer will either give up, or will write massively overcomplicated code to parse what they think is a massively complicated format. That code may even work, but will certainly confuse the next programmer to look at it. (Yes, I have been that "next programmer").

Two's complement is actually really simple for programmers to deal with, if you explain it right. Please can this article be modified to explain it in a way that beginner programmers can understand?

The "Suggested revision of introduction" section above looks like a good start, but even that is overcomplicated.

Here's some text that could be used to help, feel free to use/edit/cut it as needed:

WP:NOTMANUAL. A Wikipedia article on something isn't a cookbook-style tutorial on how to use that thing - that's what Wikibooks is for. Two's complement § Theory might not belong in a tutorial for beginning programmers, but it most definitely does belong here.
Computers have a standard way of representing signed integers. That's called "two's complement". The phrase used in the lead, "the most common method of representing signed (positive, negative, and zero) integers on computers", is the most accurate way of saying it. "Standard", in this context, means "most common", not "formally standardized", so "most common" is preferable.
In C/C++ Again, this isn't a tutorial. If a C/C++ programmer wants to know the details of how to perform a particular operation, they should look for a book on C or C++. And, yes, there exists a C compiler for a line of computers with 36-bit words in which integers are represented using ones' complement notation. The reason why the GCC and MSVC documentation specify the representation is to specify the behavior of their implementations of the implementation-defined behaviors in C; the reason why they specify two's complement is that they decided (quite reasonably) not to bother implementing compilers for that line of computers (or any other non-two's-complement machines).
(And in what cases do beginning programmers need to deal with the representation of integers? "Decoding" values isn't necessary for variables, and, if you're running on a two's-complement machine and reading data in two's-complement format, there's not much work to be done.) Guy Harris (talk) 18:13, 14 August 2024 (UTC)[reply]
For a long time, the C standard allowed for ones' complement and sign magnitude. That was dropped recently. It gets more interesting for the mentioned Unisys machines for unsigned data types in C. Gah4 (talk) 05:33, 15 August 2024 (UTC)[reply]
For a long time, the C standard allowed for ones' complement and sign magnitude. It allows them up to C18 - C18's 6.2.6.2 "Integer types" says that, if the sign bit of a signed integer type value is 1, the number could either be in sign-magnitude, two's complement, or ones' complement form, and that it's implementation-defined which of those it is.
That was dropped recently. C99, C11, and C18 also say that the fixed-width signed-integer intN_t values are always two's complement; arithmetic operations on signed integer values that are performed with C11 and later's atomic_fetch and modify generic functions are also specified as being two's complement.
https://www.open-std.org/jtc1/sc22/wg14/www/projects#9899 seems to indicate that C23 is done, but not yet published by ISO. The "similar draft" of C23 says "two's complement or GTFO" (well, not in those words, but...).
It gets more interesting for the mentioned Unisys machines for unsigned data types in C. Because you can't use the same instructions to do signed and unsigned addition and subtraction, as signed operations require an end-around carry and unsigned operations must not have an end-around carry, so you can't use the machine instructions to do unsigned arithmetic? Guy Harris (talk) 08:09, 15 August 2024 (UTC)[reply]

Introduction

Computers have a standard way of representing signed integers. That's called "two's complement".

The number is stored in binary, in a fixed-length field. This explanation will use N to refer to the size of the field, in bits. This explanation will use NUM_VALUES to mean 2^N. For example, for an 8-bit field NUM_VALUES is 256, for a 16-bit field NUM_VALUES is 65536, and for a 3-bit field NUM_VALUES is 8.

The values that can be stored range from -(NUM_VALUES ÷ 2) to (NUM_VALUES ÷ 2) - 1 inclusive.

General encoding and decoding

To encode a value:

  1. Check the value is within the range. If not, then encoding it is impossible, so stop.
  2. If the value is negative, add NUM_VALUES
  3. Now you have an unsigned integer that can be encoded into binary in the usual way, and will fit into N bits.

To decode a value:

  1. First decode the field as if it was an unsigned integer
  2. Figure out if the result is going to be negative. There are two different ways to check this, which have the same result, so you can use either approach:
    1. If the unsigned value is greater than or equal to (NUM_VALUES ÷ 2), then the result is going to be negative.
    2. If the most significant bit (called the "sign bit") is 1, then the result is going to be negative.
  3. If you just decided that the result is going to be negative, then subtract NUM_VALUES to get the result.
  4. Otherwise, just use the unsigned value as the result.
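
For what it's worth, a minimal C sketch of the general steps above for an arbitrary field width n; the names encode_twos and decode_twos are made up for this example, and the range check in encoding step 1 is left to the caller:

#include <stdint.h>
#include <stdio.h>

static uint32_t encode_twos(int64_t value, int n) {   /* 1 < n <= 32 */
    uint64_t num_values = (uint64_t)1 << n;
    if (value < 0)
        value += (int64_t)num_values;    /* encoding step 2 */
    return (uint32_t)value;              /* encoding step 3 */
}

static int64_t decode_twos(uint32_t field, int n) {
    uint64_t num_values = (uint64_t)1 << n;
    int64_t value = (int64_t)field;              /* decoding step 1 */
    if (value >= (int64_t)(num_values / 2))      /* decoding step 2 */
        value -= (int64_t)num_values;            /* decoding step 3 */
    return value;                                /* decoding step 4 */
}

int main(void) {
    printf("%u %lld\n", encode_twos(-3, 5), (long long)decode_twos(0x1D, 5));
    /* prints: 29 -3   (11101 is -3 in a 5-bit field) */
    return 0;
}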

Examples

8-bit example
Bit pattern Unsigned value Actual value
00000000 0 0
00000001 1 1
00000010 2 2
00000011 3 3
... ... ...
01111110 126 126
01111111 127 127
10000000 128 -128
10000001 129 -127
10000010 130 -126
... ... ...
11111101 253 -3
11111110 254 -2
11111111 255 -1

Encoding and decoding in code

Almost all modern computers use two's complement for their built-in signed integer types.

C/C++, standard sizes

In C/C++, for the integer sizes supported by the compiler, the conversion can usually be done as a simple cast between unsigned integer and signed integer. Typically this will work for at least 8-bit, 16-bit, 32-bit, and 64-bit fields. For example, to decode a 16-bit field:

uint16_t unsigned_value = ... read the value as if it was an unsigned integer ...;
int16_t result = (int16_t)unsigned_value;

Or to encode a 32-bit field:

int32_t value = ... get the value ...;
uint32_t result = (uint32_t)value;
// Now write out result.

Note: The C and C++ language specifications say that both the above are "implementation-defined". However, common compilers document that they support it. (Examples: GCC and MSVC 1 2 ).

C/C++, other sizes

For field sizes that don't match a built-in integer type, encoding and decoding is slightly more complex. It is possible to use the general encoding method discussed above. However, there's another approach which is likely to be faster to run but is not as easy for beginners to understand.

To encode, you can just cast and then mask off the bits that aren't needed. For example, to encode a 5-bit field:

int32_t value = ... get the value ...;
// "value" must be in range -8 to 7 inclusive. We assume that here,
// since this is an example, but real code may need to check that.
//
// We cast "value" from signed to the same-size unsigned type, then mask it.
// Since this is a 5-bit field we need a mask that has the low 5 bits set to "1" and all other bits set to "0", which is 0x1Fu.
uint32_t result = ((uint32_t)value & 0x1Fu);
// Now write out the low 5 bits of result.

To decode, you need to "sign-extend" the value. This can be done using C's "signed right shift" operator. You first shift it to the top bits of a variable, and then do a signed right shift to get the bits back to the bottom of that variable while "sign-extending" it. For example, to decode a 5-bit field:

uint32_t unsigned_value = ... read the value as if it was an unsigned integer ...;
uint32_t temp = (((uint32_t)unsigned_value) << (32 - 5));
int32_t signed_value = (((int32_t)temp) >> (32 - 5));

Note: The C and C++ language specifications say that both the above are "implementation-defined". However, common compilers document that they support it. (Examples: GCC and MSVC 1 2 3). 08af9a09 (talk) 16:47, 14 August 2024 (UTC)[reply]