Proof of overflow with carry out and carry in - math

I have an exercise which I don't really understand.
Prove that in a 2's complement number system addition overflows if and only if the carry from the sign position does not equal the carry into the sign position. Consider the three cases: adding two positive numbers, adding two negative numbers and adding two numbers of opposite sign.
I know how to add two numbers, and how to tell whether the addition overflows by looking at the carry in and carry out.
But how will I do this proof in a general way?

Since your question shows few details and you show no work of your own, I'll answer with few details.
For each of those three cases (two positives, two negatives, one of each), consider the four sub-cases (carry into and out of the sign bit, carry into but not out of, carry out of but not into, no carry at all). In each case, show that some of those sub-cases are not possible. Then look at each sub-case and see if it means overflow.
Let's look at the first case: two positive numbers. First show that a carry out of the sign bit is not possible, which removes two sub-cases. Then show that a carry into the sign bit (but not out of it) is an overflow condition, and that no carry into the sign bit (and none out of it) is not an overflow condition.
Then in each and every sub-case you will see that overflow happens when the two carries (into and out of) differ, and overflow does not happen when the two carries are equal.
This may not be the "general way" you were looking for, since you need to consider twelve combinations of cases and sub-cases, eliminating some and looking at the consequences of the others. But it does work. If you want more details, show more work of your own and I will be glad to add more.
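To see the statement concretely before doing the proof, it can be checked exhaustively for a small width. Here is a Python sketch (the helper name `add_with_carries` is my own, not from the exercise):

```python
def add_with_carries(a, b, bits=4):
    # Add two `bits`-wide two's-complement numbers; return the signed result,
    # the carry INTO the sign position, and the carry OUT of the sign position.
    mask = (1 << bits) - 1
    low_mask = (1 << (bits - 1)) - 1          # everything below the sign bit
    ua, ub = a & mask, b & mask
    carry_in = ((ua & low_mask) + (ub & low_mask)) >> (bits - 1)
    carry_out = (ua + ub) >> bits
    s = (ua + ub) & mask
    signed = s - (1 << bits) if s >> (bits - 1) else s
    return signed, carry_in, carry_out

# Exhaustive check for 4-bit numbers: overflow <=> carry_in != carry_out
lo, hi = -8, 7
for a in range(lo, hi + 1):
    for b in range(lo, hi + 1):
        _, cin, cout = add_with_carries(a, b)
        overflow = not (lo <= a + b <= hi)
        assert overflow == (cin != cout)
```

The exhaustive loop silently covers all twelve case/sub-case combinations; the proof's job is to explain *why* the impossible sub-cases never occur.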


Are initial and final permutation in DES always done in the same order?

Everywhere on the internet, I find that the 58th input bit takes the first position in the initial permutation. Also, the 40th bit takes the first position in the final permutation. Is this always the same in every case? I mean, is this done randomly or in a particular (fixed) order?
Ciphers don't work randomly. In the case of DES, the tables (PC1, PC2, IP, E, P, IP-1, but also the shifts and the S-boxes) are always the same. You can find them on Wikipedia; here's the official NIST documentation.
This documentation also contains many test vectors to validate DES implementations (with one small error... I challenge you to find it!).
Anyway, this is another useful resource to fully understand DES.
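To illustrate that the order is fixed, here is a small Python check (tables transcribed from the FIPS 46-3 standard; the `permute` helper is my own) that the final permutation really is the inverse of the initial one:

```python
# Tables transcribed from the DES standard (FIPS 46-3); entries are
# 1-indexed input bit positions.
IP = [58,50,42,34,26,18,10, 2, 60,52,44,36,28,20,12, 4,
      62,54,46,38,30,22,14, 6, 64,56,48,40,32,24,16, 8,
      57,49,41,33,25,17, 9, 1, 59,51,43,35,27,19,11, 3,
      61,53,45,37,29,21,13, 5, 63,55,47,39,31,23,15, 7]
FP = [40, 8,48,16,56,24,64,32, 39, 7,47,15,55,23,63,31,
      38, 6,46,14,54,22,62,30, 37, 5,45,13,53,21,61,29,
      36, 4,44,12,52,20,60,28, 35, 3,43,11,51,19,59,27,
      34, 2,42,10,50,18,58,26, 33, 1,41, 9,49,17,57,25]

def permute(block, table):
    # Output bit i is input bit table[i] (1-indexed, as in the standard).
    return [block[p - 1] for p in table]

bits = list(range(64))
# The 58th input bit always moves to position 1 in IP, the 40th in FP,
assert IP[0] == 58 and FP[0] == 40
# and FP is exactly the inverse of IP: applying both returns the input.
assert permute(permute(bits, IP), FP) == bits
```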

References or Standardization of "Value Updating" in Constraint Satisfaction

Constraint Satisfaction Problems (CSPs) work basically like this: you have a set of constraints over variables, and a domain of values for each variable. Given some configuration of the variables (an assignment of each variable to a value in its domain), you check whether the constraints are "satisfied". That is, you check that evaluating all of the constraints returns a Boolean "true".
What I would like to do is sort of the reverse. Instead of this Boolean "testing" of whether the constraints hold, I would like to take the constraints and enforce them on the variables. That is, set the variables to whatever values they need to have in order to satisfy the constraints. An example from games: you say "this box's right side is always to the left of its containing box's right side", or box.right < container.right. The constraint-solving engine (like Cassowary for the game example) would then take the box and set its "right" property to whatever value it resolves to. So instead of the constraint solver giving you a Boolean "yes, the variable configuration satisfies the constraints", it updates the variables' configuration with appropriate values: "you have updated the variables". I think Cassowary uses the simplex algorithm for solving its constraints.
I am a bit confused because Wikipedia says:
constraint satisfaction is the process of finding a solution to a set of constraints that impose conditions that the variables must satisfy. A solution is therefore a set of values for the variables that satisfies all constraints—that is, a point in the feasible region.
That seems different than the constraint satisfaction problem, of which it says:
An evaluation is consistent if it does not violate any of the constraints.
That's why it seems CSPs return Boolean values, while in constraint satisfaction you can set the values. The distinction isn't quite clear to me.
Anyway, I am looking for general techniques for constraint solving, in the sense of setting variables like the simplex algorithm does. However, I would like to apply it to any situation, not just linear programming. Some standard and simple example constraints are:
All variables are different.
box.right < container.right
The sum of all variables < 10
Variable a goes before variable b in evaluation.
etc.
For the first case, seeing whether the constraint is satisfied (Boolean true) is pretty easy: iterate through the pairs of variables; if any pair is equal, return false, otherwise return true after processing all pairs.
However, doing the equivalent of setting the variables doesn't seem possible at first glance: iterate through the pairs of variables, and if two are equal, perhaps you change one of them. You might have to do some fixed-point thing, processing some of them more than once. And the way I just chose which value to set seems arbitrary. Maybe instead you need some further (nested) constraints defining how to set the values (e.g. "set a to b if a > b, otherwise set b to a"). The possibilities are customizable.
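That fixed-point idea for "all variables are different" could be sketched like this in Python (a naive illustration only; `enforce_all_different` and its repair rule are made up for the example, not a standard algorithm):

```python
def enforce_all_different(values, domain):
    # Naive fixed-point repair: whenever two variables collide, move the
    # second one to some unused value in the domain, and repeat until no
    # pair collides.  Assumes len(values) <= len(domain).
    changed = True
    while changed:
        changed = False
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if values[i] == values[j]:
                    values[j] = next(v for v in domain if v not in values)
                    changed = True
    return values

vals = enforce_all_different([1, 1, 2], domain=[1, 2, 3, 4])
# vals is now pairwise distinct
```

The choice of which variable to move, and to which value, is exactly the arbitrary part discussed above; a different repair rule gives a different solution.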
In addition, even simpler cases like box.right < container.right are complicated. You could say at first that if box.right >= container.right, then set box.right = container.right. But maybe you don't actually want that; instead you want some iPhone-like physics "bounce" where it overextends and then bounces back with momentum. So again, the possibilities are large, and you should probably have additional constraints.
So my question is: just as testing the constraints (for a Boolean value) is standardized as CSP, are there any references or standardizations for setting the values used by the constraints?
The only thing I have seen so far is that Cassowary simplex algorithm example which works well for an array of linear inequalities on real-numbered variables. I would like to see something that can handle the "All variables are different" case, and the other cases listed, as well as the standard CSP example problems like for scheduling, box packing, etc. I am not sure why I haven't encountered more on setting/updating constraint variables instead of the Boolean "yes constraints are satisfied" problem.
The only limits I have are that the constraints work on finite domains.
If it turns out there is no standardization at all and that every different constraint listed requires its own entire field of research, that would be good to know. Then I at least know what the situation is and why I haven't really seen much about it.
CSP is a research field with many publications each year. I suggest you read one of the books on the subject, like Rina Dechter's.
For standardized CSP languages, check MiniZinc on one hand, and XCSP3 on the other.
There are two main approaches to CSP solving: systematic and stochastic (also known as local search). I have worked on three different CSP solvers, one of them stochastic, but I understand systematic solvers better.
There are many different approaches to systematic solvers. It is possible to fill a whole book covering all the possible approaches, so I will explain only the two approaches I believe the most in:
(G)AC3, which propagates constraints until all global constraints (hyper-arcs) are consistent.
Reducing the problem to SAT and letting the SAT solver do the hard work. There is a great algorithm that creates the CNF lazily, on demand, while the solver is already working. In a sense, this is a hybrid SAT/CSP algorithm.
To get the AC3 approach going you need to maintain a domain for each variable. A domain is basically a set of possible assignments.
For example, consider the domains of a and b: D(a)={1,2}, D(b)={0,1} and the constraint a <= b. The algorithm checks one constraint at a time, and when it reaches a <= b, it sees that a=2 is impossible, and also b=0 is impossible, so it removes them from the domains. The new domains are D'(a)={1}, D'(b)={1}.
This process is called domain propagation. Using a queue of "dirty" constraints, or "dirty" variables, the solver knows which constraint to propagate next. When the queue is empty, then all constraints (hyper arcs) are consistent (this is where the name AC3 comes from).
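The propagation step might look roughly like this in Python (a sketch only; the arc representation and the `revise`/`ac3` helpers are my own, reproducing the a <= b example above):

```python
from collections import deque

def revise(domains, x, y, rel):
    # Drop values of x that have no support in y's domain under rel.
    removed = False
    for vx in list(domains[x]):
        if not any(rel(vx, vy) for vy in domains[y]):
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, arcs):
    queue = deque(arcs)
    while queue:
        x, y, rel = queue.popleft()
        if revise(domains, x, y, rel):
            # x's domain shrank, so arcs pointing at x become "dirty" again
            queue.extend(arc for arc in arcs if arc[1] == x)
    return domains

# The a <= b example: D(a) = {1, 2}, D(b) = {0, 1}
domains = {'a': {1, 2}, 'b': {0, 1}}
arcs = [('a', 'b', lambda va, vb: va <= vb),   # a needs some b with a <= b
        ('b', 'a', lambda vb, va: va <= vb)]   # b needs some a with a <= b
ac3(domains, arcs)
# domains is now {'a': {1}, 'b': {1}}
```

Real solvers propagate whole global constraints at once rather than binary arcs, but the dirty-queue structure is the same.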
When all arcs are consistent, the solver picks a free variable (one with more than one value in its domain) and restricts it to a single value. In SAT, this is called a decision. The solver adds it to the queue and propagates the constraints. If it reaches a conflict (a constraint that can't be satisfied), it goes back and undoes an earlier decision.
There are a lot of things going on here:
First, how the domains are represented. Some solvers only hold a pair of bounds for each domain. Others keep a set of integers. My solver holds an interval set or a bit vector.
Then, how does the solver know which constraint to propagate? Some solvers, such as SAT solvers, Minion, and HaifaCSP, use watches to avoid propagating irrelevant constraints. This has a significant performance impact on clauses.
Then there is the issue of making decisions. Usually, it is good to choose a variable that has a small domain and high connectivity. There are many papers comparing many different strategies. I prefer a dynamic strategy that resembles the VSIDS of SAT solvers. This strategy is auto-tuned according to conflicts.
Deciding on the value is also important. Many solvers simply take the smallest value in the domain. Sometimes this can be suboptimal if there is a constraint that limits a sum from below. Another option is to choose randomly between the max and min values. I tune it further and use the last assigned value.
After everything, there is the matter of backtracking. This is a whole can of worms. The problem with simple backtracking is that sometimes the cause of a conflict lies at the first decision, but it is detected only at the 100th. The best thing is to analyze the conflict and work out where its cause lies. SAT solvers have been doing this for decades. But CSP representations are not as simple as CNF, so not many solvers can do it efficiently enough.
This is a nontrivial subject that can fill at least two university courses. Just the subject of conflict analysis can take half of a course.

Specify min max range with specific value placed in a defined level

I'm working on a basic betting system at the moment, and I need the following:
Specify a minimum and maximum return over X levels (at present 21; this isn't likely to change any time soon due to how the rest of the program works).
Specify a "break even" level (this can change): the level where the player makes their bet back (it was going to be 90% of their bet, but I think their full bet might be nicer).
Each level has to be higher than the last; no particular scaling is required, so this should be a bit easier.
I think I've looked at this too long; I can't seem to get the right value for the "break even" level, so I'm overcomplicating it.
Worst-case scenario, the minimum can be optional; the most important parts are the break-even level and the maximum. I can always tweak it later on.
I've decided to go with another solution: basically a function that lets me specify a minimum, a maximum, and the number of levels in between.
I'll explain it in case other people are looking for something like this.
I found this answer as a basis: Smooth movement to ascend through the atmosphere
That way I can easily adjust the increase of the wins and everything else I need.
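One possible shape for such a function is sketched below (my own illustration, not the linked answer's approach; `payout_levels`, the piecewise geometric interpolation, and the default of placing break-even at level 10 of 21 are all assumptions):

```python
def payout_levels(min_ret, break_even, max_ret, n_levels=21, be_level=10):
    # Geometric interpolation in two pieces: min -> break-even over levels
    # 0..be_level, then break-even -> max over be_level..n_levels-1.
    # Strictly increasing as long as 0 < min_ret < break_even < max_ret.
    levels = []
    for i in range(n_levels):
        if i <= be_level:
            t = i / be_level
            levels.append(min_ret * (break_even / min_ret) ** t)
        else:
            t = (i - be_level) / (n_levels - 1 - be_level)
            levels.append(break_even * (max_ret / break_even) ** t)
    return levels

# e.g. 10% of the bet at level 0, the full bet back at level 10, 50x at level 20
levels = payout_levels(0.1, 1.0, 50.0)
```

Because each piece is geometric, every level is strictly higher than the last, and the break-even level can be moved independently of the minimum and maximum.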

Message/chat system with a negative margin for each message

I'm trying to transfer the following mockup into working dynamic code but I'm having a few problems right now.
Mockup: http://www.imagebanana.com/view/a6yuqvgm/chat.png
The goal here is to implement the "negative margin" each message box has so that the messages overlap a bit. So, if person A (me) and person B have a conversation, all messages from person B should be on the right side and all of my messages (person A) should be on the left side. This part is obviously rather easy.
Also, if I reply to a message from my chat partner, my message should have a negative margin so that it sort of "goes into" my partner's message, but on the other side. This is for design and space-saving reasons. The longer the messages, the greater the margin should be; shorter messages need a smaller margin.
I'm currently a bit puzzled as to how to implement this successfully. A simple negative margin is not enough, because when a user sends two messages in a row, the messages overlap (the second one goes into the first one). The mockup shows the ideal situation, alternating messages (person A, person B, person A, person B, and so on), but obviously that's not always the case.
My question now is: is this even possible with pure CSS? I guess I need to add some dynamic part, in either PHP or JS; either is fine. I just need some hints in the right direction.
You can do it in pure CSS if you don't need the margins sized according to the height of each message. The key in either case is to use the adjacent-sibling (+) selector to target from-messages that follow to-messages and vice versa, avoiding overlap between consecutive messages from the same person.
Here's how: http://jsbin.com/ujonoj/14/edit
Note the commented-out bit of CSS: you can use that to have static negative margin (however much you want) and avoid the JS, if need be.
Edit - added two safety checks to cover cases of very long messages following very short ones, and to stop setMargin from running on consecutive to-to/from-from messages. The long-short safety check simply caps the negative margin at some percentage (80 in my example) of the previous message's height.

Time complexity to fill hash table?

This is a homework question, but I think there's something missing from it. It asks:
Provide a sequence of m keys to fill a hash table implemented with linear probing, such that the time to fill it is minimum.
And then
Provide another sequence of m keys, but such that the time to fill it is maximum. Repeat these two questions if the hash table implements quadratic probing.
I can only assume that the hash table has size m, both because it's the only number given and because we have used that letter for the hash table size before, when discussing the load factor. But I can't think of any sequence for the first part without knowing the hash function that hashes the sequence into the table.
If it is a bad hash function, such that, for instance, it hashes every entry to the same index, then both the minimum and maximum time to fill it will take O(n) time, regardless of what the sequence looks like. And in the average case, where I assume the hash function is OK, how am I supposed to know how long it will take for that hash function to fill the table?
Aren't these questions tied more strongly to the hash function than to the sequence that is hashed?
As for the second question, I assume that, regardless of the hash function, a sequence of the same key repeated m times will take the maximum time, because it will cause linear probing from the second entry on. I think that will take O(n) time. Is that correct?
Well, the idea behind these questions is to test your understanding of probing styles. With linear probing, if a collision occurs, you simply test the next cell, and so on until you find an available cell to store your data.
Your hash table doesn't need to have exactly m cells, but it needs at least m.
The first question asks: if you have a perfect hash function, what is the complexity of populating the table? A perfect hash function places each element without collision, so each of the m elements takes O(1) time, and the total is O(m).
The second question asks about the case where hash(X) = cell(0) for every key, so every element probes until the first empty cell (just past the currently occupied run).
For the first element, you probe once -> O(1)
For the second element, you probe twice -> O(2)
For the mth element, you probe m times -> O(m)
Overall you have m elements, so the total is O(m(m+1)/2) = O(m²).
For quadratic probing, you have the same strategy. The minimum case is the same, but I'd guess the maximum case is O(m log m). (I didn't solve it; it's just my educated guess.)
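The two linear-probing cases can be checked by simulation (a Python sketch; `fill_linear_probing` is a made-up helper that counts cell examinations):

```python
def fill_linear_probing(keys, m, hash_fn):
    # Insert every key into an initially empty table of size m using linear
    # probing, counting the total number of cells examined.
    table = [None] * m
    probes = 0
    for k in keys:
        i = hash_fn(k) % m
        while table[i] is not None:
            probes += 1
            i = (i + 1) % m
        probes += 1                 # the probe that found the empty cell
        table[i] = k
    return probes

m = 8
# Minimum: every key lands in a distinct empty cell -> m probes total, O(m)
assert fill_linear_probing(range(m), m, lambda k: k) == m
# Maximum: every key hashes to cell 0 -> 1 + 2 + ... + m = m(m+1)/2 probes
assert fill_linear_probing(range(m), m, lambda k: 0) == m * (m + 1) // 2
```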
This question doesn't sound terribly concerned with the hash function, though it would be nice to have it. You seem to pretty much get it, though. It sounds to me like the question is more concerned with "do you know what a worst-case list of keys would be?" than with "do you know how to exploit bad hash functions?"
Obviously, if you come up with a sequence where all the entries hash to different locations, then you have O(1) insertions for O(m) time in total.
As for hashing all the keys to the same location: yes, each single insertion can take O(n) time if that's what you are suggesting, but that's not the total time for inserting all the elements. Also, you might want to consider not literally using the same key over and over, but rather keys that hash to the same location in the table. I think, by convention, inserting the same key causes a replacement, though I'm not 100% sure.
I'll apologize in advance if I gave too much information or left anything unclear. This question seems pretty cut-and-dried save the part about not actually knowing the hash function, and it was kind of hard to really say much without answering the whole question.
