What is the difference between BZ and BNZ in an instruction pipeline?

I am confused about the branching instructions BZ and BNZ.
Can anybody please explain the concept and working of BZ and BNZ with an example?

BZ means Branch on Zero: the branch is taken when the status/flag register tells the CPU that the previous result (computed by the ALU) was zero.
BNZ means Branch on Not Zero, i.e. the branch is taken when that result was non-zero (I think you can figure this out :) yeah?). Placed after a decrement, it is what keeps a counting loop running until the counter reaches zero.
It'd be better if someone verifies my answer; I'm not an expert in this field.
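To make the flag behaviour concrete, here is a rough sketch of a countdown loop, written in Python with made-up names rather than real assembly, where the ALU result sets a zero flag and a BNZ-style test decides whether to branch back (a BZ would branch on the opposite condition):

def countdown(n):
    # acc plays the role of a register; zero_flag models the Z bit the ALU sets.
    # Assumes n >= 1, just like a real DEC/BNZ loop that runs at least once.
    acc = n
    while True:
        print("loop body ran, acc =", acc)
        acc -= 1                  # DEC-like step: the result updates the zero flag
        zero_flag = (acc == 0)
        if not zero_flag:         # BNZ: branch back to the top while Z is clear
            continue
        break                     # a BZ would instead branch away once Z is set

countdown(3)  # the body runs 3 times, then the loop exits when acc reaches 0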

Related

Ada mod and rem implementation

When looking into exactly what the difference is between mod and rem (something I admittedly should have done years ago), I found little on the matter. https://en.wikipedia.org/wiki/Modulo_operation states there are a few different divisions that can be used, and also states which sign the result has for each. If there's any statement about which division is performed in the ARM (the Ada Reference Manual), I must've missed it. I assume it's Euclidean, but I want to be sure.
edit:
So I had missed this: http://www.adaic.org/resources/add_content/standards/05rm/html/RM-4-5-5.html which covers the relations. However, in the relation for mod: A = B*N + (A mod B)
The only mention of N is "in addition, for some signed integer value N". Where does N come from?
As said in the comments, http://www.ada-auth.org/standards/12rm/html/RM-4-5-5.html explains the fundamental differences in behavior well. The tables lower down in the reference manual were of great help. The conclusion I eventually came to (and implemented for various fractional types) is that rem uses truncated division and mod uses floored division. I will edit this answer should I be shown wrong.
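To see the sign behaviour concretely, here is a small sketch in Python rather than Ada (the helper names are mine); per the conclusion above, Ada's mod corresponds to the floored variant and rem to the truncated one:

import math

def floored_mod(a, b):
    # result takes the sign of the divisor (the behaviour Ada's mod matches)
    return a - b * math.floor(a / b)

def truncated_rem(a, b):
    # result takes the sign of the dividend (the behaviour Ada's rem matches)
    return a - b * math.trunc(a / b)

for a, b in [(7, 3), (-7, 3), (7, -3), (-7, -3)]:
    print(a, b, "mod:", floored_mod(a, b), "rem:", truncated_rem(a, b))
# 7 3: mod 1, rem 1; -7 3: mod 2, rem -1; 7 -3: mod -2, rem 1; -7 -3: mod -1, rem -1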

What happens to the Schweizer T-norm when p goes to zero?

I am reading Jang's book Neuro-Fuzzy and Soft Computing, and in the 2nd chapter the author talks about the Schweizer and Sklar T-norm, which is presented by this equation:
It's a handy T-norm. In the exercises (#20, page 45) it asks what would happen to Tss(a,b,p) if p -> 0.
In fact, it asks to show that the whole equation is going to be just ab in the end.
I tried different things and at last I used Ln, but I got this: -1/p Ln(a^-p + b^-p), and I have no idea where to go from here!
Can anybody suggest anything? Thanks for your help.
P.S.: Is there any simple way of expanding Ln(x+y) in general?
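A sketch of the limit, assuming the standard Schweizer-Sklar form T_ss(a,b;p) = (a^{-p} + b^{-p} - 1)^{-1/p} (the equation itself is not reproduced above, so that form is an assumption), written in LaTeX:

\ln T_{ss}(a,b;p) = -\frac{1}{p}\,\ln\!\left(a^{-p} + b^{-p} - 1\right)
a^{-p} = e^{-p\ln a} = 1 - p\ln a + O(p^2), \qquad b^{-p} = 1 - p\ln b + O(p^2)
\Rightarrow\; a^{-p} + b^{-p} - 1 = 1 - p(\ln a + \ln b) + O(p^2)
\Rightarrow\; \ln T_{ss}(a,b;p) = -\frac{1}{p}\left[-p(\ln a + \ln b) + O(p^2)\right] \;\longrightarrow\; \ln a + \ln b = \ln(ab) \quad (p \to 0)

so T_ss(a,b;p) -> ab as p -> 0. Note that the "-1" inside the logarithm is essential: it is what makes the argument tend to 1 rather than 2, which is exactly where the attempt -1/p Ln(a^-p + b^-p) gets stuck.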

Fix floating point imprecision in ceiling

The problem:
ceiling(31)
#31
ceiling(31/60*60)
#32
What is the correct way to fix this kind of error?
Doing the multiplication before the division is not an option; my code looks something like this:
x <- 31/60
...
y <- ceiling(x*60)
I'm thinking of doing a new function:
ceil <- function(x) {
  ceiling(signif(x))
}
But I'm new to R, maybe there is a better way.
UPDATE
Sorry, I didn't give more details. I have the same problem in different parts of my code for different reasons, but always with ceiling.
I am aware of the rounding error in floating-point calculations. Maybe the title of the question could be improved: I don't want to fix an imprecision of the ceiling function; what I want to do is perhaps the opposite, make ceiling less exact. A way to tell R to ignore the digits that are clearly noise:
options(digits=17)
31/60*60
#31.000000000000004
But, apparently, the epsilon required to ignore the noise digits depends on the context of the problem.
The real problem here, I strongly believe, is found in my hero The Data Munger Guru's tagline, which is: "What is the problem that you are trying to solve? Tell me what you want to do, not how you want to do it."
There are myriad cases where floating-point precision will cause apparent integers to turn into "integer +/- epsilon", and so you need to figure out why you are going for "ceiling", why you allow your values to not be integers, etc. <-- more or less what Pascal Cuoq wrote in his comment.
The solution to your concern thus depends on what's actually going on. Perhaps you want, say, trunc(x/60) -> y followed by trunc(y*60), or maybe not :-). Maybe you want y <- round(x/60*60) + 1, or jhoward's suggested approach. It depends, as I stress here, critically on what your goal is and how you want to deal with corner cases.
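As a sketch of the "ignore the noise digits" idea (written in Python, which uses the same IEEE-754 doubles as R, so 31/60*60 misbehaves identically; the tolerance 1e-9 is an arbitrary choice that has to fit the scale of your data):

import math

def ceil_with_tolerance(x, eps=1e-9):
    # Treat values within eps of an integer as that integer, then take the ceiling.
    nearest = round(x)
    if abs(x - nearest) < eps:
        return nearest
    return math.ceil(x)

print(math.ceil(31/60*60))            # 32  -- the surprising result
print(ceil_with_tolerance(31/60*60))  # 31

The same shape translates directly to an R function; the point is only that the epsilon, as noted above, depends on the context of the problem.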

Prolog Accumulators. Are they really a "different" concept?

I am learning Prolog in my Artificial Intelligence lab, from the source Learn Prolog Now!.
In the 5th chapter we come to learn about accumulators, and as an example these two code snippets are given.
To Find the Length of a List
without accumulators:
len([],0).
len([_|T],N) :- len(T,X), N is X+1.
with accumulators:
accLen([_|T],A,L) :- Anew is A+1, accLen(T,Anew,L).
accLen([],A,A).
I am unable to understand how the two snippets are conceptually different. What exactly is the accumulator doing differently? And what are the benefits?
Accumulators sound like intermediate variables (correct me if I am wrong), and I had already used them in my programs up till now, so is it really that big a concept?
When you give something a name, it suddenly becomes more real than it used to be. Discussing something can now be done by simply using the name of the concept. Without getting any more philosophical, no, there is nothing special about accumulators, but they are useful.
In practice, going through a list without an accumulator:
foo([]).
foo([H|T]) :-
foo(T).
The head of the list is left behind, and cannot be accessed by the recursive call. At each level of recursion you only see what is left of the list.
Using an accumulator:
bar([], _Acc).
bar([H|T], Acc) :-
bar(T, [H|Acc]).
At every recursive step, you have the remaining list and all the elements you have gone through. In your accLen/3 example, you only keep the count, not the actual elements, as this is all you need.
Some predicates (like accLen/3) can be made tail-recursive with accumulators: you don't need to wait for the end of your input (exhausting all elements of the list) to do the actual work, but instead do it incrementally as you get the input. Prolog doesn't have to leave values on the stack and can do tail-call optimization for you.
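As a sketch of that contrast outside Prolog (Python here; note that CPython does not do tail-call optimisation, so this only shows the shape of the two computations, not the stack savings a Prolog system can give you):

def length_plain(lst):
    # Work happens on the way back: each frame waits for the recursive result.
    if not lst:
        return 0
    return 1 + length_plain(lst[1:])

def length_acc(lst, acc=0):
    # Work happens on the way down: the running count travels with the call.
    if not lst:
        return acc
    return length_acc(lst[1:], acc + 1)

print(length_plain([10, 20, 30]))  # 3
print(length_acc([10, 20, 30]))    # 3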
Search algorithms that need to know the "path so far" (or any algorithm that needs to have a state) use a more general form of the same technique, by providing an "intermediate result" to the recursive call. A run-length encoder, for example, could be defined as:
rle([], []).
rle([First|Rest], Encoded) :-
    rle_1(Rest, First, 1, Encoded).

rle_1([], Last, N, [Last-N]).
rle_1([H|T], Prev, N, Encoded) :-
    (   dif(H, Prev)
    ->  Encoded = [Prev-N|Rest],
        rle_1(T, H, 1, Rest)
    ;   succ(N, N1),
        rle_1(T, H, N1, Encoded)
    ).
Hope that helps.
TL;DR: yes, they are.
Imagine you are to go from a city A on the left to a city B on the right, and you want to know the distance between the two in advance. How are you to achieve this?
A mathematician in such a position employs magic known as structural recursion. He says to himself, what if I'll send my own copy one step closer towards the city B, and ask it of its distance to the city? I will then add 1 to its result, after receiving it from my copy, since I have sent it one step closer towards the city, and will know my answer without having moved an inch! Of course if I am already at the city gates, I won't send any copies of me anywhere since I'll know that the distance is 0 - without having moved an inch!
And how do I know that my copy-of-me will succeed? Simply because he will follow the same exact rules, while starting from a point closer to our destination. Whatever value my answer will be, his will be one less, and only a finite number of copies of us will be called into action - because the distance between the cities is finite. So the total operation is certain to complete in a finite amount of time and I will get my answer. Because getting your answer after an infinite time has passed, is not getting it at all - ever.
And now, having found out his answer in advance, our cautious magician mathematician is ready to embark on his safe (now!) journey.
But that of course wasn't magic at all - it's all being a dirty trick! He didn't find out the answer in advance out of thin air - he has sent out the whole stack of others to find it for him. The grueling work had to be done after all, he just pretended not to be aware of it. The distance was traveled. Moreover, the distance back had to be traveled too, for each copy to tell their result to their master, the result being actually created on the way back from the destination. All this before our fake magician had ever started walking himself. How's that for a team effort. For him it could seem like a sweet deal. But overall...
So that's how the magician mathematician thinks. But his dual the brave traveler just goes on a journey, and counts his steps along the way, adding 1 to the current steps counter on each step, before the rest of his actual journey. There's no pretense anymore. The journey may be finite, or it may be infinite - he has no way of knowing upfront. But at each point along his route, and hence when ⁄ if he arrives at the city B gates too, he will know his distance traveled so far. And he certainly won't have to go back all the way to the beginning of the road to tell himself the result.
And that's the difference between the structural recursion of the first, and tail recursion with accumulator ⁄ tail recursion modulo cons ⁄ corecursion employed by the second. The knowledge of the first is built on the way back from the goal; of the second - on the way forth from the starting point, towards the goal. The journey is the destination.
see also:
Technical Report TR19: Unwinding Structured Recursions into Iterations. Daniel P. Friedman and David S. Wise (Dec 1974).
What are the practical implications of all this, you ask? Why, imagine our friend the magician mathematician needs to boil some eggs. He has a pot; a faucet; a hot plate; and some eggs. What is he to do?
Well, it's easy - he'll just put eggs into the pot, add some water from the faucet into it and will put it on the hot plate.
And what if he's already given a pot with eggs and water in it? Why, it's even easier to him - he'll just take the eggs out, pour out the water, and will end up with the problem he already knows how to solve! Pure magic, isn't it!
Before we laugh at the poor chap, we mustn't forget the tale of the centipede. Sometimes ignorance is bliss. But when the required knowledge is simple and "one-dimensional" like the distance here, it'd be a crime to pretend to have no memory at all.
Accumulators are intermediate variables, and are an important (read: basic) topic in Prolog, because they allow reversing the information flow of some fundamental algorithms, with important consequences for the efficiency of the program.
Take reversing a list as an example:
nrev([],[]).
nrev([H|T], R) :- nrev(T, S), append(S, [H], R).
rev(L, R) :- rev(L, [], R).
rev([], R, R).
rev([H|T], C, R) :- rev(T, [H|C], R).
nrev/2 (naive reverse) is O(N^2), where N is the list length, while rev/2 is O(N).
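A sketch of why (in Python, modelling Prolog lists as (Head, Tail) cons pairs so that prepending is O(1), as it is in Prolog; the helper names are mine):

def append_cons(xs, ys):
    # like append/3: walks and rebuilds all of xs, so it costs O(length of xs)
    return ys if xs is None else (xs[0], append_cons(xs[1], ys))

def nrev(xs):
    # naive reverse: an append at every level -> O(N^2) overall, like nrev/2
    return None if xs is None else append_cons(nrev(xs[1]), (xs[0], None))

def rev(xs, acc=None):
    # accumulator reverse: one O(1) prepend per element -> O(N), like rev/3
    return acc if xs is None else rev(xs[1], (xs[0], acc))

xs = (1, (2, (3, None)))
print(nrev(xs))  # (3, (2, (1, None)))
print(rev(xs))   # (3, (2, (1, None)))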

Prevent regular expression formatting in my JavaScript

I'm sure someone else has asked this but my Google foo is failing me and I cannot find it.
When I divide more than once in an equation like this:
this.active[i].pos(last.pos()+(last.width()/2)+10+(this.active[i].width()/2));
"/2)+10+(this.active[i].width()/" will come up with regular expression formatting(all orange) in the editor which is driving me insane. :(
Is there a way I can change my settings to prevent this? I do not use regular expression at all, so disabling it's formatting entirely in the editor would be acceptable.
Can anyone provide, or point me towards, an answer?
If you found it on Google, I would appreciate learning your search terms.
Thank you. :)
I'd been searching the web for about 45 minutes trying to find a solution to this very question when I came across this question here on Stack Overflow. I almost started a bounty on it but decided I'd see if I could figure it out myself.
I came up with two possible solutions, both of which are much simpler than I thought they would be.
Solution 1: Separate the formula into two sections that can be stored in variables and added together when needed. For example, I happened to be writing a formula for a surface area calculation that was getting the regular expression formatting:
return ((this.base * this.height) / 2) + ((this.perimeter * this.slant) / 2);
I split the formula at the + and stored them in variables:
var a = (this.base * this.height)/2;
var b = (this.perimeter * this.slant)/2;
return a + b;
This solution worked just fine. But then I started thinking that there had to be a simpler solution I was overlooking, which led me to:
Solution 2: Dividing by 2 is the same as multiplying by 0.5 (duh!). In my case - and in almost any case - dividing by 2 and multiplying by 0.5 will get you the same answer. My code then looked like this:
return ((this.base * this.height) * 0.5) + ((this.perimeter * this.slant) * 0.5);
I tested both, and both work, though obviously solution 2 is more efficient (less code).
The only time I could imagine needing to use solution 1 is if you're dividing by a very long number or a prime number (dividing by 3 gives you a more accurate result than multiplying by 0.33, since 0.33 is only an approximation of 1/3).
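A quick check of that claim (in Python, but JavaScript numbers are the same IEEE-754 doubles, so the behaviour carries over):

x = 123.456
print(x / 2 == x * 0.5)   # True for any finite x: 0.5 is exactly representable
print(x / 3 == x * 0.33)  # False: 0.33 is only an approximation of 1/3
print(x / 3, x * 0.33)    # roughly 41.15 vs 40.74 -- not even close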
Anyway, I know you posted this question months ago and probably either came up with a solution or moved on, but I figured I'd post this answer anyway as a reference for any future issues with the same idea.
(Also, this is in JavaScript but I can't imagine something this simple is any different in a similar language).
