I have two vectors: one containing the means of my data, the other the standard deviations.
I want both numbers rounded to the same decimal position, and the rounding should depend on the standard deviation: round to two significant figures of it if its first two digits are below 24, and to one significant figure if they would be 25 or bigger.
Here are examples, as this is really confusing:
2.30344 0.01223 -> 2.303 0.012
304.57231 1.35234 -> 304.6 1.3
204.43953 3.35234 -> 204 3
I know of the round function, where I can enter the digits and which I would have to apply to both. I also know of the signif function, where I could enter two digits, but how can I check whether the first two digits are then smaller than 25? And how can I figure out to which digit signif has decided to round?
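To make the desired rule concrete, here is a minimal R sketch (round_pair is a hypothetical helper of my own; I treat the "below 24" / "25 or bigger" boundary as a cutoff at 25):
round_pair <- function(m, s) {
  d <- 1 - floor(log10(s))                      # decimal position of the 2nd significant digit of s
  lead2 <- floor(s / 10^(floor(log10(s)) - 1))  # first two significant digits of s
  if (lead2 >= 25) d <- d - 1                   # 25 or bigger: keep only one significant figure
  c(round(m, d), round(s, d))
}
round_pair(2.30344, 0.01223)    # 2.303 0.012
round_pair(204.43953, 3.35234)  # 204 3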
I have a vector that is being filled with random numbers within the range [0,1]. I want to accept only the vectors in which each element deviates by at most 0.02 from its previous and its next element.
For example, the [3,1] vector below is acceptable, because the deviation of the 2nd element from both the first and the third element is not bigger than 0.02. The vector does not always consist of 3 rows; it could be more.
**Vector**
0.32957
0.33097
0.33946
This is what I thought:
n = 4;
P = rand(1, n);
P = sort(P, "ascend");           % sort() does not modify P in place
for L = 2:n
  while P(L) - P(L-1) > 0.02     % sorted ascending, so compare in this order
    P = sort(rand(1, n), "ascend");
  endwhile
endfor
Vectorize this!
isvalid = ~any(diff(sort(a)) > 0.02);
sort(a): if it's not sorted, sort it
diff(): take the differences between adjacent elements
___ > 0.02: check if those differences are bigger than what you accept
~any(): if any is bigger, return zero, "not valid"
From your code, it seems that there may be more to the question than what you ask; you seem to have the XY problem. You want to create a random vector that has the properties you describe. You seem to be using uniform random numbers, so let me propose a way to generate your vector such that your conditions are always true.
a(1) = rand(1);               % or any other way to generate a first value
n = 100;                      % desired length (avoid shadowing the builtin length())
a(2:n) = rand(1, n-1)*0.02;   % generate increments never bigger than 0.02
a = cumsum(a);                % cumulative sum
This ensures the vector is increasing in value, and never by more than 0.02 between adjacent elements.
I'm facing a problem. When we want to subtract one number from another using 2's complement, we can do that. I don't know how to subtract fractional numbers using 2's complement.
5 in binary form is 101 and 2 is 10. If we want to subtract 2 from 5, we need to find the 2's complement of 2:
2's complement of 2 -> 11111110
If we now add this to the binary of 5, we get the subtraction result. If I want to get the result of 5.5 - 2.125, what would be the procedure?
Fixed-point numbers can be used, and it is still common to find them in embedded code or hardware.
Their use is identical to integers, but you need to specify where your "point" is. For instance, assume that you want 3 bits after the point and that your data is 8 bits wide: bits 7..3 are the integer part (left of the point) and bits 2..0 the fractional part. The interpretation of the integer part is, as usual, the binary decomposition of that integer: bit 3 corresponds to 2^0, bit 4 to 2^1, etc.
For the fractional part, the decomposition is in negative powers of two: bit 2 corresponds to 2^-1, bit 1 to 2^-2 and bit 0 to 2^-3.
So for your problem, 5.5 = 4+1+1/2 = 2^2+2^0+2^-1 and its code is 00101(.)100. Similarly, 2.125 = 2+1/8 and its code is 00010(.)001 (note that (.) is just a help to understand the coding).
Indeed these are just integers, but you must take into account that all your numbers are multiplied by 2^-3. This has no impact on addition, but the results of multiplications and divisions must be adjusted. Taking into account the position of the point and managing overflows and underflows is the difficulty of fixed-point arithmetic, but it allows fractional computations even if your hardware does not provide floating-point support (for instance on low-end microcontrollers or FPGA systems).
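To illustrate the adjustment, here is a small sketch of my own in R, reusing the Q5.3 codes derived above (any language with integer arithmetic works the same way):
a <- 44L              # 5.5   in Q5.3, i.e. 5.5 * 2^3
b <- 17L              # 2.125 in Q5.3
raw <- a * b          # the raw product is scaled by 2^6
q53 <- raw %/% 8L     # shift right by 3 bits to come back to Q5.3
q53 / 8               # 11.625: the exact product 11.6875 lost its 2^-4 bit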
Two's complement is similar to the integer case and its computation is identical. If the code of 2.125 is 00010(.)001, then -2.125 == 11101(.)111. Operations are as usual.
+5 00101(.)100
-2.125 11101(.)111
00011(.)011
and 00011(.)011 = 2+1+1/4+1/8 = 3.375
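The whole computation can be reproduced in a few lines of R (again my own sketch; the 8-bit mask 255 mimics the hardware word width):
a <- 44L                                 # 5.5   -> 00101(.)100
b <- 17L                                 # 2.125 -> 00010(.)001
neg_b <- bitwAnd(bitwNot(b) + 1L, 255L)  # 2's complement of b: 11101(.)111
bitwAnd(a + neg_b, 255L) / 8             # 3.375 -> 00011(.)011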
For the record, two's complement was first used for fixed-point fractional numbers, and the name comes from that. If a fractional number is represented by, say, 0(.)1100000 (0.75), its negative counterpart will be 1(.)0100000 (-0.75, or 1.25 if interpreted as unsigned), and we always have x + (unsigned)(-x) = 2. For this coding, the negative of a fractional number x is the number y that must be added to x to get 2, hence the name: y is the 2's complement of x.
It seems there is an error in the round function. Below I would expect it to return 6, but it returns 5.
round(5.5)
# 5
Other than 5.5, values such as 6.5 and 4.5 return 7 and 5, as I expect.
Any explanation?
This behaviour is explained in the help file of the round function (see ?round):
Note that for rounding off a 5, the IEC 60559 standard is expected to
be used, ‘go to the even digit’. Therefore round(0.5) is 0 and
round(-1.5) is -2. However, this is dependent on OS services and on
representation error (since e.g. 0.15 is not represented exactly, the
rounding rule applies to the represented number and not to the printed
number, and so round(0.15, 1) could be either 0.1 or 0.2).
round( .5 + 0:10 )
#### [1] 0 2 2 4 4 6 6 8 8 10 10
Another relevant reference is this email exchange by Greg Snow, "R: round(1.5) = round(2.5) = 2?":
The logic behind the round-to-even rule is that we are trying to represent an underlying continuous value, and if x comes from a truly continuous distribution, then the probability that x == 2.5 is 0, and the 2.5 was probably already rounded once from any value between 2.45 and 2.54999999999999... If we use the round-up-on-0.5 rule that we learned in grade school, then the double rounding means that values between 2.45 and 2.50 will all round to 3 (having been rounded first to 2.5). This will tend to bias estimates upwards. To remove the bias we need to either go back to before the rounding to 2.5 (which is often impossible or impractical), or just round up half the time and round down half the time (or better, round proportionally to how likely we are to see values below or above 2.5 rounded to 2.5, but that will be close to 50/50 for most underlying distributions). The stochastic approach would be to have the round function randomly choose which way to round, but deterministic types are not comfortable with that, so "round to even" was chosen (round to odd should work about the same) as a consistent rule that rounds up and down about 50/50.
If you are dealing with data where 2.5 is likely to represent an exact value (money, for example), then you may do better by multiplying all values by 10 or 100 and working in integers, then converting back only for the final printing. Note that 2.50000001 rounds to 3, so if you keep more digits of accuracy until the final printing, then rounding will go in the expected direction; or you can add 0.000000001 (or some other small number) to your values just before rounding, but that can bias your estimates upwards.
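The bias argument is easy to check numerically; here is a quick R illustration of my own (not from the quoted email), using values that all end exactly in .5 so both rules are exercised on every element:
x <- seq(0.5, 9.5, by = 1)      # ten values that all end in .5
mean(floor(x + 0.5)) - mean(x)  # grade-school round-half-up: 0.5, biased upward
mean(round(x)) - mean(x)        # R's round-half-to-even: 0, unbiased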
When I was in college, a professor of Numerical Analysis told us that the behaviour you describe is the correct way to round. You shouldn't always round (integer).5 up, because it is equally distant from (integer) and (integer + 1). In order to minimize the error of the sum (or the error of the average, or whatever), half of those cases should be rounded up and the other half rounded down. The R programmers seem to share the same opinion as my professor of Numerical Analysis...
So, I need to generate a spline function to feed into another program which only accepts a fixed spacing between consecutive points. I used the spline function in R with a given number of points to generate the spline; however, floating-point cutoff makes the spacing between the points variable, for example:
t.spl <- spline(d$V1, d$V2, n = (max(d$V1) - min(d$V1))/0.0200)
> head(t.spl, 7)
x y
1 2.3000 -3.0204
2 2.3202 -3.0204
3 2.3404 -3.0204
4 2.3606 -3.0204
5 2.3807 -3.0204
6 2.4009 -3.0204
7 2.4211 -3.0204
So the spacing between the 1st and 2nd rows is 0.0202, while between the 4th and 5th it is 0.0201. Because of this, the other program that I am feeding this spline into doesn't accept it. Is there any way to make this work?
As an aside: please provide a reproducible example next time (I can't copy/paste your code in because I don't have d or t.spl)
I think you'll find that the different intervals (0.0202 vs 0.0201) are an artifact of the number of characters you are printing to the screen, not of the spline function.
It seems R is printing 4 digits after the decimal point for you for neatness, so it's doing the rounding only for the purposes of displaying the results to you.
You can see how many digits are displayed with options('digits')$digits, and adjust it with options(digits=new_number_of_digits) (see ?options for details).
For example:
options(digits=4)
pi
# 3.142
options(digits=10)
pi
# 3.141592654
In summary, when you feed the values into your other program, make sure you print them with enough decimal places that the other program accepts the intervals as being "equal".
If you are writing to a file, for example, just make sure you write enough digits out. If you are copy-pasting from the R console, make sure you adjust R to print out enough digits.
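For example, a sketch of writing the spline out with full precision (the file name here is just a placeholder, and t.spl is the spline output from the question):
# format() fixes the number of significant digits independently of options("digits")
write.table(format(as.data.frame(t.spl), digits = 15), "spline_points.txt",
            row.names = FALSE, quote = FALSE)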
MathematicalCoffee is probably right. I'm just adding an alternative for the sake of wordiness.
myspline <- splinefun(d$V1, d$V2)
mydata.y <- myspline(desired_x_values, deriv = 0)
This will guarantee the uniform x-spacings you desire.
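For instance, desired_x_values can be built with seq, so the grid is exactly uniform by construction (a sketch reusing the d from the question):
desired_x_values <- seq(min(d$V1), max(d$V1), by = 0.02)
diff(desired_x_values)[1:3]   # 0.02 0.02 0.02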
This is more of a maths question than a programming one, but I figure a lot of people here are pretty good at maths! :)
My question is: given a 9 x 9 grid (81 cells) that must contain the numbers 1 to 9 each exactly 9 times, how many different grids can be produced? The order of the numbers doesn't matter; for example, the first row could contain nine 1's, etc. This is related to Sudoku, and we know the number of valid Sudoku grids is 6.67×10^21; since my problem isn't constrained like Sudoku by having to have each of the 9 numbers in each row, column and box, the answer should be greater than 6.67×10^21.
My first thought was that the answer is 81!; however, on further reflection, this assumes that the 81 numbers possible for each cell are distinct. They are not: there are 81 cells, but only 9 different numbers to fill them with.
My next thought was that each of the cells in the first row can be any number between 1 and 9. If by chance the first row happened to be all the same number, say all 1s, then each cell in the second row could only have 8 possibilities, 2-9. If this continued down until the last row, the number of different permutations could be calculated by 9^2 * 8^2 * 7^2 ..... * 1^2. However, this doesn't work if each row doesn't contain 9 of the same number.
It's been quite a while since I studied this stuff, and I can't think of a way to work it out; I'd appreciate any help anyone can offer.
Imagine taking 81 blank slips of paper and writing a number from 1 to 9 on each slip (nine of each number). Shuffle the deck, and start placing the slips on the 9x9 grid.
You'd be able to create 81! different patterns if you considered each slip to be unique.
But instead you want to consider all the 1's to be equivalent.
For any particular configuration, how many times will that configuration be repeated due to the 1's all being equivalent? The answer is 9!, the number of ways you can permute the nine slips with 1 written on them.
So that cuts the total number of permutations down to 81!/9!. (You divide by the number of indistinguishable permutations: if instead of 9! there were just 2 indistinguishable permutations, you would divide the count by 2, right? The rule is the same here.)
Ah, but you also want the 2's to be equivalent, and the 3's, and so forth.
By the same reasoning, that cuts down the number of permutations to
81!/(9!)^9
By Stirling's approximation, that is roughly 5.3 * 10^70.
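A quick sanity check of the magnitude in R (floating point, so only the leading digits are meaningful):
factorial(81) / factorial(9)^9
# [1] 5.313069e+70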
First, let's start with 81 numbers, 1 through 81. The number of permutations for that is 81P81, or 81!. Simple enough.
However, we have nine 1s, which can be arranged in 9! indistinguishable permutations. Same with 2, 3, etc.
So what we have is the total number of board permutations divided by all the indistinguishable permutations of all numbers, or 81! / (9! ** 9).
>>> from functools import reduce   # needed in Python 3
>>> import operator
>>> reduce(operator.mul, range(1, 82)) // reduce(operator.mul, range(1, 10))**9
53130688706387569792052442448845648519471103327391407016237760000000000