I'm trying to accomplish something which I think is fairly simple, but I'm not able to. I have a function that takes a number of dimensions as input, say func(n). What I'd like the function to do is find all possible directions along which an entity can move in that n-dimensional space. So for n=2 I'm expecting the output to be
1, 1
1,-1
-1, 1
-1,-1
The end use case is to say: given a pair of variables, either both can increase, both can decrease, or one can increase while the other decreases, and vice versa. It's easy to enumerate them for n=2, but my n is bound to be in the 8-12 range, which gives 2^8 to 2^12 combinations. How is this done in R?
I tried the permutations function in the gtools package, but that's clearly not what is needed here. Any pointers appreciated.
We could use expand.grid
expand.grid(rep(list(c(1, -1)), 2))
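To get the func(n) from the question, wrap this in a function (a minimal sketch; the output for n = 2 matches the four rows above, though possibly in a different order):

# Each of the n dimensions can move in direction 1 or -1;
# expand.grid enumerates all 2^n combinations as rows.
func <- function(n) expand.grid(rep(list(c(1, -1)), n))
func(2)         # the four +/-1 pairs listed in the question
nrow(func(12))  # 4096, i.e. 2^12 rows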
Assume I had the following iterative function:
f(z) = z^2 + c
z initially equal to 0
and each output of the function becomes z for the next iteration; i.e., if c is 1, then the first iteration gives 1, the second gives 2, and so forth.
Now assuming I have already set a value for c, I would like to use Python to find the limit of this function as the number of iterations approaches infinity. How would I best be able to do that? Would Sympy be a good tool?
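(For a sanity check: if a finite limit L exists, it must satisfy L = L^2 + c, i.e. L = (1 ± sqrt(1 - 4c))/2. For c = 1 there is no real solution, which matches the sequence 1, 2, 5, 26, ... growing without bound.)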
Edited to clarify what I mean by an iterative function.
As a background, I'm a computer programmer and I'm working on a software library that allows a computer to quickly search through all dates to find a set of dates that satisfies a criteria. For example:
I want a list of every possible time that has ever occurred on a Friday or a Saturday in April or May during the first week of the month.
My library uses numerical sets to efficiently represent ranges of dates that satisfy a criteria.
I've been thinking about ways to improve the performance of some parts of the app, and I think that by combining sets and some geometry I can really improve my results. However, my geometry is a bit rusty, and I was hoping you might be able to help.
Here's my thought:
Certain elements of time can be represented as a circular dial. For example, minutes can be positioned on a clock with values between 0...59. We could store valid ranges as a list of arcs. If we wanted all times that end with :05-:10, we could store [5, 10]. If we wanted all times that end with :45-:59 or :00-:15, we could store [45, 15]. Notice how this last arc "loops around" the dial. (Here there was a mockup showing different ranges intersecting on a dial.)
My question is this:
Given a set of whole numbers between N...M arranged into a circle.
Given Arc1, represented by [A, B], and Arc2, represented by [C, D], where A, B, C, and D are all within the range N...M.
How do I determine:
A. Whether the arcs intersect.
B. If they do, what their intersection is.
C. If they do, what their union is.
Thank you so much for your help. If you're not able to help but can point me in the right direction, that would be great.
Thanks!
A simple and safe approach is to split the intervals that straddle 0. Then you perform pairwise interval intersection/union (for instance, if A < D and C < B, then the intersection is [max(A,C), min(B,D)]), and merge the resulting pieces if they meet at 0.
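A minimal sketch of that approach (in R, to match the other snippets on this page; the helper names are mine), assuming dial positions 0..N-1 and arcs given as [start, end]:

split_arc <- function(arc, N) {
  # An arc "wraps" when start > end; split it at 0 into plain intervals
  if (arc[1] <= arc[2]) list(arc)
  else list(c(arc[1], N - 1), c(0, arc[2]))
}
intersect_ivl <- function(p, q) {
  # Ordinary interval intersection; NULL when they miss each other
  lo <- max(p[1], q[1]); hi <- min(p[2], q[2])
  if (lo <= hi) c(lo, hi) else NULL
}
intersect_arcs <- function(arc1, arc2, N) {
  pieces <- list()
  for (p in split_arc(arc1, N))
    for (q in split_arc(arc2, N)) {
      r <- intersect_ivl(p, q)
      if (!is.null(r)) pieces[[length(pieces) + 1]] <- r
    }
  pieces  # pieces that meet at 0 can be merged back into one wrapping arc
}
intersect_arcs(c(45, 15), c(50, 20), 60)  # pieces [50,59] and [0,15]

The union works the same way: split both arcs, merge overlapping or touching intervals, and rejoin any pieces that meet at 0.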
It seems the primitive operation to implement would be something like "is the number X contained in the arc [A,B]". Once you have that, you could implement an [A,B]/[C,D] arc-intersection predicate, as sketched after the list below.
Arc intersection means exactly that at least one of the following conditions is met:
C is contained in [A,B]
D is contained in [A,B]
A is contained in [C,D]
B is contained in [C,D]
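For instance (an R sketch, since R is used elsewhere on this page; on_arc and arcs_intersect are my names), with arcs that may wrap past 0:

on_arc <- function(x, a, b) {
  # Is position x on the arc [a, b]? A wrapping arc has a > b.
  if (a <= b) x >= a && x <= b
  else        x >= a || x <= b
}
arcs_intersect <- function(a, b, c, d) {
  # Intersection iff an endpoint of one arc lies on the other
  on_arc(c, a, b) || on_arc(d, a, b) || on_arc(a, c, d) || on_arc(b, c, d)
}
arcs_intersect(45, 15, 50, 20)  # TRUE: the two wrapping arcs overlap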
One way to implement this contained-in-arc test without any branches is with some trigonometry and a vector cross product. Not sure it would be faster (the math/branches performance tradeoff is entirely empirical), but it might be worth a try.
Denote Xa = sin(A/N * 2PI), Ya = cos(A/N * 2PI), and similarly Xb, Yb for B and Xc, Yc for C.
"C is contained in [A,B]" is then equivalent to:
Xc * Ya - Yc * Xa > 0
AND
Xb * Yc - Yb * Xc > 0
(Each cross product is the sine of the angular difference between the two points, so the first test says C lies within the half-turn clockwise of A, and the second that B lies within the half-turn clockwise of C. This assumes each arc spans less than half the dial.)
You can complete the other 3 conditions in an identical manner.
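Putting the test together (an R sketch; on_arc_trig is my name, and the test assumes each arc spans less than half the dial and excludes the endpoints):

on_arc_trig <- function(x, a, b, N) {
  X <- function(p) sin(2 * pi * p / N)  # coordinates of a dial position
  Y <- function(p) cos(2 * pi * p / N)
  # Two cross-product sign tests, no branching on wrap-around
  (X(x) * Y(a) - Y(x) * X(a) > 0) && (X(b) * Y(x) - Y(b) * X(x) > 0)
}
on_arc_trig(10, 0, 20, 60)  # TRUE: minute 10 lies on the arc [0, 20]
on_arc_trig(50, 0, 20, 60)  # FALSE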
Hope this turns out useful.
I think I have a rather simple problem, but I can't figure out the best approach. I have a vector with 30 different values. Now I need to divide the vector into 10 groups in such a way that the mean within-group variance is as small as possible. The size of the groups is not important; it can be anything between one and 21.
Example: let's say I have a vector of six values that I have to split into three groups:
Myvector <- c(0.88,0.79,0.78,0.62,0.60,0.58)
Obviously the solution would be:
Group1 <-c(0.88)
Group2 <-c(0.79,0.78)
Group3 <-c(0.62,0.60,0.58)
Is there a function that gives the same outcome as the example and that I can use for my vector with 30 values?
Many thanks in advance.
It sounds like you want to do k-means clustering. Something like this would work:
kmeans(Myvector, 3, algorithm = "Lloyd")
Note that I changed the default algorithm to match your desired output. If you read the ?kmeans help page, you will see that there are several algorithms to choose from, because this is not a trivial computational problem; they do not necessarily guarantee an optimal grouping.
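To recover the actual groups from the fit (cluster labels are assigned arbitrarily, so the numbering may not match the example):

set.seed(1)  # Lloyd's algorithm starts from random centers
km <- kmeans(Myvector, 3, algorithm = "Lloyd")
split(Myvector, km$cluster)
# one group per cluster, e.g. (0.88), (0.79, 0.78), (0.62, 0.60, 0.58)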
I'm converting a rather complicated set of code from Matlab to R. I have zero experience in Matlab and am a functioning novice in R.
I have a segment of code which reads (in matlab):
dSii=(sum(tao.*Sik,1))'-(sum(m'))'.*Sii-beta.*Sii./N.*(Iii+sum(Iik)');
I've simplified it and will focus on the first segment (if I can solve that, I'm confident I can perform the rest):
J = (sum(A.*B,1))' - ...
tao (or A) and Sik (or B) are matrices. So my assumption is I'm performing matrix multiplication here (A * B) and summing the resultant columns. The '1' is what is throwing me off in that statement. In R, that 1 would likely indicate we're talking about a sum over rows, as opposed to columns (indicated by 2). But I can't find any supporting documentation for that kind of Matlab statement.
I was thinking of using a statement like this (but of course, there are too many '1's and ','s):
J<- (apply(A*B, 1), 1, sum)
Thanks for all your help. I searched for other examples here and elsewhere and couldn't find an answer. I'm willing to work for it, but this is akin to me studying French (which I don't know) to translate into Spanish (which I'm moderate in) while interpreting the whole process in English. :D
Because of the different conventions in R and Matlab, the idiosyncrasies have to be learned for each (just like your language analogy!). The Matlab command sum(A.*B,1) means: multiply A and B element-wise (so they must be the same shape), then sum along dimension 1, i.e. add the rows together to get the column sums. Dimension 1 is the default, so sum(A.*B) would do the same thing as sum(A.*B,1). Because R's * operator is already element-wise on matrices (matrix multiplication is %*%), the following Matlab and R codes produce the same column of numbers in J:
Matlab:
A=[[1,2,3];[4,5,6];[7,8,9]];
B=[[10,11,12];[13,14,15];[16,17,18]];
J=sum(A.*B,1)'; %the ' means to transpose the column sums to be a 3x1 matrix
R:
A<-matrix(c(1,2,3,4,5,6,7,8,9),3,byrow=T)
B<-matrix(c(10,11,12,13,14,15,16,17,18),3,byrow=T)
J<-matrix(colSums(A*B)) # no transpose needed here: nrow(J)==3
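If you prefer the apply() form from your attempt, this is equivalent; note that Matlab's "sum along dimension 1" corresponds to R's MARGIN = 2, which applies the function to each column:

J <- matrix(apply(A * B, 2, sum))  # same as matrix(colSums(A * B))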
EDIT
So it seems I "underestimated" what varying length numbers meant. I didn't even think about situations where the operands are 100 digits long. In that case, my proposed algorithm is definitely not efficient. I'd probably need an implementation who's complexity depends on the # of digits in each operands as opposed to its numerical value, right?
As suggested below, I will look into the Karatsuba algorithm...
Write the pseudocode of an algorithm that takes in two arbitrary length numbers (provided as strings), and computes the product of these numbers. Use an efficient procedure for multiplication of large numbers of arbitrary length. Analyze the efficiency of your algorithm.
I decided to take the (semi) easy way out and use the Russian Peasant Algorithm. It works like this:
a * b = (a/2) * 2b if a is even
a * b = ((a-1)/2) * 2b + b if a is odd
My pseudocode is:
rpa(x, y){
    if x is 1
        return y
    if x is even
        return rpa(x/2, 2y)
    if x is odd
        return rpa((x-1)/2, 2y) + y
}
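As a sanity check, here is the same pseudocode transcribed directly (in R, to match the rest of this page):

rpa <- function(x, y) {
  if (x == 1) return(y)
  if (x %% 2 == 0) return(rpa(x %/% 2, 2 * y))  # even: halve x, double y
  rpa((x - 1) %/% 2, 2 * y) + y                 # odd: same, plus y
}
rpa(13, 7)  # 91, via 13*7 = 6*14 + 7 = 3*28 + 7 = (1*56 + 28) + 7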
I have 3 questions:
Is this efficient for arbitrary-length numbers? I implemented it in C and tried numbers of varying lengths. The run-time was near-instant in all cases, so it's hard to tell empirically...
Can I apply the Master's Theorem to understand the complexity...?
a = # subproblems in recursion = 1 (max 1 recursive call across all states)
n / b = size of each subproblem = n / 2 -> b = 2 (the value of x halves on each call)
f(n) = work done outside the recursive calls = O(1) -> d = 0 (the addition when x is odd)
a = 1, b^d = 2^0 = 1, a = b^d -> complexity is Θ(n^d * log n) = Θ(log n)
This makes sense logically, since we are halving x at each step, right?
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
Many thanks in advance
What might my professor mean by providing arbitrary-length numbers "as strings"? Why do that?
This actually changes everything about the problem (and makes your algorithm's analysis incorrect).
It means that 1234 is provided as the digits 1, 2, 3, 4, and you cannot operate directly on the whole number. You need to analyze your algorithm in terms of the number of digit additions, multiplications, and divisions.
You should expect a division to be a bit more expensive than a multiplication, and a multiplication to be a lot more expensive than an addition. So a good algorithm tries to reduce the number of divisions and multiplications.
Check out the Karatsuba algorithm (but don't copy it; that's not what your teacher wants). It is one of the fastest for this kind of problem.
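To make "operate on the digits" concrete, here is the schoolbook method on digit strings (an R sketch; str_mul is my name). It is the O(n^2) baseline that Karatsuba improves to roughly O(n^1.585):

str_mul <- function(s1, s2) {
  # Digits least-significant first: "1234" becomes c(4, 3, 2, 1)
  a <- rev(as.integer(strsplit(s1, "")[[1]]))
  b <- rev(as.integer(strsplit(s2, "")[[1]]))
  res <- integer(length(a) + length(b))  # enough room for the product
  for (i in seq_along(a))                # digit-by-digit partial products
    for (j in seq_along(b))
      res[i + j - 1] <- res[i + j - 1] + a[i] * b[j]
  carry <- 0
  for (k in seq_along(res)) {            # one carry-propagation pass
    v <- res[k] + carry
    res[k] <- v %% 10
    carry <- v %/% 10
  }
  out <- paste(rev(res), collapse = "")
  sub("^0+(?=.)", "", out, perl = TRUE)  # drop leading zeros, keep one digit
}
str_mul("1234", "5678")  # "7006652"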
Regarding 3): Native integers are limited in how large (or small) a number they can represent (32- or 64-bit integers, for example). To represent arbitrary-length numbers you can choose strings, because then you are not limited in this way. The problem is then, of course, that your arithmetic units are not really made to add strings ;-)