I'm trying to solve a problem in APL, for which I have two vectors v1 and v2 whose lengths differ by at most one, depending on the input. That means that ((≢v1)-(≢v2))∊¯1 0 1.
What would be the best way to interleave said vectors, so as to create a third vector v3 such that v3=v1[0],v2[0],v1[1],v2[1],...?
(If it's relevant, I'm using Dyalog APL version 16.0)
This should work in just about every APL.
(v0,v1)[⍋(⍳⍴v0),⍳⍴v1]
If you want to worry about either v0 or v1 being scalars, then
(v0,v1)[⍋(⍳⍴,v0),⍳⍴,v1]
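For readers who don't speak APL, the grade-up trick can be sketched in Python: concatenate the vectors, then reorder by a stable sort over the two index sequences (this is an illustrative sketch of the idea, not Dyalog itself):

```python
# emulate (v0,v1)[⍋(⍳⍴v0),⍳⍴v1]: a stable grade-up of the two
# index sequences interleaves the concatenated vectors
v0, v1 = [1, 2, 3], [10, 20]
keys = list(range(len(v0))) + list(range(len(v1)))      # (⍳⍴v0),⍳⍴v1
order = sorted(range(len(keys)), key=keys.__getitem__)  # ⍋ (sorted is stable)
joined = v0 + v1                                        # v0,v1
print([joined[i] for i in order])  # [1, 10, 2, 20, 3]
```

Because the sort is stable, elements of v0 come before equal-indexed elements of v1, and the leftover tail of the longer vector lands at the end, just as in the APL expression.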
If you don't mind getting a prototype fill element when the vectors have unequal length, then
Interleave←{,⍉↑⍵}
will do. Try it online!
Otherwise, you can interleave the matching parts, and then append the missing element(s) (this works for length differences greater than one, too):
Interleave←{
    lengths←⌊/≢¨⍵
    main←,⍉↑lengths↑¨⍵
    tail←⊃,/lengths↓¨⍵
    main,tail
}
Try it online!
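The same split-interleave-append logic can be rendered outside APL; here is a Python sketch (the function name is mine, not from the answer):

```python
def interleave(v1, v2):
    # interleave the common-length prefixes, then append whatever is left
    n = min(len(v1), len(v2))
    main = [x for pair in zip(v1[:n], v2[:n]) for x in pair]
    tail = list(v1[n:]) + list(v2[n:])
    return main + tail

print(interleave([1, 2, 3], [10, 20]))  # [1, 10, 2, 20, 3]
```

As with the dfn, this also handles length differences greater than one: the whole leftover tail is simply appended.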
Using a Dyalog dfn:
zip ← {
    mix ← ,⍉↑ ⍺ ⍵
    mask ← ,⍉↑ 1⊣¨¨ ⍺ ⍵
    mask / mix
}
The idea here is to mix both arguments, then transpose the result and finally flatten it (mix).
Then we apply the same operations to arrays of 1s matching the lengths of the given arrays (mask), and use the result as a mask to filter out the fill elements (prototypes) added by the mix primitive.
Note that this also allows zipping arrays whose lengths differ by more than one element.
Try it online!
As I do not know Dyalog APL, I'll answer in the old ISO APL from the 1970s:
(v1,v2)[⍋((0.5×(⍴v1)<⍴v2)+⍳⍴v1),((0.5×(⍴v2)<⍴v1)+⍳⍴v2)]
The first element will come from the longer vector; if they have the same length, the first element is the first element of v1.
Here's how I would solve the original question in APL2:
LEN←∊⌊/⍴¨V1 V2
V3←∊((LEN↑V1),¨LEN↑V2),LEN↓¨V1 V2
With vectors of the same length, use the inner product:
1 2 3,.,40 50 60
┌──────────────┐
│1 40 2 50 3 60│
└──────────────┘
Building on top of that yields this dfn:
{r←(⍴⍺)⌊⍴⍵⋄(∊(r⍴⍺),.,r⍴⍵),(r↓⍺),r↓⍵}
Alternatively, we can laminate (keeping the same overall logic):
{r←(⍴⍺)⌊⍴⍵⋄(,(r⍴⍺),⍪r⍴⍵),(r↓⍺),r↓⍵}
Related
I am trying to solve a problem in computational algebra using python.
Basically, given two sets, say A={a,b} and B={e}, I need to compute the element-by-element tensor products and get a final set, say C={a⊗e, b⊗e}, containing these products of elements.
I can do an element by element multiplication using arrays with numbers but I can't do an element by element tensor multiplication of letters instead of numbers.
If I understood correctly, the code below combines each letter of one set with each letter of the other set:
def getProduct(A, B):
    prod = []
    for a in A:
        for b in B:
            prod.append(a + b)
    return prod

A = ['a', 'b']
B = ['e']
print(getProduct(A, B))
Output: ['ae', 'be']
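As a side note, the nested loops are exactly a Cartesian product, so the pairing can also be done with itertools.product from the Python standard library:

```python
from itertools import product

A = ['a', 'b']
B = ['e']
print([a + b for a, b in product(A, B)])  # ['ae', 'be']
```

This scales to any number of input sets by passing them all to product.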
I was having trouble fitting my question into the 150 character limit of the title; I apologize if it's unclear.
Let's say I have two vectors of equal length (A and B) composed entirely of ones and zeroes. I'd like to know if there exists any pair A_i and B_i such that A_i = 0 when B_i = 1.
My problem is efficiency. It's trivially easy to write a for-loop, do an element by element comparison, set a flag and break if the condition is met. The problem comes up though, that I'm working on matrices of up to 20000 rows and numbers of columns of a similar magnitude. I'd like to have a way of quickly doing this check and removing any of the rows which are redundant for my purposes... at this scale the element by element comparison takes an impractical amount of time.
Is there any elegant linear algebra trick to address this in an efficient way?
Edit 1: The columns aren't exactly arranged at random. I can't say that they are strictly sorted, but I can say that as I go from left to right, the columns on the left are more likely to cover columns on the right than vice versa (by cover, I mean that A covers B iff for all i, if A_i = 1 then B_i = 1 too). I can try to apply some additional kind of sorting to them if it would make addressing the problem easier (but I would prefer to avoid that if the sorting process would be similarly impractical).
Within individual columns, I'm not aware of any pattern to how the ones are distributed by index.
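One observation that may help with the linear-algebra angle: a position with A_i = 0 and B_i = 1 exists exactly when A fails to cover B, and that reduces to a dot-product test, dot(A, B) < sum(B). A minimal pure-Python sketch of the test (for whole matrices, the same idea becomes a single matrix multiply compared against column sums, e.g. in NumPy):

```python
A = [1, 0, 1, 1]
B = [1, 1, 0, 1]
# some i has A[i] == 0 and B[i] == 1  <=>  A does not cover B
# <=>  dot(A, B) < sum(B)
dot = sum(a * b for a, b in zip(A, B))
print(dot < sum(B))  # True: position 1 has A=0, B=1
```

The dot product counts how many 1s of B coincide with 1s of A; if that count falls short of sum(B), some 1 of B meets a 0 of A.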
Imagine you have a list of N numbers. You also have a "target" number. You want to find the combination of Z numbers whose sum is closest to the target.
Example:
Target = 3.085
List = [0.87, 1.24, 2.17, 1.89]
Output:
[0.87, 2.17] = 3.04 (0.045 offset)
In the example above you would get the group [0.87, 2.17] because it has the smallest offset from the target of 0.045. It's a list of 2 numbers but it could be more or less.
My question is what is the best way/algorithm (fastest) to solve this problem? I'm thinking a recursive approach but not yet exactly sure how. What is your opinion on this problem?
This is a variant of the subset-sum (knapsack) problem. To solve it, you can do the following:
def knap(numbers, target):
    # build up the set of all reachable subset sums
    sums = {0}
    for n in numbers:
        # extend with n in one step; adding to a set while iterating
        # over it would raise a RuntimeError in Python
        sums |= {s + n for s in sums if s + n < 2 * target}  # the bound is optional
    # find the sum closest to the target
    return min(sums, key=lambda s: abs(s - target))
Essentially, you are building up all of the possible sums of the numbers. If you have integral values, you can make this even faster by using an array instead of a set.
Intuitively, I would start by sorting the list. (Use your favorite algorithm.) Then find the index of the largest element that is smaller than the target. From there, pick the largest element that is less than the target, and combine it with the smallest element. That would probably be your baseline offset. If it is a negative offset, you can keep looking for combinations using bigger numbers; if it is a positive offset you can keep looking for combinations using smaller numbers. At that point recursion might be appropriate.
This doesn't yet address the need for 'Z' numbers, of course, but it's a step in the right direction and can be generalized.
Of course, depending on the size of the problem the "fastest" way might well be to divide up the possible combinations, assign them to a group of machines, and have each one do a brute-force calculation on its subset. Depends on how the question is phrased. :)
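For completeness, the brute-force baseline over groups of exactly Z numbers is only a few lines with itertools.combinations (fine for small lists, but exponential in general, which is why the subset-sum approach matters at scale):

```python
from itertools import combinations

def closest_group(numbers, target, z):
    # try every size-z group and keep the one whose sum is nearest the target
    return min(combinations(numbers, z), key=lambda g: abs(sum(g) - target))

best = closest_group([0.87, 1.24, 2.17, 1.89], 3.085, 2)
print(best, sum(best))
```

On the question's example this recovers a pair summing to within 0.045 of the target, matching the expected output.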
Working through the first edition of "Introduction to Functional Programming", by Bird & Wadler, which uses a theoretical lazy language with Haskell-ish syntax.
Exercise 3.2.3 asks:
Using a list comprehension, define a function for counting the number
of negative numbers in a list
Now, at this point we're still scratching the surface of lists. I would assume the intention is that only concepts that have been introduced at that point should be used, and the following have not been introduced yet:
A function for computing list length
List indexing
Pattern matching i.e. f (x:xs) = ...
Infinite lists
All the functions and operators that act on lists - with one exception - e.g. ++, head, tail, map, filter, zip, foldr, etc
What tools are available?
A maximum function that returns the maximal element of a numeric list
List comprehensions, with possibly multiple generator expressions and predicates
The notion that the output of the comprehension need not depend on the generator expression, implying the generator expression can be used for controlling the size of the generated list
Finite arithmetic sequence lists i.e. [a..b] or [a, a + step..b]
I'll admit, I'm stumped. Obviously one can extract the negative numbers from the original list fairly easily with a comprehension, but how does one then count them, with no notion of length or indexing?
The availability of the maximum function would suggest the end game is to construct a list whose maximal element is the number of negative numbers, with the final result of the function being the application of maximum to said list.
I'm either missing something blindingly obvious, or a smart trick, with a horrible feeling it may be the former. Tell me SO, how do you solve this?
My old and very yellowed copy of the first edition has a note attached to Exercise 3.2.3: "This question needs # (length), which appears only later". The moral of the story is to be more careful when setting exercises. I am currently finishing a third edition, which contains answers to every question.
By the way, did you answer Exercise 1.2.1, which asks you to write down all the ways that square (square (3 + 7)) can be reduced to normal form? It turns out that there are 547 ways!
I think you may be assuming too many restrictions - taking the length of the filtered list seems like the blindingly obvious solution to me.
A couple of alternatives, though both involve using some other function that you say wasn't introduced:
sum [1 | x <- xs, x < 0]
maximum (0:[index | (index, ()) <- zip [1..] [() | x <- xs, x < 0]])
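For comparison, both tricks translate directly into Python comprehensions: counting by summing ones, and recovering a count as the maximum of a 1-based position list (illustrative only, with a sample list of my choosing):

```python
xs = [3, -1, 4, -1, -5]

# analogue of: sum [1 | x <- xs, x < 0]
count = sum(1 for x in xs if x < 0)
print(count)  # 3

# analogue of: maximum (0 : 1-based positions of the filtered list)
count2 = max([0] + [i for i, _ in enumerate((x for x in xs if x < 0), start=1)])
print(count2)  # 3
```

The 0 prepended in the second version plays the same role as in the Haskell answer: it keeps maximum well-defined when the list contains no negatives.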
I have a std::vector of double values. Now I need to know if two succeeding elements are within a certain distance in order to process them. I have sorted the vector previously with std::sort.
I have thought about a solution to determine these elements with std::find_if and a lambda expression (c++11), like this:
std::vector<std::vector<double>::iterator> foundElements;
std::vector<double>::iterator foundElement = gradients.begin(); // initialize before the first comparison
while (foundElement != gradients.end())
{
    foundElement = std::find_if(gradients.begin(), gradients.end(),
        [](double grad) -> bool {
            return ...;
        });
    foundElements.push_back(foundElement);
}
But what should the predicate actually return? The plan is that I use the vector of iterators to later modify the vector.
Am I on the right track with this approach, or is it too complicated/impossible? What are other, probably more practical, solutions?
EDIT: I think I will go after the std::adjacent_find function, as proposed by one answerer.
Read about std::adjacent_find.
Can you enhance the grammar of your question?
"I have a std::vector with different double values. Now I need to know if two succeeding ones (I have sorted the vector previously with std::sort) are within a certain distance to process them)."
Do you imply that each element of a vector of type double is a unique value? If that's the case, can it be reasonably inferred that your goal is to find the distance between each of these elements?