Using a spreadsheet to count consecutive cells

I want to use a Google spreadsheet to find how many times five consecutive cells in a row have a value greater than a given value, but one cell can't be part of two sets of consecutive cells. For example, I want to count how many times a particular item was bought for five consecutive days in a month, but if it was bought for 7 days at a stretch that only counts once, whereas if the run is a multiple of five it counts once per multiple of five.
For example: if cells 1-5 have a value greater than the given value it should give me a count of 1, and if cells 1-9 are all greater than the given value it should still give me a count of 1, but if cells 1-10 are all greater than the given value it should give me a count of 2. I hope this is clear.
I want to write this code in Google Drive using a custom function; here is my attempt at the logic in C.
int x;              // no. of rows
int y;              // no. of columns
int arr[x][y];      // array holding the numbers (x and y must be set before this point)
int count[x];       // number of complete groups of 5 found in each row
int i, j, k;        // loop counters
for (i = 0; i < x; i++)     // set count to 0 for all rows
    count[i] = 0;
for (i = 0; i < x; i++)
{
    for (j = 0; j < y; j++)
    {
        for (k = 1; k <= 5 && j < y; k++, j++)
        {
            if (!(arr[i][j] > 0))   // run broken before reaching 5: abandon this group
            {
                break;
            }
            else if (k == 5)        // five consecutive cells above the threshold
            {
                count[i]++;
                j--;                // compensate for the extra increment of j by the outer loop
            }
        }
    }
}
// display the count array now to see the result

You can do this without writing code. That's kinda the purpose of a spreadsheet.
You have one column, say column A, with the values.
In the next column, start a counter that increments on each row where the value in the first column is >= your preset value, and resets to zero otherwise. The formula would be something like this (for cell B2):
=IF(A2>=$E$1,B1+1,0)
In the next column, calculate the multiples of 5. For cell C2:
=IF(AND(B2>0,MOD(B2,5)=0),C1+1,C1)
Copy those cells down to the bottom of the list in column A, and the last value in column C will be the count of complete runs of 5 consecutive values that met or exceeded the threshold in cell $E$1.
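As a quick worked illustration (hypothetical numbers, assuming B1 and C1 are left blank and the threshold in E1 is 10): the values 12, 15, 11, 13, 14, 9 in A2:A7 produce 1, 2, 3, 4, 5, 0 in column B and 0, 0, 0, 0, 1, 1 in column C, so the final count is 1.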

Another way using native Sheets functions:
=ArrayFormula((ROWS(A:A)-LEN(REGEXREPLACE(CONCATENATE(LEFT(A:A>=1)),"TTTTT","")))/5)
and using a custom function:
function countGroups(range, comparisonValue, groupSize) {
  var v = comparisonValue || 1; // default to comparing to 1 if not specified
  var n = groupSize || 5;       // default to groups of 5 if not specified
  var count = 0, counter = 0;
  for (var i = 0, length = range.length; i < length; i++) {
    if (range[i][0] >= v) {
      counter++;
      if (counter == n) {
        counter = 0;
        count++;
      }
    } else {
      counter = 0;
    }
  }
  return count;
}
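To use it, add the function in the script editor and call it from a cell like any built-in formula; for example, if the daily values were in A2:A32 and the threshold were 10 (both hypothetical here), the formula would be =countGroups(A2:A32, 10, 5).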

Related

Finding the minimum set of coins that make a given value

I've been trying to figure out if there would be a way to get the optimal minimum set of coins that would be used to make the change.
The greedy algorithm approach has an issue: for example, with the set of coins {1, 5, 6, 9} and a target value of 11, greedy gives {9, 1, 1}, whereas the optimal solution is {5, 6}.
From reading through this site I've found that this method can give us the total minimum number of coins needed. Would there be a way to get the set of coins from that as well?
I'm assuming you already know the Dynamic Programming method to find only the minimum number of coins needed. Let's say that you want to find the minimum number of coins to create a total value K. Then, your code could be
const int INF = 1000000000; // effectively infinity: larger than any possible coin count
vector<int> min_coins(K + 1);
min_coins[0] = 0; // base case
for (int k = 1; k <= K; ++k) {
    min_coins[k] = INF;
    for (int c : coins) { // coins[] contains all values of coins
        if (k - c >= 0) {
            min_coins[k] = min(min_coins[k], min_coins[k - c] + 1);
        }
    }
}
Answer to your question: In order to find the actual set of coins that is minimal in size, we can simply keep another array last_coin[] where last_coin[k] is equal to the coin that was last added to the optimal set of coins for a sum of k. To illustrate this:
vector<int> min_coins(K + 1), last_coin(K + 1);
min_coins[0] = 0; // base case
for (int k = 1; k <= K; ++k) {
    min_coins[k] = INF;
    for (int c : coins) {
        if (k - c >= 0) {
            if (min_coins[k - c] + 1 < min_coins[k]) {
                min_coins[k] = min_coins[k - c] + 1;
                last_coin[k] = c; // !!!
            }
        }
    }
}
How does this let you find the set of coins? Let's say we wanted to find the best set of coins that sum to K. Then, we know that last_coin[K] holds one of the coins in the set, so we can add last_coin[K] to the set. After that, we subtract last_coin[K] from K and repeat until K = 0. Clearly, this will construct a (not necessarily the) min-size set of coins that sums to K.
Possible implementation:
vector<int> chosen; // the reconstructed min-size set of coins
int value_left = K;
while (value_left > 0) {
    chosen.push_back(last_coin[value_left]); // add last_coin[value_left] to the set
    value_left -= last_coin[value_left];
}
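Putting the two snippets together into a self-contained sketch (using the question's example of coins {1, 5, 6, 9} and K = 11; an illustration only, not part of the original answer):
#include <iostream>
#include <vector>
using namespace std;
int main() {
    const int INF = 1000000000;
    vector<int> coins = {1, 5, 6, 9}; // example coin set from the question
    int K = 11;                       // target value
    vector<int> min_coins(K + 1, INF), last_coin(K + 1, -1);
    min_coins[0] = 0; // base case: zero coins make the sum 0
    for (int k = 1; k <= K; ++k) {
        for (int c : coins) {
            if (k - c >= 0 && min_coins[k - c] + 1 < min_coins[k]) {
                min_coins[k] = min_coins[k - c] + 1;
                last_coin[k] = c;
            }
        }
    }
    // walk back through last_coin[] to recover the actual set
    vector<int> chosen;
    for (int v = K; v > 0; v -= last_coin[v])
        chosen.push_back(last_coin[v]);
    cout << "minimum coins: " << min_coins[K] << "\n"; // prints 2
    for (int c : chosen) cout << c << " ";             // prints the coins 5 and 6
    cout << "\n";
    return 0;
}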

Portable vector shift/permutation in OpenCL?

I'm trying to write a trimmed mean kernel that takes as input a set of frames (~100). I'm thinking of using an insertion sort (of size ~8). This means that I'll need to read one float/uint/ushort at a time from the input images and compare it against an 8-wide vector, shifting the elements up and inserting the new value at the correct spot (if necessary), with the largest value added to the mean.
I'm having difficulties finding a portable way of shifting the elements in the vector and inserting the new one at the correct spot. I know that AMD GPUs have ds_permute for example, but those are not portable, and I can't figure out a clever way of using arithmetic and relational operators to do it (since those operate only on their lane and AFAIK unaligned vector accesses are UB in OpenCL).
If you only have 8 items in your list then you could add some indirection and keep an index table, uchar[8]. You assign the pre-sorted elements the values 0-7. As you perform the sort you don't rearrange the items themselves; instead you insert their indices into the table.
To get the speedup you then store each index using 4 bits so that all 8 fit into a 32-bit word. Honestly, I don't think this will be faster in your case though.
float elements[8];
uint index_table = 0;   // 8 indices, 4 bits each; the lowest nibble is the smallest element
uint sorted_size = 0;
// insert elements[i] into the sorted index table
void insert(uint i)
{
    uint temp = index_table;
    for (uint j = 0; j < sorted_size; ++j)
    {
        if (elements[i] < elements[temp & 0xf])
        {
            // insert i before position j
            temp = (temp << 4) | i;
            index_table = (index_table & ((1u << (4 * j)) - 1)) | (temp << (4 * j));
            return;
        }
        temp >>= 4;
    }
    // insert at the end
    index_table |= i << (4 * sorted_size);
}
void insertion_sort()
{
    // we can skip the first iteration since the 1st element is always inserted at the start
    for (sorted_size = 1; sorted_size < 8; ++sorted_size)
    {
        insert(sorted_size);
    }
}
float ith_smallest(uint i)
{
    return elements[(index_table >> (4 * i)) & 0xf];
}
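A hypothetical usage sketch (same C-style code as the snippet above, only to show how the index table would be consumed for a trimmed mean; dropping the 2 smallest and 2 largest values is an assumption made for the example):
// fill elements[0..7] with the candidate values first, then:
float trimmed_mean_of_8()
{
    index_table = 0;
    sorted_size = 0;
    insertion_sort();            // builds the 4-bit index table over elements[0..7]
    float sum = 0.0f;
    for (uint i = 2; i < 6; ++i) // skip the 2 smallest and the 2 largest values
        sum += ith_smallest(i);
    return sum / 4.0f;
}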

Frame the solution using Dynamic programming

Given a bag with at most 100 chips, where each chip has its value written on it.
Determine the most fair division between two persons. This means that the difference between the amounts the two persons obtain should be minimized. The value of a chip varies from 1 to 1000.
Input: the number of chips m, and the value of each chip.
Output: Minimal positive difference between the amount the two persons obtain when they divide the chips from the corresponding bag.
I am finding it difficult to form a DP solution for it. Please help me.
Initially I tried a non-DP solution; I hadn't thought of solving it using DP. I simply sorted the value array, assigned the largest value to one of the persons, and then incrementally assigned each remaining value to whichever of the two kept the difference smallest. But that solution didn't actually work.
I am posting my solution here:
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
using namespace std;

bool myfunction(int i, int j)
{
    return i > j; // strict "greater than" for a descending sort (>= is not a valid comparator)
}

int main()
{
    int T, m, sum1, sum2, temp_sum1, temp_sum2, i;
    cin >> T;
    while (T--)
    {
        cin >> m;
        sum1 = 0; sum2 = 0; temp_sum1 = 0; temp_sum2 = 0;
        vector<int> arr(m);
        for (i = 0; i < m; i++)
        {
            cin >> arr[i];
        }
        if (m == 1)
        {
            if (arr[0] % 2 == 0)
                cout << 0 << endl;
            else
                cout << 1 << endl;
        }
        else
        {
            sort(arr.begin(), arr.end(), myfunction);
            // vector<int> s1;
            // vector<int> s2;
            for (i = 0; i < m; i++)
            {
                temp_sum1 = sum1 + arr[i];
                temp_sum2 = sum2 + arr[i];
                if (abs(temp_sum1 - sum2) <= abs(temp_sum2 - sum1))
                {
                    sum1 = sum1 + arr[i];
                }
                else
                {
                    sum2 = sum2 + arr[i];
                }
                temp_sum1 = 0;
                temp_sum2 = 0;
            }
            cout << abs(sum1 - sum2) << endl;
        }
    }
    return 0;
}
What I understand from your question is that you want to divide the chips between two persons so as to minimize the difference between the sums of the numbers written on them.
If that understanding is correct, then you could potentially follow the approach below to arrive at a solution.
Sort the values array, i.e. int values[100].
Add elements from both ends of the array in a for loop, i.e. for (i = 0, j = values.length - 1; i < j; i++, j--).
The sum from odd-numbered iterations belongs to one person and the sum from even-numbered iterations to the other.
Run the loop while i < j.
Now, the difference between the two sums obtained from the odd and even iterations should be small, because the array was sorted earlier.
If my understanding of the question is correct, then this solution should resolve your problem.
Reflect as appropriate.
Thanks
Ravindra
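For reference (this is not part of either answer above, just a sketch of the standard DP framing the question asks about): treat it as a subset-sum problem, mark every sum that some subset of the chips can reach, and pick the reachable sum closest to half the total. With at most 100 chips of value at most 1000, the table has at most 100,001 entries.
#include <iostream>
#include <vector>
#include <algorithm>
#include <cstdlib>
using namespace std;
int main() {
    int m;
    cin >> m;
    vector<int> value(m);
    int total = 0;
    for (int i = 0; i < m; i++) {
        cin >> value[i];
        total += value[i];
    }
    // reachable[s] is true if some subset of the chips sums to exactly s
    vector<bool> reachable(total + 1, false);
    reachable[0] = true;
    for (int i = 0; i < m; i++)
        for (int s = total; s >= value[i]; s--)  // go downwards so each chip is used at most once
            if (reachable[s - value[i]])
                reachable[s] = true;
    // one person gets sum s, the other gets total - s; minimize |total - 2*s|
    int best = total;
    for (int s = 0; s <= total; s++)
        if (reachable[s])
            best = min(best, abs(total - 2 * s));
    cout << best << endl;
    return 0;
}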

Divide and Conquer Recursion

I am just trying to understand how the recursion works in this example, and would appreciate it if somebody could break this down for me. I have the following algorithm which basically returns the maximum element in an array:
int MaximumElement(int array[], int index, int n)
{
    int maxval1, maxval2;
    if (n == 1) return array[index];
    maxval1 = MaximumElement(array, index, n/2);
    maxval2 = MaximumElement(array, index + (n/2), n - (n/2));
    if (maxval1 > maxval2)
        return maxval1;
    else
        return maxval2;
}
I am not able to understand how the recursive calls work here. Does the first recursive call always finish executing before the second call is made? I would really appreciate it if someone could explain this to me. Many thanks!
Code comments embedded:
// the names index and n are misleading, it would be better if we named them:
// startIndex and rangeToCheck
int MaximumElement(int array[], int startIndex, int rangeToCheck)
{
    int maxval1, maxval2;
    // when the range to check is only one cell - return it as the maximum
    // that's the base case of the recursion
    if (rangeToCheck == 1) return array[startIndex];
    // "divide" by checking the range between the index and the first "half" of the range
    System.out.println("index = " + startIndex + "; rangeToCheck/2 = " + rangeToCheck/2);
    maxval1 = MaximumElement(array, startIndex, rangeToCheck/2);
    // check the second "half" of the range
    System.out.println("index = " + startIndex + "; rangeToCheck-(rangeToCheck/2 = " + (rangeToCheck - (rangeToCheck/2)));
    maxval2 = MaximumElement(array, startIndex + (rangeToCheck/2), rangeToCheck - (rangeToCheck/2));
    // and now "conquer" - compare the 2 "local maximums" that we got from the last step
    // and return the bigger one
    if (maxval1 > maxval2)
        return maxval1;
    else
        return maxval2;
}
Example of usage:
int[] arr = {5,3,4,8,7,2};
int big = MaximumElement(arr,0,arr.length-1);
System.out.println("big = " + big);
OUTPUT:
index = 0; rangeToCheck/2 = 2
index = 0; rangeToCheck/2 = 1
index = 0; rangeToCheck-(rangeToCheck/2 = 1
index = 0; rangeToCheck-(rangeToCheck/2 = 3
index = 2; rangeToCheck/2 = 1
index = 2; rangeToCheck-(rangeToCheck/2 = 2
index = 3; rangeToCheck/2 = 1
index = 3; rangeToCheck-(rangeToCheck/2 = 1
big = 8
What is happening here is that both recursive calls are being made, one after another. The first one searches half the array and returns the max, the second searches the other half and returns the max. Then the two maxes are compared and the bigger one is returned.
Yes, what you have guessed is right. Of the two recursive calls MaximumElement(array, index, n/2) and MaximumElement(array, index+(n/2), n-(n/2)), the first call is repeatedly carried out until a call is made with a single element of the array. Then the two elements are compared and the larger is returned, and this comparison process continues up the call chain until the overall largest element is returned.

Handling large groups of numbers

Project Euler problem 14:
The following iterative sequence is defined for the set of positive integers:
n → n/2 (n is even)
n → 3n + 1 (n is odd)
Using the rule above and starting with 13, we generate the following sequence: 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1.
Which starting number, under one million, produces the longest chain?
My first instinct is to create a function to calculate the chains, and run it with every number between 1 and 1 million. Obviously, that takes a long time. Way longer than solving this should take, according to Project Euler's "About" page. I've found several problems on Project Euler that involve large groups of numbers that a program running for hours didn't finish. Clearly, I'm doing something wrong.
How can I handle large groups of numbers quickly?
What am I missing here?
Have a read about memoization. The key insight is that if you've got a sequence starting at A that has length 1001, and you later process a sequence that reaches A, you don't have to repeat all that work again.
This is the code in Mathematica, using memoization and recursion. Just four lines :)
f[x_] := f[x] = If[x == 1, 1, 1 + f[If[EvenQ[x], x/2, (3 x + 1)]]];
Block[{$RecursionLimit = 1000, a = 0, j},
  Do[If[a < f[i], a = f[i]; j = i], {i, Reverse@Range@10^6}];
  Print@a; Print[j];
]
Output: chain length 525, and the number is ... ohhhh ... font too small! :)
BTW, here you can see a plot of the frequency for each chain length
Starting with 1,000,000, generate the chain. Keep track of each number that was generated in the chain, as you know for sure that its chain is shorter than the chain for the starting number. Once you reach 1, store the starting number along with its chain length. Take the next biggest number that has not been generated before, and repeat the process.
This will give you the list of numbers and chain length. Take the greatest chain length, and that's your answer.
I'll make some code to clarify.
public static long nextInChain(long n) {
    if (n == 1) return 1;
    if (n % 2 == 0) {
        return n / 2;
    } else {
        return (3 * n) + 1;
    }
}

public static void main(String[] args) {
    long iniTime = System.currentTimeMillis();
    HashSet<Long> numbers = new HashSet<Long>();
    HashMap<Long, Long> lengths = new HashMap<Long, Long>();
    long currentTry = 1000000L;
    int i = 0;
    do {
        doTry(currentTry, numbers, lengths);
        currentTry = findNext(currentTry, numbers);
        i++;
    } while (currentTry != 0);
    Set<Long> longs = lengths.keySet();
    long max = 0;
    long key = 0;
    for (Long aLong : longs) {
        if (max < lengths.get(aLong)) {
            key = aLong;
            max = lengths.get(aLong);
        }
    }
    System.out.println("number = " + key);
    System.out.println("chain length = " + max);
    System.out.println("Elapsed = " + ((System.currentTimeMillis() - iniTime) / 1000));
}

private static long findNext(long currentTry, HashSet<Long> numbers) {
    for (currentTry = currentTry - 1; currentTry >= 0; currentTry--) {
        if (!numbers.contains(currentTry)) return currentTry;
    }
    return 0;
}

private static void doTry(Long tryNumber, HashSet<Long> numbers, HashMap<Long, Long> lengths) {
    long i = 1;
    long n = tryNumber;
    do {
        numbers.add(n);
        n = nextInChain(n);
        i++;
    } while (n != 1);
    lengths.put(tryNumber, i);
}
Suppose you have a function CalcDistance(i) that calculates the "distance" to 1. For instance, CalcDistance(1) == 0 and CalcDistance(13) == 9. Here is a naive recursive implementation of this function (in C#):
public static int CalcDistance(long i)
{
    if (i == 1)
        return 0;
    return (i % 2 == 0) ? CalcDistance(i / 2) + 1 : CalcDistance(3 * i + 1) + 1;
}
The problem is that this function has to calculate the distance of many numbers over and over again. You can make it a little bit smarter (and a lot faster) by giving it a memory. For instance, let's create a static array that can store the distance for the first million numbers:
static int[] list = new int[1000000];
We prefill each value in the list with -1 to indicate that the value for that position is not yet calculated. After this, we can optimize the CalcDistance() function:
public static int CalcDistance(long i)
{
    if (i == 1)
        return 0;
    if (i >= 1000000)
        return (i % 2 == 0) ? CalcDistance(i / 2) + 1 : CalcDistance(3 * i + 1) + 1;
    if (list[i] == -1)
        list[i] = (i % 2 == 0) ? CalcDistance(i / 2) + 1 : CalcDistance(3 * i + 1) + 1;
    return list[i];
}
If i >= 1000000, then we cannot use our list, so we must always calculate the distance. If i < 1000000, then we check if the value is in the list. If not, we calculate it first and store it in the list; otherwise we just return the value from the list. With this code, it took about 120 ms to process all million numbers.
This is a very simple example of memoization. I use a simple list to store intermediate values in this example. You can use more advanced data structures like hashtables, vectors or graphs when appropriate.
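As an illustration of the hash-table variant mentioned above (a sketch only, not taken from any of the answers; written here in C++ with std::unordered_map):
#include <iostream>
#include <unordered_map>
using namespace std;
// chain length of n (counting n itself and the final 1), cached in a hash map
// so that shared tails are only ever computed once
long long chainLength(long long n, unordered_map<long long, long long>& cache) {
    if (n == 1) return 1;
    auto it = cache.find(n);
    if (it != cache.end()) return it->second;
    long long next = (n % 2 == 0) ? n / 2 : 3 * n + 1;
    long long len = 1 + chainLength(next, cache);
    cache[n] = len;
    return len;
}
int main() {
    unordered_map<long long, long long> cache;
    long long bestStart = 1, bestLen = 0;
    for (long long i = 1; i < 1000000; ++i) {
        long long len = chainLength(i, cache);
        if (len > bestLen) { bestLen = len; bestStart = i; }
    }
    cout << bestStart << " produces a chain of " << bestLen << " terms\n";
    return 0;
}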
Minimize how many levels deep your loops are, and use an efficient data structure such as IList or IDictionary, that can auto-resize itself when it needs to expand. If you use plain arrays they need to be copied to larger arrays as they expand - not nearly as efficient.
This variant doesn't use a HashMap; it just tries not to repeat the first 1,000,000 numbers. I don't use a hash map because the biggest number found is around 56 billion, and a hash map could crash.
I have already done some premature optimization: instead of / I use >>, instead of % I use &, and instead of * I use a few additions.
void Main()
{
var elements = new bool[1000000];
int longestStart = -1;
int longestRun = -1;
long biggest = 0;
for (int i = elements.Length - 1; i >= 1; i--) {
if (elements[i]) {
continue;
}
elements[i] = true;
int currentStart = i;
int currentRun = 1;
long current = i;
while (current != 1) {
if (current > biggest) {
biggest = current;
}
if ((current & 1) == 0) {
current = current >> 1;
} else {
current = current + current + current + 1;
}
currentRun++;
if (current < elements.Length) {
elements[current] = true;
}
}
if (currentRun > longestRun) {
longestStart = i;
longestRun = currentRun;
}
}
Console.WriteLine("Longest Start: {0}, Run {1}", longestStart, longestRun);
Console.WriteLine("Biggest number: {0}", biggest);
}
