Split screen based on number of items and screen ratio - math

I have to split the screen as equally as possible, given a dynamic number of items.
So I need to find the number of columns and rows based on the screen size/ratio.
The size of each item is not important, since I will calculate the sizes in % based on the number of items per column and row.
If I have 5 items, then (depending on the screen ratio) I will probably have 3 columns in the 1st row and 2 columns in the 2nd row. That's OK.

First you have to decide what you mean by "divide the screen equally".
It probably means that there is a preferred x-to-y ratio for each item.
You could use the following algorithm, which favors getting close to the desired x-to-y ratio over minimizing the number of empty spaces.
// Set this to the desired x-to-y ratio for each item.
const preferred_ratio = 1.2;

function calc_cols_rows(screen, num_items) {
    const desired_aspect = (screen.width / screen.height) / preferred_ratio;
    // Calculate the number of rows that gets closest to the desired aspect (at least 1):
    const num_rows = Math.max(1, Math.round(Math.sqrt(num_items / desired_aspect)));
    // Calculate the number of columns required to fit all items:
    const num_cols = Math.ceil(num_items / num_rows);
    return { rows: num_rows, cols: num_cols };
}
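As a check, here is the same computation in Python (the 1920×1080 screen and the 5-item case are just example inputs):

```python
import math

def calc_cols_rows(screen_w, screen_h, num_items, preferred_ratio=1.2):
    # How much "wider" the screen is than the preferred item shape
    desired_aspect = (screen_w / screen_h) / preferred_ratio
    # Number of rows closest to the desired aspect, at least 1
    num_rows = max(1, round(math.sqrt(num_items / desired_aspect)))
    # Columns required to fit all items
    num_cols = math.ceil(num_items / num_rows)
    return num_rows, num_cols

print(calc_cols_rows(1920, 1080, 5))  # -> (2, 3): rows of 3 and 2 items
```

This reproduces the example from the question: 5 items on a 16:9 screen give 2 rows with up to 3 columns each.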

In R: Avoid duplicates in selection from many rows

Summary: I have an array of 10 rows and 4 columns filled with numbers. I select one number from each row and want to avoid duplicates in the selection.
To elaborate:
I have a grid of 100*100 cells. In that grid are 10 cells that contain a "person". In an iterative process I want to make the persons "walk around" in the grid, but I do not want two persons to end up in the same cell at the same time.
I have a vector that describes the positions of the 10 persons: it contains the cell numbers of the cells with a person. These positions count across all rows and columns (i.e. they range over 1:10000). For example, position 234 would be in the 3rd row, 34th column.
Positions<-sample(1:10000,10) #Initial positions
What I did first is make an array of the surrounding cells of each person (up, right, down, left), giving 4 positions per person:
Surroundings<-array(c(Positions+100,Positions+1,Positions-100,Positions-1),dim=c(10,4))
I then take a random direction from each of the rows in Surroundings into vector PosNew. It is this last vector in which I want to avoid duplicates.
I could repeat the random selection process of PosNew until it has no duplicates, but this could take very long. There are probably more efficient ways to do this.
For simplicity's sake, let's assume that persons do not walk off the grid and no other errors occur.
My script:
Positions <- sample(1:10000, 10)  # Initial positions
for (i in 1:50) {
  Surroundings <- array(c(Positions + 100, Positions + 1, Positions - 100, Positions - 1),
                        dim = c(10, 4))
  PosNew <- Surroundings[cbind(1:10, sample(1:4, 10, replace = TRUE))]
  Dups <- length(which(duplicated(PosNew) == TRUE))
  Positions <- PosNew
}
I am looking for a way to check for duplicates in the selected new positions and make sure that Dups is never above zero. Any suggestions are welcome, including suggestions to make the code faster/more efficient.
Added: What could I do when at some point one or more of the persons really cannot move to an empty cell, because all 4 sides are occupied? I want that person to stay in its original cell. How to code that?
Thank you so much for your time!
As this is an iterative process, where every person's move depends on the locations of the others, I don't think you can do much better than moving one person at a time and sampling the position for the next from the difference of the sets of all directions and all occupied positions (note that this adds a bit of unfairness, as the first person has the most freedom to move, so to speak).
So the code would be something like this:
Positions <- sample(1:10000, 10)  # Initial positions
for (i in 1:50) {
  Surroundings <-
    array(c(Positions + 100, Positions + 1, Positions - 100, Positions - 1),
          dim = c(10, 4))
  # BEGIN NEW CODE
  PosNew <- numeric(10)
  for (j in 1:10) {
    # PosNew[seq_len(j - 1)] is the set of already occupied positions
    available <- setdiff(Surroundings[j, ], PosNew[seq_len(j - 1)])
    if (length(available) != 0)
      # index via sample.int: sample(available, 1) misbehaves when
      # available has length 1 (it would sample from 1:available)
      PosNew[j] <- available[sample.int(length(available), 1)]
    else
      PosNew[j] <- Positions[j]  # stay where you are
  }
  # END NEW CODE
  Dups <- sum(duplicated(PosNew))  # shorter version - sum logical values to get a count
  Positions <- PosNew
}
Hope this helps!

Loop over data frame comparing pairs

I have created the following dataframe:
set.seed(42)
df1 = data.frame(pair = rep(c(1:26),2), size = rnorm(52,5.4,1.89))
It represents random pairs of individuals of a certain size, as assigned by the 'pair' column.
The random distribution (5.4, 1.89) is based on observed data from the group that I sampled in my study (N=26 pairs).
I now want to ask a very basic question that I am unable to code my way to:
Imagine a horizontal line at the mean (5.4), severing the population in two:
What proportion of individuals are paired with another individual from the same side of the line? i.e. is there a tendency for small to be with small and big to be with big?
I want to compare the proportion I observed with the proportion generated from 'asking' the above question a lot of times (e.g. 1000 repetitions).
In my study, 18 of 26 individuals were together with a similar-sized partner, so I want to ask: 'out of 1000 repetitions, how many times was the proportion of similar individuals equal to or greater than 18/26?' This will be my 'p-value'.
I have no clue how to code this, but in my head it goes like this:
For each pair (i.e. the two rows whose 'pair' values are equal), do this:
is the larger individual equal to or bigger than 5.4? is the smaller individual equal to or bigger than 5.4?
if so, return a "yes"
OR
is the larger individual equal to or smaller than 5.4? is the smaller individual equal to or smaller than 5.4?
if so, return a "yes"
if neither is true, return a "no"
Output the proportion of yes and no, store it in a data.frame, and repeat this process 1000 times, adding all the outputs to the mentioned data frame:
run1 24/26
run2 4/26
...
run999 13/26
I really hope someone can show me the start to this, or the relevant code/arguments/structure.
Is this what you want?
# Create empty output, for 10 iterations
same_group_list = numeric(10)
diff_group_list = numeric(10)
for (j in 1:10) {  # For 10 iterations
  df1 = data.frame(pair = rep(c(1:26), 2), size = rnorm(52, 5.4, 1.89))
  # Sort by 'pair'
  df1 = df1[with(df1, order(pair)), ]
  # Assign a group based on whether 'size' is above or below mean(size)
  df1$Group = NA
  for (i in 1:nrow(df1)) {
    if (df1$size[i] <= mean(df1$size)) {  # Use 5.4 explicitly instead of mean(df1$size) if you want
      df1$Group[i] = -1
    } else {
      df1$Group[i] = 1
    }
  }
  output2 = tapply(df1$Group, df1$pair, mean)  # Carry out groupwise mean
  diff_group_list[j] = sum(output2 == 0)  # A mean of 0 means the two members are in different groups
  same_group_list[j] = length(output2) - diff_group_list[j]  # Everything else is the same group
}
output = data.frame("Same group out of 26" = same_group_list,
                    "Different group out of 26" = diff_group_list)
I created a data frame with the two members of each pair side by side and then checked, for each pair, whether both sizes fall on the same side of 5.4. Those pairs were summed, and then everything was divided by 26.
The data frame proportions shows the proportion for each run.
proportions <- data.frame(run = 1:1000, prop = rep(NA, 1000))
for (i in 1:1000) {
  df = data.frame(pair = c(1:26),
                  size1 = rnorm(26, 5.4, 1.89),
                  size2 = rnorm(26, 5.4, 1.89))
  # Count pairs on the same side of 5.4 (both above or both at/below)
  samePairs <- sum((df$size1 > 5.4 & df$size2 > 5.4) | (df$size1 <= 5.4 & df$size2 <= 5.4))
  proportions[i, 2] = samePairs / 26
}
head(proportions)
head(proportions)
I did not keep the proportion in the string format "18/26" because, as strings, any later counting of runs that satisfy some condition would have to be done visually, one by one. With a numeric column, for example, if you want to know how many of them are greater than or equal to 18/26:
sum(proportions$prop >= (18/26))
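For comparison, the whole simulation can also be sketched in plain Python (the 26 pairs, the 5.4/1.89 distribution, and the 18/26 threshold come from the question; the seed is arbitrary):

```python
import random

def simulate(n_pairs=26, mean=5.4, sd=1.89, reps=1000, seed=42):
    rng = random.Random(seed)
    props = []
    for _ in range(reps):
        # A pair is "similar" if both members fall on the same side of the mean
        same = sum(
            (rng.gauss(mean, sd) > mean) == (rng.gauss(mean, sd) > mean)
            for _ in range(n_pairs)
        )
        props.append(same / n_pairs)
    return props

props = simulate()
# Empirical "p-value": share of runs at least as extreme as the observed 18/26
p_value = sum(p >= 18 / 26 for p in props) / len(props)
```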

How would i find what column and row the mouse is in?

I need to find which column and row the mouse location is in. To simplify this question, let's only find the column. I will write in pseudocode.
I have a map (a grid of rows and columns, made up of square cells) with a pixel width. I have a cell size which is each column's pixel width,
e.g. map.width / cellSize = map.numberOfColumns.
From this we can work out which column the mouse is on.
E.g. if ( mouse.X > cellSize ) { col is definitely > 1 } (I have not used zero indexing in this example).
So if anyone here loves maths, I would very much appreciate some help. Thanks.
Assuming square cells, 1-based row/col indexing, and truncating integer division:
col = mouse.X / cellSize + 1;
row = mouse.Y / cellSize + 1;
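For instance, in Python (the 32-pixel cell size is just an example):

```python
def cell_at(mouse_x, mouse_y, cell_size):
    # Truncating integer division maps a pixel coordinate to a 0-based
    # cell index; add 1 for the 1-based numbering used in the question.
    col = mouse_x // cell_size + 1
    row = mouse_y // cell_size + 1
    return col, row

print(cell_at(70, 10, 32))  # -> (3, 1): x = 70 falls in the third 32-px column
```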

How can I modify this (simple) equation to produce my desired result?

I have a database of 817 items, each given a "rank" of 1 to 817 (the smaller the number, the "better" the item). This rank is based on many factors that indicate quality.
Now I need to assign a "value" to these items, with the item at rank 1 valued the most and the value decreasing (non-linearly) with rank.
The easiest first attempt was to simply choose an arbitrary base (100,000) and divide by the rank:
$value = 100000 / $rank;
/**
* Rank : Value
* 1 : 100,000
* 2 : 50,000
* 3 : 33,333
* etc.
*/
This decays far too steeply, as shown by the red line in this image:
However, I wish to value these items in a manner that looks more like the blue line above. How can I change my formula to achieve this?
Try 1/sqrt(x) (i.e., pow(x, -1/2)) for starters. If that's still not slow enough, try a smaller fractional power.
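To see the difference, a quick Python comparison (the 100,000 base is taken from the question):

```python
base = 100_000

def value_reciprocal(rank):
    return base / rank         # the original formula: halves by rank 2

def value_sqrt(rank):
    return base / rank ** 0.5  # 1/sqrt(rank): a much gentler decline

# Print both curves at a few ranks to compare the decay
for rank in (1, 2, 10, 100, 817):
    print(rank, round(value_reciprocal(rank)), round(value_sqrt(rank)))
```

At rank 2 the reciprocal formula already drops to 50,000, while the square-root version is still above 70,000.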
Why don't you go with linear?
value = n - rank
where n is the count of your items, i.e. 817.
I haven't tried it, but you could use an exponent with base 2 instead of dividing a base value.
UPDATE
value = pow(2, n - rank)

How do I normalize an image?

If I have a series of pixels, which range from say -500 to +1000, how would I normalize all the pixels on the same gradient so that they fall between a specific range, say 0 and 255?
Some pseudocode like this would scale values linearly from one range to another:
oldmin = -500
oldmax = 1000
oldrange = oldmax - oldmin
newmin = 0
newmax = 255
newrange = newmax - newmin
foreach (oldvalue)
{
    // where in the old scale this value sits (0...1)
    scale = (oldvalue - oldmin) / oldrange
    // place this scale in the new range
    newvalue = (newrange * scale) + newmin
}
Your question isn't very clear, so I'm going to assume that you're doing some kind of image processing, the results you get are values from -500 to 1000, and now you need to save the color to a file where every value must be between 0 and 255.
How you do this really depends on the application, what the results actually mean, and what exactly you want to do. The two main options are:
Clamp the values - anything under 0 you replace by 0, and anything above 255 you replace by 255. You'll want to do this, for instance, if your image processing is some kind of interpolation which really shouldn't reach these values.
Linear normalization - linearly map your minimal value to 0 and your maximal value to 255. Of course, you'll first need to find the minimum and maximum. You do:
v = (origv - min)/(max - min) * 255.0
What this does is first map the values to [0,1] and then stretch them back to [0,255].
A third option is to mix and match between these two. Your application might demand that you treat negative values as unneeded and clamp them to 0, while linearly mapping the positive values to [0,255].
First make it all positive. If the minimum is -500 then add 500 to all values. Then the minimum would be 0, and the maximum would be 1500.
Then it is just a rule of three and you have it:
[value in 0,255] = 255*(Pixel/1500)
Some pseudo code may help:
for pixel_value in pixel_values:  # each between -500 and 1000
    position = (pixel_value + 500) / 1500.0  # gives you a 0 to 1 decimal
    new_value = int(position * 255)  # or round() instead of truncating
That's Python code, by the way.
Create two variables, MinInputValue and MaxInputValue. Initialize MinInputValue to a very large positive number (higher than the largest pixel value you ever expect to see) and MaxInputValue to a very large negative number (lower than the lowest pixel value you ever expect to see).
Loop over every pixel in the image. For each pixel, if the pixel value PixelValue is lower than MinInputValue, set MinInputValue to PixelValue. If the pixel value is higher than MaxInputValue, set MaxInputValue to PixelValue.
Create a new variable, InputValueRange, and set it to MaxInputValue - MinInputValue.
Once this is done, loop over every pixel in the image again. For each pixel PixelValue, calculate the output pixel value as 255.0 * (PixelValue - MinInputValue) / InputValueRange. You can assign this new value back to the original PixelValue, or you can set the corresponding pixel in an output image of the same size.
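The two passes described above can be sketched in Python (a plain list stands in for the image; the sample values are arbitrary):

```python
def normalize(pixels, new_max=255.0):
    # First pass: find the input range
    min_in = min(pixels)
    input_range = max(pixels) - min_in
    # Second pass: map each pixel linearly onto [0, new_max]
    return [new_max * (p - min_in) / input_range for p in pixels]

print(normalize([-500, 250, 1000]))  # -> [0.0, 127.5, 255.0]
```

Since Python's `min`/`max` scan the data for you, the explicit sentinel initialization from the description isn't needed here, but the mapping formula is the same.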
