Suppose I have this data frame:
Climb no.  No. of Climbers  Total people reaching Top  A is in climb
1          7                2                          Yes
2          5                3                          No
3          10               1                          Yes
How can I find, using R, the probability that one particular person A will reach the top of the mountain for each climb?
Thanks for any direction
Unless I am missing something, the probability that any one person will reach the top is simply the number who reached the top divided by the total number of climbers (assuming that each person is equally likely to reach the top).
dat <- data.frame(Climb = c(1, 2, 3), Climbers = c(7, 5, 10), Top = c(2, 3, 1))
dat$Top / dat$Climbers
[1] 0.2857143 0.6000000 0.1000000
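If you only want the climbs that A actually took part in, you can carry the Yes/No column from your table along and subset on it (the column name A_in is my own):

dat$A_in <- c(TRUE, FALSE, TRUE)   # the "A is in climb" column from the question
dat$Top[dat$A_in] / dat$Climbers[dat$A_in]
[1] 0.2857143 0.1000000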
I've got a dataset with monthly metrics for different stores. Each store has three monthly metrics (total sales, customer count and transaction count), and my task is to find, over a year, the store that most closely matches a specific test store (e.g. Store 77).
In other words, over the year the test store and the most similar store need to have similar performance. My question is how to go about finding that most similar store. I've currently used Euclidean distance, but would like to know if there's a better way to go about it.
Thanks in advance
STORE  month   Metric 1
22     Jan-18  10
23     Jan-18  20
Is correlation a better way to measure similarity in this case than distance? I'm fairly new to data analysis, so if there are any resources where I can learn more about this stuff it would be much appreciated!
In general, deciding the similarity of items is domain-specific, i.e. it depends on the problem you are trying to solve. Therefore, there is no one-size-fits-all solution. Nevertheless, there is a basic procedure you can follow when trying to solve this kind of problem.
Case 1 - only distance matters:
If you want to find the most similar items (stores in our case) using a distance measure, it's a good tactic to first scale your features in some way.
Example (min-max normalization):
Store  Month   Total sales  Total sales (normalized)
1      Jan-18  50           0.64
2      Jan-18  40           0.45
3      Jan-18  70           1.00
4      Jan-18  15           0.00
After you apply normalization to all attributes, you can calculate Euclidean distance or any other metric that you think fits your data.
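For example, a minimal R sketch of min-max scaling on the numbers from the table above (the stores data frame and the minmax helper are my own illustration):

stores <- data.frame(Store = 1:4, Sales = c(50, 40, 70, 15))
minmax <- function(x) (x - min(x)) / (max(x) - min(x))   # min-max normalization
stores$Sales_norm <- minmax(stores$Sales)
round(stores$Sales_norm, 2)
[1] 0.64 0.45 1.00 0.00
# with all three metrics normalized you could then call, e.g., dist() on them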
Some resources:
Similarity measures
Feature scaling
Case 2 - Trend matters:
Now, say that you want to find the similarity over the whole year. If the definition of similarity for your problem is just the stores' state at the end of the year, then distance will do the job.
But if you want to find similar trends of increase/decrease in the attributes of two stores, then distance measures conceal this information. You would have to use correlation metrics, or some other technique more sophisticated than plain distance.
Simple example:
To keep it simple, let's say we are interested in a 3-month analysis and that we use only the sales attribute (unscaled):
Store  Month   Total sales
1      Jan-18  20
1      Feb-18  20
1      Mar-18  20
2      Jan-18  5
2      Feb-18  15
2      Mar-18  40
3      Jan-18  10
3      Feb-18  30
3      Mar-18  78
At the end of March, in terms of distance Store 1 and Store 2 are identical, both having 60 total sales.
But as far as the monthly growth is concerned, Store 2 and Store 3 are our match: in February both tripled their January sales, and in March their sales grew by factors of roughly 2.67 and 2.6 respectively.
Bottom line: It really depends on what you want to quantify.
Well-known correlation metrics:
Pearson correlation coefficient
Spearman correlation coefficient
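As a quick illustration, here is the 3-month example above in R using Pearson correlation (the vector names are mine):

store1 <- c(20, 20, 20)
store2 <- c(5, 15, 40)
store3 <- c(10, 30, 78)
cor(store2, store3)  # ~0.99996: nearly identical trend
cor(store1, store2)  # NA with a warning: store1 is constant, so it has zero
                     # variance and its correlation is undefined (no trend)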
I'm hoping someone may be able to help with a problem I have, which I'm trying to solve using R.
Individuals can submit requests for items. The minimum number of requests per person is one. There is a recommended maximum of five, but people can submit more in exceptional circumstances. Each item can only be allocated to one individual.
Each item has a 'desirability'/quality score ranging from 10 (high quality) down to 0 (low quality). The idea is to allocate items, in line with requests, such that as many high-quality items as possible are allocated. It is less important that individuals have an equitable spread of requests met.
Everyone has to have at least one request met. The next priority is to bring anyone who is over the recommended limit back within it by allocating their requested items to others. After that, the priority is to look at where the item would rank in each individual's request list based on quality score, and allocate it to the person for whom it would rank highest (e.g. if it would be first in one person's list and third in another's, give it to the former).
Effectively I'd need a sorting algorithm of some kind that:
1. Identifies where an item has been requested more than once.
2. Checks all the requests of everyone who asked for that item.
3. If that request is the only one a person has made, gives it to them (if this scenario applies to more than one person, it should be flagged in some way).
4. If all requesters have made more than one request, checks whether any have made more than five; if they have, the item can be taken off them.
5. If all are within the recommended limit, sees where the request would rank (based on quality score) in each person's list, and gives it to the person in whose list it would rank highest.
6. Checks that the previous step isn't happening to anyone so many times that it leaves them without any requests, so it effectively has to process one item at a time.
Does anyone have any ideas about how to approach this? I can think of all kinds of ways I could arrange the data to make it easy to identify where this needs to happen, but not how to automate the process itself. Thanks in advance for any help.
The data (at least the bits needed for this process) looks like the below:
Item ID Person ID Item Score
1 AAG 9
1 AAK 8
2 AAAX 8
2 AN 8
2 AAAK 8
3 Z 8
3 K 8
4 AAC 7
4 AR 5
5 W 10
5 V 9
6 AAAM 7
6 AAAL 7
7 AAAAN 5
7 AAAAO 5
8 AB 9
8 D 9
9 AAAAK 6
9 AAAAC 6
10 A 3
10 AY 3
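For what it's worth, here is a rough R sketch of the loop described in the question, processing one item at a time in descending score order. The object names and the small sample data are my own illustration, and the "nobody is left without a request" rule is only flagged with a message rather than fully enforced:

req <- read.table(header = TRUE, stringsAsFactors = FALSE, text = "
ItemID PersonID Score
1 P1 9
1 P2 8
2 P2 8
2 P3 8
3 P2 7
3 P3 5
4 P3 6")

rank_for <- function(person, item, requests) {
  # where would this item rank (1 = best) in this person's outstanding list?
  mine <- requests[requests$PersonID == person, ]
  match(item, mine$ItemID[order(-mine$Score)])
}

remaining <- req
alloc <- data.frame(ItemID = integer(0), PersonID = character(0))

for (item in unique(remaining$ItemID[order(-remaining$Score)])) {
  cand  <- remaining$PersonID[remaining$ItemID == item]
  n_req <- sapply(cand, function(p) sum(remaining$PersonID == p))
  if (any(n_req == 1)) {
    # rule 1: someone whose only request this is gets the item
    if (sum(n_req == 1) > 1)
      message("Item ", item, ": several single-request people -- flagged")
    pick <- cand[n_req == 1][1]
  } else {
    # rule 2: prefer requesters within the recommended limit of five
    pool <- if (any(n_req <= 5)) cand[n_req <= 5] else cand
    # rule 3: give it to the person in whose list it would rank highest
    pick <- pool[which.min(sapply(pool, rank_for, item = item,
                                  requests = remaining))]
  }
  alloc <- rbind(alloc, data.frame(ItemID = item, PersonID = pick))
  remaining <- remaining[remaining$ItemID != item, ]  # the item is now taken
}
alloc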
The problem I have is that I need to calculate how much a point is worth based on the number of games played.
If a team plays a match it can get 3 points for a win, 1 point for a tie and 0 points for a loss.
And the problem here is the following:

Team 1: Wins: 8, Ties: 2, Losses: 3, Points: 26, Games played: 13
Team 2: Wins: 8, Ties: 3, Losses: 4, Points: 27, Games played: 15
Here you can see that Team 2 has one more point than Team 1, but Team 2 has played two more matches and has a lower win percentage than Team 1. Yet if you ranked these two by points, Team 2 would get a higher rating than Team 1.
So what should the math look like to make this fair, so that Team 1 ends up with the better score?
Just divide the points by the number of games played to get the average points per game:
Team 1: 26 / 13 = 2.0 ppg
Team 2: 27 / 15 = 1.8 ppg
Okay, first of all, thanks for the help. The solution is the following:

real points = (p / pg) * p

where p = sum(points) and pg = games played.

So for the example up top the real points are:
Team 1: (26 / 13) * 26 = 52
Team 2: (27 / 15) * 27 = 48.6
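A quick sanity check of both calculations in R (vector names are mine):

points <- c(26, 27)
games  <- c(13, 15)
points / games              # points per game: 2.0 1.8
(points / games) * points   # "real points":   52.0 48.6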
I have observed nurses during 400 episodes of care and recorded the sequence of surface contacts in each.
I categorised the surfaces into 5 groups (1:5) and calculated the probability density function of touching each one of them (PDF).
PDF=[ 0.255202629 0.186199343 0.104052574 0.201533406 0.253012048]
I then simulated 1000 sequences using:

L = max(observed_seq_length);   % length of the longest observed sequence
seq = zeros(1000, L);           % preallocate
for i = 1:1000                  % 1000 different nurses
    seq(i, :) = randsample(1:5, L, true, PDF);
end

e.g.

seq = 1 5 2 3 4 2 5 5 2 5

stairs(1:L, seq)
hold all
I'd like to compare my empirical sequences with my predicted ones. What would you suggest as the best strategy, or which property should I look at?
Regards,
EDIT: I put r as a tag as this may well fall more easily under that category due to the nature of the question rather than the matlab code.
I've implemented a simple up/down voting system on a website, and I keep track of individual votes as well as vote time and a unique user ID (hashed IP).
My question is not how to calculate the percentage or sum of the votes, but rather: what is a good algorithm for determining a good score based on votes?
I find sorting by pure vote percentage unacceptable, as is simply tallying upvotes.
Consider this example:
Image A: 4 upvotes, 1 downvotes
Image B: 5 upvotes, 4 downvotes
Image C: 1 upvote, 0 downvotes
The ideal system would put A first, maybe followed by B and then C.
In a pure percentage scenario, the order is C > A > B. (wrong)
In a pure vote count scenario, the order is B > A > C. (wrong)
I have an idea for a somewhat "hybrid" algorithm based on the system's confidence in a score, maybe something along the lines of:
// (if totalvotes > 0, else score = 0)
score = 1 - ((downvotes + 1) / (totalvotes + 1)) * sqrt(1 / totalvotes)
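Plugging the three example images into this formula (a quick R check of my own), it does give the A > B > C ordering I want:

score <- function(up, down) {
  total <- up + down
  1 - ((down + 1) / (total + 1)) * sqrt(1 / total)
}
score(4, 1)  # A: 0.851
score(5, 4)  # B: 0.833
score(1, 0)  # C: 0.500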
However, I was hoping to ask the community if there are any really well-defined algorithms already out there that I simply don't know about, before I sit around tweaking my algorithm from now until sunset.
I also have date data for each vote - however, the content of the site isn't very time-sensitive so I don't really care to sort by "what's hot" at all.
Sorting by the average of votes is not very good.
By instead balancing the proportion of positive ratings with the uncertainty that comes from a small number of observations, as explained in the article below, you achieve a much better representation of your scores.
The article explains how not to make the same mistake that many popular websites do (Amazon, Urban Dictionary, etc.).
http://evanmiller.org/how-not-to-sort-by-average-rating.html
Hope this helps!
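For reference, the lower bound of the Wilson score interval from that article looks like this in R (z = 1.96 for 95% confidence; the function name is mine):

wilson_lower <- function(up, down, z = 1.96) {
  n <- up + down
  if (n == 0) return(0)          # no votes yet -> score of 0
  phat <- up / n                 # observed fraction of upvotes
  (phat + z^2 / (2 * n) -
     z * sqrt((phat * (1 - phat) + z^2 / (4 * n)) / n)) / (1 + z^2 / n)
}
wilson_lower(4, 1)  # Image A: 0.376
wilson_lower(5, 4)  # Image B: 0.267
wilson_lower(1, 0)  # Image C: 0.207

Note that it produces exactly the A > B > C ordering asked for in the question.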
I know that doesn't answer your question, but I just spent 3 minutes for fun trying to find some formula, and... just check it :) Column A is upvotes and column B is downvotes:
=(LN((A1+1)/(A1+B1+1))+1)*LN(A1)
A (up)  B (down)  result
  5       3        0.956866995
  4       1        1.133543015
  5       4        0.787295787
  1       0        0
  6       4        0.981910844
  2       8       -0.207447157
  6       5        0.826007385
  3       3        0.483811507
  4       0        1.386294361
  5       0        1.609437912
  6       1        1.552503332
  5       2        1.146431478
100     100       -3.020151034
 10      10        0.813671022