I have two data sets: one with information about trips, and the other containing the cost per trip depending on where I'm leaving from. I need to get the total cost of each trip, which is easy to do with a merge on where I'm leaving from, but when I do this it adds 1,500 rows to my 100,000-row dataset.
Does anyone know why this is happening? The bigger dataset has 100,000 rows; the other one has about 10,000.
EDIT: This is a subset of df1
x Poste Locat V3
1 905916 Mixco 0.3
2 905818 Mixco 0.6
3 905818 Mixco 0.6
4 905338 Castellana 0.5
5 904876 Mixco 0.3
This is a subset of df2
x Vehiculo Poste
1 Camion 340592
2 Camion 262776
3 Camion 340622
4 Camion 243254
5 Camion 258505
I need to merge both datasets using "Poste", as I will get the cost based on both "Locat" (location) and "Vehiculo" (vehicle) from another dataset.
sol <- merge(sol, df[,c(5,16)], by="Poste")
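The usual cause (a guess, since only subsets are shown) is duplicate "Poste" values in the cost table: merge() returns one output row per matching pair of rows, so any Poste that appears more than once in df2 multiplies its matching rows in the larger data set. A minimal sketch with made-up values:
# Illustrative data, not the real df1/df2
big    <- data.frame(Poste = c(905916, 905818), Locat = c("Mixco", "Mixco"))
lookup <- data.frame(Poste = c(905818, 905818), Vehiculo = c("Camion", "Pickup"))

merge(big, lookup, by = "Poste")
#    Poste Locat Vehiculo
# 1 905818 Mixco   Camion
# 2 905818 Mixco   Pickup   <- the single 905818 row of 'big' now appears twice

# Counting duplicated keys in the smaller table shows whether this is the culprit
sum(duplicated(lookup$Poste))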
I have a dataset with over 400,000 cows. These cows are (unevenly) spread over 2355 herds. Some herds are present only once in the data, while one herd is present as many as 2033 times, meaning that 2033 cows belong to it. I want to delete herds from my data that occur fewer than 200 times.
Using plyr and subset, I can obtain a list of which herds occur fewer than 200 times; however, I cannot figure out how to apply this selection to the full dataset.
For example, my current data looks a little like:
cow herd
1 1
2 1
3 1
4 2
5 3
6 4
7 4
8 4
With the count() function I can obtain the following:
x freq
1 3
2 1
3 1
4 3
Say I want to delete the data belonging to herds that occur fewer than 3 times; I want my data to eventually look like this:
cow herd
1 1
2 1
3 1
6 4
7 4
8 4
I do know how to tell R to delete data herd by herd, but since, in my real dataset, over 1000 herds occur fewer than 200 times, that would mean typing every herd number into my script one by one. I am sure there is an easier and quicker way of asking R to delete data above or below a certain number of occurrences.
I hope my explanation is clear and someone can help me, thanks in advance!
Use n() together with group_by():
library(dplyr)

your_data %>%
  group_by(herd) %>%
  filter(n() >= 3)
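For instance, applied to the example data from the question (a quick sketch, assuming it is stored in a data frame called your_data), this keeps exactly the herds shown in the desired output; for the real data the threshold would simply be n() >= 200:
library(dplyr)

# Example data from the question
your_data <- data.frame(cow  = 1:8,
                        herd = c(1, 1, 1, 2, 3, 4, 4, 4))

your_data %>%
  group_by(herd) %>%
  filter(n() >= 3)
# Keeps cows 1, 2, 3 (herd 1) and 6, 7, 8 (herd 4); herds 2 and 3 are dropped.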
I have this following R dataframe:
OffspringID1 OffspringID2 Relation Replicate
1 ID24 ID1 PO 3
2 ID29 ID31 PO 3
3 ID31 ID82 PO 3
4 ID44 ID75 PO 3
5 ID1 ID24 HS 9
6 ID1 ID51 HS 9
7 ID1 ID54 HS 9
8 ID1 ID55 HS 9
9 ID1 ID83 HS 9
and so on. I would like to get the number of observations per level of the factor "Relation" for each combination of individuals (OffspringID1/OffspringID2) that is identical.
I think I could basically use a simple aggregate() call, but as you may see, I can get identical but permuted pairs across rows (e.g., row 1 and row 5 contain the same pair of individuals, just in a different order).
How can I take this into account within aggregate()? And more generally, is there a rule of thumb for detecting rows that are column permutations of one another within an R data frame?
Thank you very much!
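One possible approach (a sketch, not from the question, assuming the data frame is called df): build an order-independent pair key by sorting the two IDs within each row, then count rows per pair and per level of Relation with aggregate().
# The key is the same whichever column each ID appears in, so
# "ID1/ID24" and "ID24/ID1" collapse to one pair
df$pair <- apply(df[, c("OffspringID1", "OffspringID2")], 1,
                 function(ids) paste(sort(ids), collapse = "/"))

# Number of observations per pair and per Relation level
aggregate(Replicate ~ pair + Relation, data = df, FUN = length)
The same row-wise sorting trick is a reasonable general rule of thumb for spotting rows that are column permutations of each other: once the relevant columns are sorted within each row, duplicated() will find them.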
I will present my question in two ways: first, as a request for a solution to a specific task; and second, as a description of my overall objective (in case I am overthinking this and there is an easier solution).
1) Task Solution
Data context: each row contains four price variables (columns) representing (a) the price at which the respondent feels the product is too cheap; (b) the price that is perceived as a bargain; (c) the price that is perceived as expensive; (d) the price that is too expensive to purchase.
## mock data set
a <- c(1, 5, 3, 4, 5)
b <- c(6, 6, 5, 6, 8)
c <- c(7, 8, 8, 10, 9)
d <- c(8, 10, 9, 11, 12)
df <- as.data.frame(cbind(a, b, c, d))
## result
# a b c d
#1 1 6 7 8
#2 5 6 8 10
#3 3 5 8 9
#4 4 6 10 11
#5 5 8 9 12
Task Objective: The goal is to create a single column in a new data frame that lists all of the unique values contained in a, b, c, and d.
price
#1 1
#2 3
#3 4
#4 5
#5 6
...
#12 12
My initial thought was to use rbind() and unique()...
price <- rbind(df$a, df$b, df$c, df$d)
price <- unique(price)
...expecting that a, b, c and d would stack vertically.
[Pseudo illustration]
a[1]
a[2]
a[...]
a[n]
b[1]
b[2]
b[...]
b[n]
etc.
Instead, the "columns" are treated as rows and stacked horizontally.
V1 V2 V3 V4 V5
1 1 5 3 4 5
2 6 6 5 6 8
3 7 8 8 10 9
4 8 10 9 11 12
How may I stack a, b, c and d such that price consists of only one column ("V1") that contains all twenty responses? (The unique part I can handle separately afterwards).
2) Overall Objective: The Bigger Picture
Ultimately, I want to create a cumulative share of population for each price (too cheap, bargain, expensive, too expensive) at each price point (defined by the unique values described above). For example, what percentage of respondents felt $1 was too cheap, what percentage felt $3 or less was too cheap, etc.
The cumulative shares for bargain and expensive are later inverted to become not.bargain and not.expensive and the four vectors reside in a data frame like this:
buckets too.cheap not.bargain not.expensive too.expensive
1 0.01 to 0.50 0.000000000 1 1 0
2 0.51 to 1.00 0.000000000 1 1 0
3 1.01 to 1.50 0.000000000 1 1 0
4 1.51 to 2.00 0.000000000 1 1 0
5 2.01 to 2.50 0.001041667 1 1 0
6 2.51 to 3.00 0.001041667 1 1 0
...
from which I may plot the four cumulative-share curves. So far I have accomplished this plotting objective using defined price buckets ($0.50 ranges) and the hist() function.
However, the intersections of these lines have meanings and I want to calculate the exact price at which any of the lines cross. This is difficult when the x-axis is defined by price range buckets instead of a specific value; hence the desire to switch to exact values and the need to generate the unique price variable.
[Postscript: This analysis is based on Peter Van Westendorp's Price Sensitivity Meter (https://en.wikipedia.org/wiki/Van_Westendorp%27s_Price_Sensitivity_Meter), which has known practical limitations but is relevant in the context of my research, which will explore consumer perceptions of value under different treatments rather than define an actual real-world price. I mention this for two reasons: 1) to provide greater insight into my objective in case another approach comes to mind, and 2) to keep the thread focused on the mechanics rather than on whether the Price Sensitivity Meter should be used.]
We can unlist the data.frame to a vector and get the sorted unique elements
sort(unique(unlist(df)))
When we do rbind(), the result is a matrix, and calling unique() on a matrix dispatches to the unique.matrix method:
methods('unique')
#[1] unique.array unique.bibentry* unique.data.frame unique.data.table* unique.default unique.IDate* unique.ITime*
#[8] unique.matrix unique.numeric_version unique.POSIXlt unique.warnings
which looks for unique rows, since the default MARGIN is 1. Instead, if we convert 'price' to a vector with as.vector() or c(), unique() returns the unique elements as expected:
sort(unique(c(price)))
#[1] 1 3 4 5 6 7 8 9 10 11 12
If we use unique.default
sort(unique.default(price))
#[1] 1 3 4 5 6 7 8 9 10 11 12
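For the bigger-picture objective (exact prices rather than $0.50 buckets), the empirical cumulative distribution function stats::ecdf() evaluated at the unique prices gives a cumulative share of respondents directly. A rough sketch with the mock data, assuming the same "share whose answer is at or below each price" definition used for the bucketed table in the question:
price <- sort(unique(unlist(df)))

# Share of respondents whose "too cheap" answer is at or below each price,
# and the inverted "not a bargain" share (1 minus the cumulative bargain share)
too.cheap   <- ecdf(df$a)(price)
not.bargain <- 1 - ecdf(df$b)(price)

data.frame(price, too.cheap, not.bargain)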
What I am trying to do is merge my data frame by rows. For instance, let's say my data.frame is called data and it looks like this: it has the following columns; Subject contains 5s and 6s, Phase contains Post-Lure and Pre-Lure, Type contains Visual and Auditory, and Memory contains memory scores. Ex:
Subject Phase Type Memory
1 5 Post-Lure Visual 0.80000000
2 5 Post-Lure Auditory 0.70666667
3 5 Pre-Lure Visual 0.40000000
4 5 Pre-Lure Auditory 0.61333333
5 6 Post-Lure Visual 0.80000000
6 6 Post-Lure Auditory 0.54666667
As you can see from the data above, the subject is repeated (subject 5 is the same person, but the phase and/or type are now different). Thus, I am looking for code that will put all of the data for each subject on the same row. Hence, the memory scores and the different types and phases each subject was exposed to would simply become additional columns on the same row. I feel aggregate may do the trick, but is it possible to use it without applying a function to each of the numbers? Any help would be greatly appreciated. Thank you.
As mentioned in the comment, you need to add an "indicator" variable of some sort (for example, which "time" each row represents for a given subject).
That can be done with ave and seq_along:
mydf$time <- with(mydf, ave(Subject, Subject, FUN=seq_along))
Next, you can use reshape() to go from "long" to "wide".
reshape(mydf, direction = "wide",
        idvar = "Subject", timevar = "time")
# Subject Phase.1 Type.1 Memory.1 Phase.2 Type.2 Memory.2
# 1 5 Post-Lure Visual 0.8 Post-Lure Auditory 0.7066667
# 5 6 Post-Lure Visual 0.8 Post-Lure Auditory 0.5466667
# Phase.3 Type.3 Memory.3 Phase.4 Type.4 Memory.4
# 1 Pre-Lure Visual 0.4 Pre-Lure Auditory 0.6133333
# 5 <NA> <NA> NA <NA> <NA> NA
If you wanted to use the "reshape2" or "tidyr" packages, you would first have to get the data into a "long" form using melt or gather, but note that in the process your variable types would be converted, since a single value column would then contain several data types.
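A rough sketch of that route with "reshape2" (illustrative only; the "tidyr" version with gather/spread is analogous). After melt(), Phase, Type and Memory share a single value column, so everything is coerced to character, which is the type conversion mentioned above:
library(reshape2)

long <- melt(mydf, id.vars = c("Subject", "time"))  # stacks Phase, Type and Memory
wide <- dcast(long, Subject ~ variable + time)      # columns Phase_1, Type_1, Memory_1, ...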
Do you just want to reshape your data? The question isn't clear. Let's call your dataframe df. Then
library(reshape2)
dcast(df, Subject ~ Phase + Type)
will produce
Subject Post-Lure_Auditory Post-Lure_Visual Pre-Lure_Auditory Pre-Lure_Visual
1 5 0.7066667 0.8 0.6133333 0.4
2 6 0.5466667 0.8 NA NA
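Note that dcast() guesses which column holds the values (here it reports that it is using Memory); this can be made explicit with value.var:
dcast(df, Subject ~ Phase + Type, value.var = "Memory")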
I have a data set consisting of 2000 individuals. For each individual i (i = 1, ..., 2000), the data set contains n repeated situations. Letting d denote this data set, each row of d is indexed by i and n. Among other variables, d has a variable pid which takes the same value for an individual across the different rows (situations).
Taking into consideration the panel nature of the data, I want to re-sample d (as in bootstrap):
with replacement,
store each re-sampled data set as a data frame
I considered using the sample function but could not make it work. I am a new user of R and have no programming skills.
The data set consists of many variables, but all the variables have numeric values. The data set is as follows.
pid x y z
1 10 2 -5
1 12 3 -4.5
1 14 4 -4
1 16 5 -3.5
1 18 6 -3
1 20 7 -2.5
2 22 8 -2
2 24 9 -1.5
2 26 10 -1
2 28 11 -0.5
2 30 12 0
2 32 13 0.5
The first six rows are for the first person, for which pid = 1, and the next six rows, with pid = 2, are different observations for the second person.
This should work for you:
z <- replicate(100,
               d[d$pid %in% sample(unique(d$pid), 2000, replace = TRUE), ],
               simplify = FALSE)
The result z will be a list of dataframes you can do whatever with.
EDIT: this is a little wordy, but it will deal with duplicated rows. replicate() performs a given operation a set number of times (in the example below, 4). I then sample the unique values of pid (in this case 3 of them, with replacement) and extract the rows of d corresponding to each sampled value. The combination of do.call("rbind", ...) and lapply() deals with the duplicates that are not handled well by the code above. Thus, instead of generating data frames with potentially different lengths, this code generates a data frame for each sampled pid and then uses do.call("rbind", ...) to stick them back together within each iteration of replicate().
z <- replicate(4,
               do.call("rbind",
                       lapply(sample(unique(d$pid), 3, replace = TRUE),
                              function(x) d[d$pid == x, ])),
               simplify = FALSE)
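Each element of z is then a complete bootstrap sample that can be processed like the original data. For instance (an illustrative statistic, not something asked for in the question), the mean of x in every re-sample:
boot_means <- sapply(z, function(s) mean(s$x))
boot_means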