I have a very large data set:
mdf <- data.frame(sn = 1:40, var = rep(1:10, 4), block = rep(1:4, each = 10),
                  yld = 1:40)
I have a small data set:
blockdf <- data.frame(block = 1:4, yld = c(10, 20, 30, 40)) # block means
All variables in both data sets except yld are factors.
I want to subtract the block means (blockdf$yld) from mdf$yld, such that each block mean is matched to the corresponding block in the mdf data frame. For example, the value 10 will be subtracted from the yld of every var within the first block of mdf, 20 from the yld of every var within the second block, and so on.
Please note that I might sometimes have an unbalanced number of var within the reps, so I want to write this in such a way that it can handle the unbalanced situation.
This should do the trick
block_match <- match(mdf$block, blockdf$block)  # row of blockdf for each row of mdf
transform(mdf, yld = yld - blockdf[block_match, "yld"])
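Since the question specifically asks about unbalanced data: the match()-based lookup is purely row-wise, so it doesn't care how many var fall in each block. A quick sketch, using a hypothetical unbalanced copy of mdf with a few rows dropped:
mdf_unbal <- mdf[-c(3, 17, 18), ]            # blocks 1 and 2 now unbalanced
bm <- match(mdf_unbal$block, blockdf$block)  # blockdf row for each remaining row
head(transform(mdf_unbal, yld = yld - blockdf[bm, "yld"]))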
This should work
newdf <- merge(x=mdf, y=blockdf, by="block", suffixes = c("",".blockmean"))
newdf$newvr <- newdf$yld-newdf$yld.blockmean
print(newdf, row.names=FALSE)
block sn var yld yld.blockmean newvr
1 1 1 1 10 -9
1 2 2 2 10 -8
1 3 3 3 10 -7
1 4 4 4 10 -6
1 5 5 5 10 -5
1 6 6 6 10 -4
1 7 7 7 10 -3
1 8 8 8 10 -2
1 9 9 9 10 -1
1 10 10 10 10 0
2 11 1 11 20 -9
2 12 2 12 20 -8
...........................
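As an aside, if blockdf weren't given, the block means could be computed from mdf itself. A minimal sketch (blockdf2 is just an illustrative name):
blockdf2 <- aggregate(yld ~ block, data = mdf, FUN = mean)  # block means from mdf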
I have this data set in R:
first_variable = rexp(100,100)
second_variable = rexp(100,100)
n_obs = 1:100
question_data = data.frame(n_obs, first_variable, second_variable)
I want to modify this data set so that:
Rows 1-10 have id: 1,2,3,4,5,6,7,8,9,10
Rows 11-20 have id: 1,2,3,4,5,6,7,8,9,10
Rows 21-30 have id: 1,2,3,4,5,6,7,8,9,10
etc.
In other words, the ids 1-10 repeat for each set of 10 rows.
I found this code that I thought would work:
# here, n = 10 (a set of n = 10 rows)
bloc_len <- 10
question_data$id <-
rep(seq(1, 1 + nrow(question_data) %/% bloc_len), each = bloc_len, length.out = nrow(question_data))
But this is not working: it gives every row within each set of 10 rows the same id:
n_obs first_variable second_variable id
1 1 0.006223412 0.0258968583 1
2 2 0.004473815 0.0065543554 1
3 3 0.011745754 0.0005061101 1
4 4 0.005620351 0.0033549525 1
5 5 0.045860202 0.0132625822 1
6 6 0.002477348 0.0068517981 1
I would have wanted something like this:
n_obs first_variable second_variable id
1 1 0.0062234115 0.0258968583 1
2 2 0.0044738150 0.0065543554 2
3 3 0.0117457544 0.0005061101 3
4 4 0.0056203508 0.0033549525 4
5 5 0.0458602019 0.0132625822 5
6 6 0.0024773478 0.0068517981 6
7 7 0.0049527013 0.0047461094 7
8 8 0.0058581805 0.0108604478 8
9 9 0.0041171801 0.0002445268 9
10 10 0.0090667287 0.0019289691 10
11 11 0.0039002449 0.0135441919 1
12 12 0.0064558661 0.0230979415 2
13 13 0.0104993267 0.0005609776 3
14 14 0.0153162705 0.0038364012 4
15 15 0.0107109676 0.0183818539 5
16 16 0.0131620151 0.0029710189 6
17 17 0.0244441763 0.0095645480 7
18 18 0.0058112355 0.0125754349 8
19 19 0.0005022588 0.0156614272 9
20 20 0.0007572985 0.0049964333 10
21 21 0.0276024376 0.0024303513 1
Is this possible?
Thank you!
Instead of each, try using times:
question_data$id <-
rep(seq(bloc_len), times = nrow(question_data) %/% bloc_len, length.out = nrow(question_data))
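The difference between the two arguments, as a minimal illustration:
rep(1:3, each = 2)   # 1 1 2 2 3 3 -- repeats each element before moving on
rep(1:3, times = 2)  # 1 2 3 1 2 3 -- repeats the whole sequence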
As in the example shared, if the number of rows in the data (100) is exactly divisible by the number of ids (10), then we can use R's recycling property to repeat the ids.
bloc_len <- 10
question_data$id <- seq_len(bloc_len)
If they are not exactly divisible, we can use rep:
question_data$id <- rep(seq_len(bloc_len), length.out = nrow(question_data))
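For instance, with a hypothetical 7-row data set and a block length of 3, length.out simply truncates the final block:
rep(seq_len(3), length.out = 7)  # 1 2 3 1 2 3 1 -- last block cut short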
I have an optimization problem where I am trying to maximize the sum of column z while picking exactly one row per unique value of column x, subject to the constraint that the sum of column y over the picked rows must be less than or equal to (in this example) 23.
For example, I have this sample data:
d <- data.frame(x = c(1,1,1,2,2,2,3,3,3), y = c(9,7,5,9,7,5,9,7,5), z = c(25,20,5,20,10,5,10,5,3))
Which looks like this:
x y z
1 1 9 25
2 1 7 20
3 1 5 5
4 2 9 20
5 2 7 10
6 2 5 5
7 3 9 10
8 3 7 5
9 3 5 3
The result should look like this:
x y z
1 1 9 25
4 2 9 20
9 3 5 3
How do I set this problem up in the lpSolve::lp function?
You are trying to maximize the sum of the z values of the selected options subject to two types of constraints:
The sum of the y values for the selected options does not exceed 23
You select exactly one value for each unique x value
You can create a binary variable for each option and then solve with lpSolve:
d <- data.frame(x = c(1,1,1,2,2,2,3,3,3), y = c(9,7,5,9,7,5,9,7,5), z = c(25,20,5,20,10,5,10,5,3))
library(lpSolve)
all.x <- unique(d$x)
d[lp(direction = "max",
     objective.in = d$z,                               # maximize total z
     const.mat = rbind(outer(all.x, d$x, "=="), d$y),  # one row per x value, plus the y row
     const.dir = rep(c("==", "<="), c(length(all.x), 1)),
     const.rhs = rep(c(1, 23), c(length(all.x), 1)),   # exactly one per x; sum(y) <= 23
     all.bin = TRUE)$solution == 1, ]
# x y z
# 1 1 9 25
# 4 2 9 20
# 9 3 5 3
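To see how the constraint matrix is assembled, you can print it directly: each == row flags the three options sharing one x value (the pick-exactly-one constraints), and the last row carries the y weights (the budget constraint). A sketch using the objects defined above:
rbind(outer(all.x, d$x, "==") * 1, d$y)  # 3 selection rows + 1 budget row, 9 option columns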
Seems simple but I can't figure it out.
I have a bunch of animal location data (217 individuals) as a single data frame. I'm trying to randomly select X locations per individual for further analysis, with the caveat that X is within the range of 6-156.
So I'm trying to set up a loop that first randomly selects a value within the range of 6-156, then uses that value (say 56) to randomly extract 56 locations from the first individual animal, and so on.
for(i in unique(ANIMALS$ID)){
sub<-sample(6:156,1)
sub2<-i([sample(nrow(i),sub),])
}
This approach didn't seem to work so I tried tweaking it...
for(i in unique(ANIMALS$ID)){
sub<-sample(6:156,1)
rand<-i[sample(1:nrow(i),sub,replace=FALSE),]
}
This did not work either. Any suggestions or previous postings would be helpful!
Head of the data file (ANIMALS is the name of the data frame; ID indicates unique individuals):
FID X Y MONTH DAY YEAR HOUR MINUTE SECOND ELKYR SOURCE ID animalid
1 0 510313 4813290 9 5 2008 22 30 0 342008 FG 1 1
2 1 510382 4813296 9 6 2008 1 30 0 342008 FG 1 1
3 2 510385 4813311 9 6 2008 2 0 0 342008 FG 1 1
4 3 510385 4813394 9 6 2008 3 30 0 342008 FG 1 1
5 4 510386 4813292 9 6 2008 2 30 0 342008 FG 1 1
6 5 510386 4813431 9 6 2008 4 1 0 342008 FG 1 1
Here's one way using mapply. This function takes two lists (or something that can be coerced into a list) and applies function FUN to corresponding elements.
# simulate some data
xy <- data.frame(animal = rep(1:10, each = 10), loc = runif(100))
# calculate number of samples for individual animal
num.samples.per.animal <- sample(3:6, length(unique(xy$animal)), replace = TRUE)
num.samples.per.animal
[1] 6 3 4 4 6 3 3 6 3 5
# subset random x number of rows from each animal
result <- do.call("rbind",
mapply(num.samples.per.animal, split(xy, f = xy$animal), FUN = function(x, y) {
y[sample(1:nrow(y), x),]
}, SIMPLIFY = FALSE)
)
result
animal loc
7 1 0.99483999
1 1 0.50951321
10 1 0.36505294
6 1 0.34058842
8 1 0.26489107
9 1 0.47418823
13 2 0.27213396
12 2 0.28087775
15 2 0.22130069
23 3 0.33646632
21 3 0.02395097
28 3 0.53079981
29 3 0.85287600
35 4 0.84534073
33 4 0.87370167
31 4 0.85646813
34 4 0.11642335
46 5 0.59624723
48 5 0.15379729
45 5 0.57046122
42 5 0.88799675
44 5 0.62171858
49 5 0.75014593
60 6 0.86915983
54 6 0.03152932
56 6 0.66128549
64 7 0.85420774
70 7 0.89262455
68 7 0.40829671
78 8 0.19073661
72 8 0.20648832
80 8 0.71778913
73 8 0.77883677
75 8 0.37647108
74 8 0.65339300
82 9 0.39957202
85 9 0.31188471
88 9 0.10900795
100 10 0.55282999
95 10 0.10145296
96 10 0.09713218
93 10 0.64900866
94 10 0.76099256
EDIT
Here is another (more straightforward) approach that also handles cases where the number of available rows is smaller than the number of samples to be drawn.
set.seed(357)
result <- do.call("rbind",
by(xy, INDICES = xy$animal, FUN = function(x) {
avail.obs <- nrow(x)
num.rows <- sample(3:15, 1)
while (num.rows > avail.obs) {
message("Sample to be larger than available data points, repeating sampling.")
num.rows <- sample(3:15, 1)
}
x[sample(1:avail.obs, num.rows), ]
}))
result
I like Stack Overflow because I learn so much. @RomanLustrik provided a simple solution; mine is straightforward as well:
# simulate some data
xy <- data.frame(animal = rep(1:10, each = 10), loc = runif(100))
newVec <- NULL  # start empty; rbind() below builds up the data frame
for (i in unique(xy$animal)) {
  # Sample a number between 1 and 10 (or 6 and 156, if you need)
  samp <- sample(1:10, 1)
  # Determine which rows of xy correspond to animal i
  rows <- which(xy$animal == i)
  # From xy, draw samp distinct rows for animal i
  newVec1 <- xy[sample(rows, samp), ]
  # Append everything to the same new data frame
  newVec <- rbind(newVec, newVec1)
}
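One design note on this loop: growing newVec with rbind() inside the loop copies it on every iteration, which gets slow with many animals. A sketch of the same idea that collects the pieces in a list and binds once (same illustrative sampling range of 1-10):
pieces <- lapply(unique(xy$animal), function(a) {
  rows <- which(xy$animal == a)
  xy[sample(rows, sample(1:10, 1)), ]  # up to 10 distinct rows per animal
})
newVec <- do.call(rbind, pieces)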
I have a large data set like this:
df <- data.frame(group = c(rep(1, 6), rep(5, 6)), score = c(30, 10, 22, 44, 6, 5, 20, 35, 2, 60, 14, 5))
group score
1 1 30
2 1 10
3 1 22
4 1 44
5 1 6
6 1 5
7 5 20
8 5 35
9 5 2
10 5 60
11 5 14
12 5 5
...
I want to do a subtraction for each pair of neighboring scores within each group; if the difference is greater than 30, remove the smaller score. Note that the comparison always uses the last score that was kept. For example, within group 1: 30-10=20<30, 10-22=-12<30, 22-44=-22<30, 44-6=38>30 (remove 6), 44-5=39>30 (remove 5)... The expected output should look like this:
group score
1 1 30
2 1 10
3 1 22
4 1 44
5 5 20
6 5 35
7 5 60
...
Does anyone have an idea how to accomplish this?
Like this?
repeat {
  df$diff <- unlist(by(df$score, df$group, function(x) c(0, -diff(x))))
  if (all(df$diff <= 30)) break   # stop once no difference exceeds 30
  df <- df[df$diff <= 30, ]       # drop rows more than 30 below the preceding score
}
df$diff <- NULL
df
# group score
# 1 1 30
# 2 1 10
# 3 1 22
# 4 1 44
# 7 5 20
# 8 5 35
# 10 5 60
This seems to require an iterative approach, because the "neighboring score" changes after a row is removed: before you remove 6, the difference 44 - 6 > 30 but 6 - 5 < 30; after you remove 6, the difference 44 - 5 > 30.
So this calculates the difference between successive rows by group (using by(...) and diff(...)), removes the appropriate rows, and repeats the process until no difference exceeds 30.
It's not elegant but it should work:
out <- data.frame(group = numeric(), score = numeric())
# cycle through the groups
for (g in levels(as.factor(df$group))) {
  temp <- subset(df, df$group == g)
  # now go through the scores, keeping the last retained score in 'left'
  left <- temp$score[1]
  for (s in seq(2, length(temp$score))) {
    if (left - temp$score[s] > 30) {  # test the condition
      temp$score[s] <- NA             # mark the smaller score for removal
    } else {
      left <- temp$score[s]           # condition not met: this score becomes the new reference
    }
  }
  # add only the rows without NAs to the output
  out <- rbind(out, temp[which(!is.na(temp$score)), ])
}
There should be a way to do this using ave, but carrying the last kept value forward after removing a score whose diff is >30 is the tricky part! I'd appreciate a more elegant solution if there is one.
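For what it's worth, here is a minimal single-pass sketch of that idea: a small helper (keep_rows, a made-up name) scans each group once, carrying the last kept score, and is applied per group with tapply. It assumes the rows are ordered by group, as in the example:
keep_rows <- function(s) {
  keep <- logical(length(s))
  keep[1] <- TRUE
  left <- s[1]                   # last kept score
  for (i in seq_along(s)[-1]) {
    if (left - s[i] > 30) {
      keep[i] <- FALSE           # drop the smaller score
    } else {
      keep[i] <- TRUE
      left <- s[i]               # this score becomes the new reference
    }
  }
  keep
}
df[unlist(tapply(df$score, df$group, keep_rows)), ]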
You can try
df
## group score
## 1 1 30
## 2 1 10
## 3 1 22
## 4 1 44
## 5 1 6
## 6 1 5
## 7 5 20
## 8 5 35
## 9 5 2
## 10 5 60
## 11 5 14
## 12 5 5
tmp <- df[!unlist(tapply(df$score, df$group, FUN = function(x) c(FALSE, -diff(x) > 30), simplify = TRUE)), ]
while (!identical(df, tmp)) {
  df <- tmp
  tmp <- df[!unlist(tapply(df$score, df$group, FUN = function(x) c(FALSE, -diff(x) > 30), simplify = TRUE)), ]
}
tmp
## group score
## 1 1 30
## 2 1 10
## 3 1 22
## 4 1 44
## 7 5 20
## 8 5 35
## 10 5 60
Apologies for the seemingly simple question, but I can't seem to find a solution to the following re-arrangement problem.
I'm used to using read.csv to read in files with a header row, but I have an Excel spreadsheet with two 'header' rows: a cell identifier (a, b, c ... g) and three sets of measurements (x, y and z; 1000s of rows each) for each cell:
a b
x y z x y z
10 1 5 22 1 6
12 2 6 21 3 5
12 2 7 11 3 7
13 1 4 33 2 8
12 2 5 44 1 9
The csv file is shown below:
a,,,b,,
x,y,z,x,y,z
10,1,5,22,1,6
12,2,6,21,3,5
12,2,7,11,3,7
13,1,4,33,2,8
12,2,5,44,1,9
How can I get to a data.frame in R as shown below?
cell x y z
a 10 1 5
a 12 2 6
a 12 2 7
a 13 1 4
a 12 2 5
b 22 1 6
b 21 3 5
b 11 3 7
b 33 2 8
b 44 1 9
Use base R reshape():
temp = read.delim(text="a,,,b,,
x,y,z,x,y,z
10,1,5,22,1,6
12,2,6,21,3,5
12,2,7,11,3,7
13,1,4,33,2,8
12,2,5,44,1,9", header=TRUE, skip=1, sep=",")
names(temp)[1:3] = paste0(names(temp[1:3]), ".0")
OUT = reshape(temp, direction="long", ids=rownames(temp), varying=1:ncol(temp))
OUT
# time x y z id
# 1.0 0 10 1 5 1
# 2.0 0 12 2 6 2
# 3.0 0 12 2 7 3
# 4.0 0 13 1 4 4
# 5.0 0 12 2 5 5
# 1.1 1 22 1 6 1
# 2.1 1 21 3 5 2
# 3.1 1 11 3 7 3
# 4.1 1 33 2 8 4
# 5.1 1 44 1 9 5
Basically, you should just skip the first row, where the letters a-g appear every third column. Since the sub-column names are all the same, R will automatically append a grouping number to each column after the third one; so we need to add a grouping number to the first three columns ourselves.
You can either then create an "id" variable, or, as I've done here, just use the row names for the IDs.
You can change the "time" variable to your "cell" variable as follows:
# Change the following to the number of levels you actually have
OUT$cell = factor(OUT$time, labels=letters[1:2])
Then, drop the "time" column:
OUT$time = NULL
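At this point OUT has the layout the question asked for; a quick way to inspect it:
OUT[, c("cell", "x", "y", "z")]  # cell label, then the three measurements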
Update
To answer a question from the comments: if the first label were something other than a letter, this would still pose no problem. The sequence I would take is as follows:
temp = read.csv("path/to/file.csv", skip=1, stringsAsFactors = FALSE)
GROUPS = read.csv("path/to/file.csv", header=FALSE,
nrows=1, stringsAsFactors = FALSE)
GROUPS = unlist(GROUPS)                         # flatten the one-row data frame
GROUPS = GROUPS[!is.na(GROUPS) & GROUPS != ""]  # blank cells may read as "" rather than NA
names(temp)[1:3] = paste0(names(temp[1:3]), ".0")
OUT = reshape(temp, direction="long", ids=rownames(temp), varying=1:ncol(temp))
OUT$cell = factor(OUT$time, labels=GROUPS)
OUT$time = NULL