Adding row in R with next day and 0 in each column - r

I have a data.frame with 4 columns. The first column is the_day, running from 11/1/15 to 11/30/15. The next 3 columns hold the values for each day based on amount_raised. However, some dates are missing, because there were no values in the next 3 columns (no money was raised).
For example, 11/3/15 is missing. What I want to do is add a row between 11/2/15 and 11/4/15 with that date and zeros in the next 3 columns, so it would read like this:
11/3/2015 0 0 0
Do I have to create a vector and then add it into the existing data.frame? I feel like there has to be a quicker way.

This should work:
date_seq <- seq(min(df$the_day), max(df$the_day), by = 1)
rbind(df, cbind(the_day = as.character(date_seq[!date_seq %in% df$the_day]),
                inf = "0", specified = "0", both = "0"))
# the_day inf specified both
# 1 2015-11-02 1.32 156 157.32
# 2 2015-11-04 4.25 40 44.25
# 3 2015-11-05 3.25 25 28.25
# 4 2015-11-06 1 15 16
# 5 2015-11-07 4.75 10 14.75
# 6 2015-11-08 32 0 32
# 7 2015-11-03 0 0 0
If you want to sort it by the_day, assign the result to a variable and use the order function:
ans <- rbind(df, cbind(the_day = as.character(date_seq[!date_seq %in% df$the_day]),
                       inf = "0", specified = "0", both = "0"))
ans[order(ans$the_day), ]
# the_day inf specified both
# 1 2015-11-02 1.32 156 157.32
# 7 2015-11-03 0 0 0
# 2 2015-11-04 4.25 40 44.25
# 3 2015-11-05 3.25 25 28.25
# 4 2015-11-06 1 15 16
# 5 2015-11-07 4.75 10 14.75
# 6 2015-11-08 32 0 32
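One caveat with this approach: cbind() builds a character matrix here, so after the rbind() the three value columns may come back as character (or factor, depending on your R version). A small sketch to restore them to numeric, assuming the column names above:
num_cols <- c("inf", "specified", "both")
# as.character() first guards against factor columns in older R versions
ans[num_cols] <- lapply(ans[num_cols], function(x) as.numeric(as.character(x)))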

data.frames are not efficient to work with row-wise internally. I would suggest something along the following lines (a sketch in code follows the list):
create an empty (all-zero) 30x3 matrix; this will hold your amount_raised values
create a complete sequence of dates from 11/1 to 11/30
for each existing date, find its position in the complete sequence (use the match() function)
copy the corresponding row of your data frame to the matched row of the matrix
finally, make a new data frame out of the complete sequence and the matrix
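A minimal sketch of those steps, assuming df$the_day is of class Date and the three value columns are named inf, specified and both as in the other answer (full_days, m, idx and result are made-up names):
# complete sequence of all 30 days of November 2015
full_days <- seq(as.Date("2015-11-01"), as.Date("2015-11-30"), by = "day")

# empty (all-zero) 30 x 3 matrix for the amount_raised columns
m <- matrix(0, nrow = length(full_days), ncol = 3,
            dimnames = list(NULL, c("inf", "specified", "both")))

# position of each existing date in the complete sequence
idx <- match(df$the_day, full_days)

# copy the known rows into the matched rows of the matrix
m[idx, ] <- as.matrix(df[c("inf", "specified", "both")])

# rebuild a data frame; days with no money raised keep their zeros
result <- data.frame(the_day = full_days, m)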

Related

Apply multiple math operations in a single column using another groupby column as index

I have the df below, with 5 article series. I have 5 divisors (27, 43, 54, 112, 12), one per series, which I want to divide the Art_count column by, using the groupby column Serie as the key. Example: I take the Serie 1 group, divide its Art_count by 27, and store the new value in a new column called Art_norm.
I want code that looks something like:
for column in df['Serie']:
    if df.loc[column,'Serie'] == 1 & df.loc[column,'Serie'] == 2 & df.loc[column,'Serie'] == 3 & df.loc[column,'Serie'] == 4 & df.loc[column,'Serie'] == 5:
        df['Art_norm'] = df.loc[df.Serie.eq(1), 'Art_nr']/27
        df['Art_norm'] = df.loc[df.Serie.eq(2), 'Art_nr']/43
        df['Art_norm'] = df.loc[df.Serie.eq(3), 'Art_nr']/54
        df['Art_norm'] = df.loc[df.Serie.eq(4), 'Art_nr']/112
        df['Art_norm'] = df.loc[df.Serie.eq(5), 'Art_nr']/12
print(df)
Of course my code does not work, but I don't know how to perform multiple math calculations on a single column with a loop.
Month_Year Serie Art_count
0 2012-01-01 1 41
1 2012-01-01 1 23
2 2012-01-01 2 72
3 2012-01-01 2 54
4 2012-01-01 3 127
5 2012-01-01 3 98
6 2012-01-01 4 387
7 2012-01-01 4 424
8 2012-01-01 5 197
9 2012-01-01 5 124

Join spatial features with dataframe by id with inconsistent format

Hello everyone, I was hoping I could get some help with this issue:
I have a shapefile with 2347 features that correspond to 3172 units; perhaps when the original file was created there were some duplicated geometries, and they decided to arrange them like this:
Feature gis_id
1 "1"
2 "2"
3 "3,4,5"
4 "6,8"
5 "7"
6 "9,10,13"
... and so on, up to the 3172 units and 2347 features.
On the other side, my data table has 72956 observations (about 16 columns) with data corresponding to the gis_id from the shapefile. However, this table has a single gis_id per observation:
head(hru_ls)
jday mon day yr unit gis_id name sedyld tha sedorgn kgha sedorgp kgha surqno3 kgha lat3no3 kgha
1 365 12 31 1993 1 1 hru0001 0.065 0.861 0.171 0.095 0
2 365 12 31 1993 2 2 hru0002 0.111 1.423 0.122 0.233 0
3 365 12 31 1993 3 3 hru0003 0.024 0.186 0.016 0.071 0
4 365 12 31 1993 4 4 hru0004 6.686 16.298 1.040 0.012 0
5 365 12 31 1993 5 5 hru0005 37.220 114.683 6.740 0.191 0
6 365 12 31 1993 6 6 hru0006 6.597 30.949 1.856 0.021 0
surqsolp kgha usle tons sedmin ---- tileno3 ----
1 0.137 0 0.010 0
2 0.041 0 0.009 0
3 0.014 0 0.001 0
4 0.000 0 0.175 0
5 0.000 0 0.700 0
6 0.000 0 0.227 0
with multiple records for each unit (20 years of data).
I would like to merge the geometry data of my shapefile into my data table. I've done this before, I think with sp::merge, but with a shapefile that did not have multiple ids per geometry/feature.
Is there a way to condition the merge so that each observation in the data table gets the geometry of the feature whose gis_id field contains its gis_id value?
This is a very intriguing question, so I gave it a shot. My answer is probably not the quickest or most concise way of going about this, but it works (at least for your sample data). Notice that this approach is fairly sensitive to the formatting of the data in shapefile$gis_id (see regex).
# your spatial data
shapefile <- data.frame(feature = 1:6, gis_id = c("1", "2", "3,4,5", "6,8", "7", "9,10,13"))
# your tabular data
hru_ls <- data.frame(unit = 1:6, gis_id = paste(1:6))
# loop over all gis_ids in your tabular data
# perhaps this could be vectorized?
gis_ids <- unique(hru_ls$gis_id)
for (id in gis_ids) {
  # Define regex to match gis_ids
  id_regex <- paste0("(,|^)", id, "(,|$)")
  # Get row in shapefile that matches regex
  searchterm <- lapply(shapefile$gis_id, function(x) grepl(pattern = id_regex, x = x))
  rowmatch <- which(searchterm == TRUE)
  # Return shapefile feature id that matches tabular gis_id
  hru_ls[hru_ls$gis_id == id, "gis_feature_id"] <- shapefile[rowmatch, "feature"]
}
Since you didn't provide the geometry fields in your question, I just matched on Feature in your spatial data. You could either add an additional step that merges based on Feature, or replace "feature" in shapefile[rowmatch, "feature"] with your geometry fields.
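As the "perhaps this could be vectorized?" comment hints, the lapply() step can indeed be dropped, since grepl() is itself vectorized over its x argument; a minimal sketch of the tighter loop, under the same sample data:
for (id in unique(hru_ls$gis_id)) {
  id_regex <- paste0("(,|^)", id, "(,|$)")
  # grepl() returns one logical per shapefile row, so no lapply() is needed
  rowmatch <- which(grepl(id_regex, shapefile$gis_id))
  hru_ls[hru_ls$gis_id == id, "gis_feature_id"] <- shapefile[rowmatch, "feature"]
}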

R- Subtracting the mean of a group from each element of that group in a dataframe

I am trying to merge a vector 'means' to a dataframe.
My dataframe looks like this: Data = growth (see the example further down).
I first calculated all the means for the different groups (1 group = population + temperature + size + replicat) using this command:
means<-aggregate(TL ~ Population + Temperature + Replicat + Size + Measurement, data=growth, list=growth$Name, mean)
Then I selected the means for Measurement 1 as follows, as I am only interested in these means:
meansT0<-means[which(means$Measurement=="1"),]
Now, I would like to merge this vector of mean values into my dataframe (= growth) so that the right mean of each group corresponds to the right part of the dataframe.
The goal is then to subtract the mean of each group (at Measurement 1) from each element of the dataframe, based on its group (for all Measurements other than Measurement 1). Maybe there is no need to add the means column to the dataframe? Do you know any command to do that?
Edit [27.06.18]:
I made up this simplified dataframe; I hope this helps understanding.
So, what I want is to subtract, for each individual in the dataframe and for each measurement (here only Measurement 1 and Measurement 2, normally I have more), the mean of its belonging group at Measurement 1.
So, if I get the means by group (1 group = Population + Temperature + Measurement):
means<-aggregate(TL ~ Population + Temperature + Measurement, data=growth, list=growth$Name, mean)
means
I get these mean values (in this example):
Population Temperature Measurement       TL
       JUB          15           1 12.00000
       JUB          20           1 15.66667
       JUB          15           2 17.66667
       JUB          20           2 18.66667
       JUB          15           3 23.66667
       JUB          20           3 24.33333
We are only interested in the means at Measurement 1 (the first two rows above). For each individual in the dataframe, I want to subtract the mean of its belonging group at Measurement 1; in this example (see the dataframe built with the R command below):
- for the group JUB + 15 + Measurement 1, mean = 12
- for the group JUB + 20 + Measurement 1, mean = 15.66
growth <- data.frame(
  Population  = rep("JUB", 18),
  Measurement = c("1","1","1","1","1","1", "2","2","2","2","2","2", "3","3","3","3","3","3"),
  Temperature = c("15","15","15","20","20","20", "15","15","15","20","20","20", "15","15","15","20","20","20"),
  TL = c(11,12,13,15,18,14, 16,17,20,21,19,16, 25,22,24,26,24,23),
  New_TL = c("11-12","12-12","13-12","15-15.66","18-15.66","14-15.66",
             "16-12","17-12","20-12","21-15.66","19-15.66","16-15.66",
             "25-12","22-12","24-12","26-15.66","24-15.66","23-15.66"))
print(growth)
I hope that with this, you can understand better what I am trying to do. I have a lot of data, and doing this manually would take a lot of time and increase the risk of mistakes.
Here is an option with tidyverse. After grouping by the group columns, use mutate_at, specifying the columns of interest, and take the difference between each column (.) and the mean of its Measurement-1 values.
library(tidyverse)
growth %>%
  # group without Measurement, so the Measurement-1 rows are visible inside
  # each group (grouping by Measurement too would make mean(.[Measurement == 1]) NaN)
  group_by(Population, Temperature, Replicat, Size) %>%
  mutate_at(vars(HL, TL),
            funs(MeanGroupDiff = . - mean(.[Measurement == 1])))
Using a reproducible example with the mtcars dataset:
data(mtcars)
mtcars %>%
  group_by(cyl, vs) %>%
  mutate_at(vars(mpg, disp), funs(MeanGroupDiff = . - mean(.[am == 1])))
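As a side note, funs() has since been deprecated in dplyr; assuming dplyr >= 1.0, the same idea can be written with across() — a sketch, with the same assumed columns HL and TL:
growth %>%
  group_by(Population, Temperature, Replicat, Size) %>%
  # subtract each group's Measurement-1 mean from every value in the column
  mutate(across(c(HL, TL),
                ~ .x - mean(.x[Measurement == 1]),
                .names = "{.col}_MeanGroupDiff"))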
Have you considered using the data.table package? It is very well suited for the kind of grouping, filtering, joining, and aggregation operations you describe, and might save you a great deal of time in the long run.
The code below shows how a workflow similar to the one you described, but based on the built-in mtcars data set, might look using data.table.
To be clear, there are also ways to do what you describe using base R as well as other packages like dplyr; I am just throwing out a suggestion based on what I have found most useful for my own work.
library(data.table)
## Convert mtcars to a data.table
## only include columns `mpg`, `cyl`, `am` and `gear` for brevity
DT <- as.data.table(mtcars)[, .(mpg, cyl, am, gear)]
## Take a subset where `cyl` is equal to 6
DT <- DT[cyl == 6]
## Calculate the grouped mean based on `gear` and `am` as grouping variables
DT[, group_mpg_avg := mean(mpg), keyby = .(gear, am)]
## Calculate each row's difference from the group mean
DT[, mpg_diff_from_group := mpg - group_mpg_avg]
print(DT)
# mpg cyl am gear group_mpg_avg mpg_diff_from_group
# 1: 21.4 6 0 3 19.75 1.65
# 2: 18.1 6 0 3 19.75 -1.65
# 3: 19.2 6 0 4 18.50 0.70
# 4: 17.8 6 0 4 18.50 -0.70
# 5: 21.0 6 1 4 21.00 0.00
# 6: 21.0 6 1 4 21.00 0.00
# 7: 19.7 6 1 5 19.70 0.00
Consider by to subset your data frame by the grouping factors (leaving out Measurement, so that group 1 can be compared against all the other groups). Then run an ifelse conditional calculation for the needed columns. Since by returns a list of data frames, bind them all together at the end with do.call():
df_list <- by(growth, growth[, c("Population", "Temperature")], function(sub) {
  # TL CORRECTION
  sub$Correct_TL <- ifelse(sub$Measurement != 1,
                           sub$TL - mean(subset(sub, Measurement == 1)$TL),
                           sub$TL)
  # ADD OTHER CORRECTIONS
  return(sub)
})
final_df <- do.call(rbind, df_list)
Output (using posted data)
final_df
# Population Measurement Temperature TL New_TL Correct_TL
# 1 JUB 1 15 11 11-12 11.0000000
# 2 JUB 1 15 12 12-12 12.0000000
# 3 JUB 1 15 13 13-12 13.0000000
# 7 JUB 2 15 16 16-12 4.0000000
# 8 JUB 2 15 17 17-12 5.0000000
# 9 JUB 2 15 20 20-12 8.0000000
# 13 JUB 3 15 25 25-12 13.0000000
# 14 JUB 3 15 22 22-12 10.0000000
# 15 JUB 3 15 24 24-12 12.0000000
# 4 JUB 1 20 15 15-15.66 15.0000000
# 5 JUB 1 20 18 18-15.66 18.0000000
# 6 JUB 1 20 14 14-15.66 14.0000000
# 10 JUB 2 20 21 21-15.66 5.3333333
# 11 JUB 2 20 19 19-15.66 3.3333333
# 12 JUB 2 20 16 16-15.66 0.3333333
# 16 JUB 3 20 26 26-15.66 10.3333333
# 17 JUB 3 20 24 24-15.66 8.3333333
# 18 JUB 3 20 23 23-15.66 7.3333333
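For completeness, the same centering can be done quite compactly in base R with tapply() plus matrix indexing. A sketch using the posted growth data, this time subtracting the Measurement-1 group mean from every row, including the Measurement-1 rows themselves (matching the New_TL column in the question); base_means and TL_centered are made-up names:
# mean TL per (Population, Temperature) group, computed on Measurement 1 only
base_means <- with(subset(growth, Measurement == "1"),
                   tapply(TL, list(Population, Temperature), mean))

# look up each row's group mean by (Population, Temperature) and subtract it
growth$TL_centered <- growth$TL -
  base_means[cbind(as.character(growth$Population),
                   as.character(growth$Temperature))]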

Removing certain values from the dataframe in R

I am not sure how to do this, but I need to form a cluster from the dataframe mydf below, omitting the Inf (infinite) values and the values greater than 50. That is, I need a table that contains no Inf values and no values greater than 50 (maybe by nullifying those cells). How can I get such a table? The clustering itself is not a problem, since I can do it with the mfuzz package; the only issue is that I want to scale the cluster within the 0-50 margin.
mydf
s.no A B C
1 Inf Inf 999.9
2 0.43 30 23
3 34 22 233
4 3 43 45
You can use NA, the built-in missing data indicator in R:
?NA
By doing this:
mydf[mydf > 50 | mydf == Inf] <- NA
mydf
s.no A B C
1 1 NA NA NA
2 2 0.43 30 23
3 3 34.00 22 NA
4 4 3.00 43 45
Anything you do downstream in R should have NA-handling methods available, even if it's just na.omit.
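For example, a couple of minimal downstream options with the cleaned mydf above:
# most summary functions accept na.rm; mydf[-1] drops the s.no column
colMeans(mydf[-1], na.rm = TRUE)
# or drop the incomplete rows entirely
na.omit(mydf)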

Row Differences in Dataframe by Group

My problem has to do with finding row differences in a data frame by group. I've tried to do this a few ways. Here's an example. The real data set is several million rows long.
set.seed(314)
df = data.frame("group_id"=rep(c(1,2,3),3),
"date"=sample(seq(as.Date("1970-01-01"),Sys.Date(),by=1),9,replace=F),
"logical_value"=sample(c(T,F),9,replace=T),
"integer"=sample(1:100,9,replace=T),
"float"=runif(9))
df = df[order(df$group_id,df$date),]
I ordered it by group_id and date so that the diff function can find the sequential differences, which results in time-ordered differences of the logical, integer, and float variables. I could easily do some sort of apply(df, 2, diff), but I need it by group_id; applied to the whole frame, it produces extra, unneeded differences across group boundaries.
df
group_id date logical_value integer float
1 1 1974-05-13 FALSE 4 0.03472876
4 1 1979-12-02 TRUE 45 0.24493995
7 1 1980-08-18 TRUE 2 0.46662253
5 2 1978-12-08 TRUE 56 0.60039164
2 2 1981-12-26 TRUE 34 0.20081799
8 2 1986-05-19 FALSE 60 0.43928929
6 3 1983-05-22 FALSE 25 0.01792820
9 3 1994-04-20 FALSE 34 0.10905326
3 3 2003-11-04 TRUE 63 0.58365922
So I thought I could break up my data frame into chunks by group_id, and pass each chunk into a user defined function:
create_differences = function(data_group) {
  apply(data_group, 2, diff)
}
But I get errors using the code:
diff_df = lapply(split(df,df$group_id),create_differences)
Error in r[i1] - r[-length(r):-(length(r) - lag + 1L)] : non-numeric argument to binary operator
by(df,df$group_id,create_differences)
Error in r[i1] - r[-length(r):-(length(r) - lag + 1L)] : non-numeric argument to binary operator
As a side note, the data is clean: no NAs, nulls, or blanks, and every group_id has at least 2 rows associated with it.
Edit 1: User alexis_laz correctly pointed out that my function needs to be sapply(data_group, diff).
Using this edit, I get a list of data frames (one list entry per group).
Edit 2:
The expected output would be a combined data frame of differences. Ideally I would like to keep the group_id, but if not, it's not a big deal. Here is what the sample output should look like:
diff_df
group_id date logical_value integer float
[1,] 1 2029 1 41 0.2102112
[2,] 1 260 0 -43 0.2216826
[1,] 2 1114 0 -22 -0.3995737
[2,] 2 1605 -1 26 0.2384713
[1,] 3 3986 0 9 0.09112507
[2,] 3 3485 1 29 0.47460596
Given that you have millions of rows, I think you can move to data.table, which is well suited for by-group operations.
library(data.table)
DT <- as.data.table(df)
## this will order the rows by group and by date
setkeyv(DT, c('group_id', 'date'))
## apply diff to every column, by group
DT[, lapply(.SD, diff), by = group_id]
# group_id date logical_value integer float
# 1: 1 2029 days 1 41 0.21021119
# 2: 1 260 days 0 -43 0.22168257
# 3: 2 1114 days 0 -22 -0.39957366
# 4: 2 1604 days -1 26 0.23847130
# 5: 3 3987 days 0 9 0.09112507
# 6: 3 3485 days 1 29 0.47460596
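A side note on the date column: diff() on a Date gives a difftime, which is why the output above prints "days". If you want plain numbers, as in the expected output in the question, one small variation is to coerce inside the call:
## coerce each column's differences to numeric, so dates lose the "days" unit
DT[, lapply(.SD, function(x) as.numeric(diff(x))), by = group_id]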
It certainly won't be as quick as data.table, but below is an only slightly ugly base solution using aggregate (note that it relies on every group yielding the same number of differences, as is the case here):
result <- aggregate(. ~ group_id, data=df, FUN=diff)
result <- cbind(result[1],lapply(result[-1], as.vector))
result[order(result$group_id),]
# group_id date logical_value integer float
#1 1 2029 1 41 0.21021119
#4 1 260 0 -43 0.22168257
#2 2 1114 0 -22 -0.39957366
#5 2 1604 -1 26 0.23847130
#3 3 3987 0 9 0.09112507
#6 3 3485 1 29 0.47460596
