Transposing rows into columns, then splitting them - R

I have a data set containing data sorted in rows like this:
*VarName1* - *VarValue1*
*VarName2* - *VarValue2*
*Etc.*
I want the VarNames to become individual columns. I have achieved this with the following code:
DFP1 <- as.data.frame(t(DFP)) #DFP contains the data
Now, this is a very big data set: it contains multiple years (millions of rows) of data, and the code above creates a data frame with more than 1e6 columns. I need to split these columns into one row per entry. In the first chunk of data a new entry starts every 86th column, so I tried this:
tmp <- data.frame(
  X = DFP$noFloat,
  ind = rep(1:86, nrow(DFP) / 86)
)
y <- rbind(DFP$nmlVar[1:86], unstack(tmp, X ~ ind))
This works for a few rows. The problem is that the number of variables increased over the years, so I cannot simply assume that the number of variables per entry is the same. This results in variable values being mismatched with their names. I am looking for a way to match values to variables based on the variable names.
I am new to advanced data analysis, so please let me know if you need anything more.
EDIT: I created some sample data showing what DFP looks like, to hopefully make my question clearer:
DFP <- data.frame(
  nmlVar = c("Batch", "Mass", "Length", "Product", "Batch", "Mass",
             "Length", "Product", "Batch", "Mass", "Length", "Width", "Product"),
  noFloat = c(254578, 20, 24, 24547, 254579, 23, 24, 24547, 254580, 20,
              24, 19, 24547)
)
Important to note here is the appearance of the new variable Width in the third recurrence. This introduction of new variables is typical for my data set. The key indicator is Batch: the data should be split each time the variable Batch appears.
dput output of sample data:
structure(list(nmlVar = structure(c(1L, 3L, 2L, 4L, 1L, 3L, 2L,
4L, 1L, 3L, 2L, 5L, 4L), .Label = c("Batch", "Length", "Mass",
"Product", "Width"), class = "factor"), noFloat = c(254578, 20,
24, 24547, 254579, 23, 24, 24547, 254580, 20, 24, 19, 24547)), .Names = c("nmlVar",
"noFloat"), row.names = c(NA, -13L), class = "data.frame")

Is this what you are after?
library(dplyr)
library(tidyr)
DFP %>%
  mutate(sample = cumsum(nmlVar == 'Batch')) %>%
  spread(nmlVar, noFloat)
Gives:
sample Batch Length Mass Product Width
1 1 254578 24 20 24547 NA
2 2 254579 24 23 24547 NA
3 3 254580 24 20 24547 19
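If you are on a newer tidyr (1.0.0 or later), spread() has been superseded by pivot_wider(). A minimal sketch of the same idea, using the sample DFP above:
library(dplyr)
library(tidyr)

DFP %>%
  mutate(sample = cumsum(nmlVar == "Batch")) %>%            # new group each time Batch appears
  pivot_wider(names_from = nmlVar, values_from = noFloat)   # one column per variable name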


How can I apply case_when(mapply (adist, x, y) <= 3 ~ x, TRUE ~ y)) to columns of different length and order

Hi, I have been trying for a while to match two large columns of names; several have different spellings, etc. So far I have written some code to practice on a smaller data set:
examples%>% mutate(new_ID = case_when(mapply (adist, example_1 , example_2) <= 3 ~ example_1, TRUE ~ example_2))
This manages to create a new column containing the name from example_1 if it is within an edit distance of 3. However, it does not give the name from example_2 when that criterion is not met, which I need it to do.
This code also only compares the names that sit in the same row of each column, whereas I need it to work on a data set with two columns of names, one larger than the other, so they can't be put in the same order.
It also needs to skip the NAs in the smaller column of names (which are only there to pad it to the same length as the other one).
Anyone know how to do something like this?
dput(head(examples))
structure(list(. = structure(c(4L, 3L, 2L, 1L, 5L), .Label = c("grarryfieldsred","harroldfrankknight", "sandramaymeres", "sheilaovensnew", "terrifrank"), class = "factor"), example_2 = structure(c(4L, 2L, 3L, 1L,
5L), .Label = c(" grarryfieldsred", "candramymars", "haroldfranrinight",
"sheilowansknew", "terryfrenk"), class = "factor")), row.names = c(NA,
5L), class = "data.frame")
The problem is that your columns have become factors rather than character vectors. When you try to combine two columns with different factor levels, unexpected results can happen.
First convert your columns to character:
library(dplyr)
examples %>%
  mutate(across(contains("example"), as.character)) %>%
  mutate(new_ID = case_when(mapply(adist, example_1, example_2) <= 3 ~ example_1,
                            TRUE ~ example_2))
# example_1 example_2 new_ID
#1 sheilaovensnew sheilowansknew sheilowansknew
#2 sandramaymeres candramymars candramymars
#3 harroldfrankknight haroldfranrinight harroldfrankknight
#4 grarryfieldsred grarryfieldsred grarryfieldsred
#5 terrifrank terryfrenk terrifrank
In your dput output, the name of the first column (example_1) was somehow changed, so I ran this first:
names(examples)[1] <- "example_1"
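The answer above still compares names row by row. For the different-length, different-order part of the question, one option (a sketch only, not part of the answer above) is to compute the full edit-distance matrix with adist() and take the closest candidate for each name. Here short_names and long_names are hypothetical character vectors standing in for the two columns, and the cutoff of 3 mirrors the one used above:
# Hypothetical helper: for each non-NA name in short_names, return the closest
# name from long_names if it is within max_dist edits, otherwise keep the original.
match_names <- function(short_names, long_names, max_dist = 3) {
  short_names <- short_names[!is.na(short_names)]   # skip the NA padding
  d <- adist(short_names, long_names)               # full edit-distance matrix
  best <- apply(d, 1, which.min)                    # index of closest long name per row
  ifelse(d[cbind(seq_along(best), best)] <= max_dist,
         long_names[best],
         short_names)
}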

Create function to count values across list of columns

R folks:
I have a dataframe with many sets of columns. Each set is a bank of survey items. I would like to count the number of columns in each set having a certain value. I wrote a function to do this but it results in a list of repeated values that is appended to my dataframe.
df<- structure(list(RespondentID = c(6764279930, 6779986023, 6760279439,
6759243066),
q1 = c(3L, 3L, 4L, 1L),
q2 = c(2L, 2L, 4L, 4L),
q3 = c(4L, 2L, 4L, 5L),
q0010_0004 = c(1L, 2L, 3L, 1L)),
.Names = c("RespondentID", "q1", "q2", "q3", "q4"),
row.names = c(NA, 4L), class = "data.frame")
group1<-c("q1","q2","q3","q4")
# Objective: Count number of ratings==4 for each row
# Make function that receives list of columns &
# then returns ONE column in dataframe with total # columns
# having certain value (in this case, 4)
countcol <- function(colgroup) {
  s <- subset(df, select = c(colgroup)) # select only the columns designated by list
  s$sum <- Reduce("+", apply(X = s, 1, FUN = function(x) (sum(x == 4, na.rm = TRUE)))) # count instances of value==4
  s2 <- subset(s, select = c(sum)) # return ONE column with result for each row
  return(s2$sum)
}
countcol(group1)
My function, countcol, runs without errors, but as stated above it results in what appears to be a transposed list of results for each row. I would like to have ONE number for each row that indicates the count of values.
I attempted various apply functions here but could not prevail. Anyone have a tip?
Thanks!
rowSums can give you the result you are looking for. This returns the count of ratings == 4 for each row:
rowSums(df[2:5]==4)
#1 2 3 4
#1 0 3 1
Or just part of the function from the OP can give the answer:
apply(df[2:5], 1, function(x)(sum(x==4)))
#1 2 3 4
#1 0 3 1
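If you still want a reusable function that takes a group of column names, as the original countcol did, here is a small sketch built on rowSums; the value to count is an argument, and 4 is just the example from the question:
count_value <- function(data, cols, value = 4) {
  rowSums(data[cols] == value, na.rm = TRUE)  # one count per row across the chosen columns
}

count_value(df, group1)
#1 2 3 4
#1 0 3 1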

Can I use %in% to search and match two columns?

I have a large data frame and a vector to pull out terms of interest. For a previous project I was using:
a=data[data$rn %in% y, "Gene"]
to pull information out into a new vector. Now I have another job I'd like to do.
I have a large data frame of 15 columns and more than 100,000 rows. I want to search columns 3 and 9 for the contents of the vector and return the matching rows as a new data frame.
To make this extra annoying, the hit could be in column 3 and not in column 9, and vice versa.
Working example
I have stripped the data frame down to 3 columns and a few rows.
data <- structure(list(
  Gene = structure(c(1L, 5L, 3L, 2L, 4L),
                   .Label = c("ibp", "leuA", "pLeuDn_02", "repA", "repA1"), class = "factor"),
  LocusTag = structure(c(1L, 2L, 5L, 3L, 4L),
                       .Label = c("pBPS1_01", "pBPS1_02", "pleuBTgp4", "pleuBTgp5", "pLeuDn_02"), class = "factor"),
  hit = structure(c(2L, 4L, 3L, 1L, 5L),
                  .Label = c("2-isopropylmalate synthase", "Ibp protein", "ORF1", "repA1 protein", "replication-associated protein"), class = "factor")),
  .Names = c("Gene", "LocusTag", "hit"), row.names = c(NA, 5L), class = "data.frame")
y <- c("ibp", "orf1")
First of all, R is case sensitive, so your example will not collect the third line, although I guess you want that extracted. You would have to change your y to:
y <- c("ibp", "ORF1")
From your example I am not entirely sure this is really what you want to achieve, but R knows the operator | as "or", so you could try something like:
new.data<-data[data$Gene %in% y|data$hit %in% y,]
If you only want to extract certain columns of your data set, you can specify them after the ",", e.g.:
new.data<-data[data$Gene %in% y|data$hit %in% y, c("LocusTag","Gene")]
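If you would rather not edit y by hand, the same idea can be made case-insensitive by lower-casing both sides before comparing; a small sketch using the data and y from the question:
y <- c("ibp", "orf1")

# tolower() coerces the factor columns to character, so "orf1" matches "ORF1"
new.data <- data[tolower(data$Gene) %in% tolower(y) | tolower(data$hit) %in% tolower(y), ]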

Conditional Seasonal Averaging Time-Series Data

Introduction
Summary:
Trying to average data by season (when necessary) when certain conditions are met.
Hello everyone.
I am currently working with numerous large data sets (>200 sets with >5000 rows each) of long-term time series data collection for multiple variables across different locations. So far, I've extracted data into separate CSV files per site and per station.
For the most part, the data reported per parameter is one instance per season.
Season here is defined ecologically as DJF, MAM, JJA, SON for months corresponding to Winter, Spring, Summer, and Fall respectively.
However, there are some cases where there were multiple readings during a seasonal event. Here, the parameter values and dates have to be averaged; this is before further analysis can take place on these data sets.
To complicate things even further, some of the data is marked by a Greater Than or Less Than (GTLT) symbol. In these cases, values and dates are not averaged unless the recorded value is the same.
Data Example
Summary:
Code and tables show the requested changes in the data set.
So, for a data-driven example...
Here are a few rows from a data set.
Data.Example<-structure(list(
Station.ID = c(13402, 13402, 13402, 13402, 13402, 13402),
End.Date = structure(c(2L, 3L, 4L, 2L, 3L, 1L), .Label = c("10/13/2016", "7/13/2016", "8/13/2016", "8/15/2016"), class = "factor"),
Parameter.Name = structure(c(2L, 2L, 2L, 1L, 1L, 1L), .Label = c("Alkalinity", "Enterococci"), class = "factor"),
GTLT = structure(c(2L, 2L, 2L, 1L, 1L, 1L), .Label = c("", "<"), class = "factor"),
Value = c(10, 10, 20, 30, 15, 10)),
.Names = c("Station.ID", "End.Date", "Parameter.Name","GTLT", "Value"), row.names = c(NA, -6L), class = "data.frame")
This is ideally what I would like as output
Data.Example.New<-structure(list(
Station.ID.new = c(13402, 13402, 13402, 13402),
End.Date.new = structure(c(2L, 3L, 2L, 1L), .Label = c("10/13/2016", "7/28/2016", "8/15/2016"), class = "factor"),
Parameter.Name.new = structure(c(2L, 2L, 1L, 1L), .Label = c("Alkalinity", "Enterococci"), class = "factor"),
GTLT.new = structure(c(2L, 2L, 1L, 1L), .Label = c("", "<"), class = "factor"),
Value.new = c(10, 20, 22.5, 10)),
.Names = c("Station.ID.new", "End.Date.new", "Parameter.Name.new", "GTLT.new", "Value.new"), row.names = c(NA, -4L), class = "data.frame")
Here, the following things are occurring:
For Enterococci measured on July 13 and Aug 13, there is a GTLT symbol, but Value for both == 10, so the dates are averaged. The new row has End.Date 7/28/2016 and Value 10.
While Enterococci on Aug 15 is within the same season as the other values, its Value differs, so it would only be averaged with other GTLT rows of the same season and year whose Value is also 20. In this case, since it is the only row where Value == 20, that row does not change and is carried over to the final data frame.
Alkalinity in July and August falls in the same season, so the dates (7/28/16) and Values (22.5) are averaged into a new row.
Alkalinity in October is in a different season, so that row is kept.
All other data (such as Station.ID and Parameter.Name) should just be copied since they shouldn't differ here.
If for some reason you have a GTLT and a non-GTLT row for the same parameter:
End.Date GTLT Value Parameter
7/13/2015 < 10 Alk
7/13/2016 < 10 Alk
8/13/2016 10 Alk
8/15/2016 20 Alk
Then final result would be
End.Date GTLT Value Parameter
7/13/2015 < 10 Alk
7/13/2016 < 10 Alk
8/14/2016 15 Alk
Approach
Summary:
Define seasons and then aggregate using package like dplyr?
Create loop function to read row by row (after sort by Parameter.Name then Date?)
As one might expect, this is where I'm stuck.
I know seasons can be defined in R from prior Stack Q's:
New vector of seasons based on dates
And I know that average/aggregation packages such as dplyr (and possibly zoo?) can do chaining commands.
My issue is putting this thought process into code that can be repeated for each data set.
I'm not sure if that's the best approach (define seasons and then set conditions for averaging data), or if some sort of loop function would work here by going through row by row of the data set post-sort by Parameter.Name then End.Date.
I quickly sketched my thoughts on what some sort of loop function would have to include:
Rough idea of flow diagram
Note, you can't just average rows [i] and [i+1], because [i+2], etc. might need to be averaged as well. Hence the idea of finding the row [i+n] that breaks the loop, averaging all prior rows up to [i+n-1], and moving on to the next new row [i+n].
Further, as clarification, the season would have to be within the same annual cycle. So 7/13/2016 and 8/13/2016 are the same season, and 12/12/2015 and 01/01/2016 are the same season, but 4/13/2016 and 4/13/2015 are not averaged together.
Conclusion and Summary
In short, I need help designing code to average individual parameter time-series values by annual season with specific exceptions for multiple large data sets.
I'm not sure of the best approach in designing code to do this, whether it's a large loop function or a combination of code and specialized chaining-enabled packages.
Thank you for your time in advance.
Cheers,
soccernamlak
Using dplyr and lubridate I was able to come up with a solution. My output matches your example output, except I did not keep the exact dates, which I felt were misleading in the final result.
Data.Example<-structure(list(
Station.ID = c(13402, 13402, 13402, 13402, 13402, 13402),
End.Date = structure(c(2L, 3L, 4L, 2L, 3L, 1L), .Label = c("10/13/2016", "7/13/2016", "8/13/2016", "8/15/2016"), class = "factor"),
Parameter.Name = structure(c(2L, 2L, 2L, 1L, 1L, 1L), .Label = c("Alkalinity", "Enterococci"), class = "factor"),
GTLT = structure(c(2L, 2L, 2L, 1L, 1L, 1L), .Label = c("", "<"), class = "factor"),
Value = c(10, 10, 20, 30, 15, 10)),
.Names = c("Station.ID", "End.Date", "Parameter.Name","GTLT", "Value"), row.names = c(NA, -6L), class = "data.frame")
library(dplyr)

# Create season key
seasons <- data.frame(month = 1:12,
                      season = c(rep("DJF", 2), rep("MAM", 3), rep("JJA", 3), rep("SON", 3), "DJF"))
# Isolate Month and Year, create Season column
Data.Example$Month <- lubridate::month(as.Date((Data.Example$End.Date), "%m/%d/%Y"))
Data.Example$Year <- lubridate::year(as.Date((Data.Example$End.Date), "%m/%d/%Y"))
Data.Example$Season <- seasons$season[Data.Example$Month]
# Update 'year' where month = December so that it is grouped with Jan and Feb of following year
Data.Example$Year[Data.Example$Month == 12] <- Data.Example$Year[Data.Example$Month == 12]+1
# Find out which station/year/season/parameter groups have at least one record with a GTLT
GTLT.Test <- Data.Example %>%
  group_by(Station.ID, Year, Season, Parameter.Name) %>%
  summarize(has_GTLT = max(nchar(as.character(GTLT))))
# First only calculate averages for groups without any GTLT
Data.Example.New1 <- Data.Example %>%
  anti_join(GTLT.Test[GTLT.Test$has_GTLT == 1, ],
            by = c("Station.ID", "Year", "Season", "Parameter.Name")) %>%
  group_by(Station.ID, Year, Season, Parameter.Name, GTLT) %>%
  summarize(Value.new = mean(Value))
# Now do the same for groups with GTLT, only combining when values and GTLT symbols match.
Data.Example.New2 <- Data.Example %>%
  anti_join(GTLT.Test[GTLT.Test$has_GTLT == 0, ],
            by = c("Station.ID", "Year", "Season", "Parameter.Name")) %>%
  group_by(Station.ID, Year, Season, Parameter.Name, GTLT, Value) %>%
  summarize(Value.new = mean(Value)) %>%
  select(-Value)
# Combine both
Data.Example.New <- rbind(Data.Example.New1, Data.Example.New2)
EDIT: I just noticed you linked to another SO question for converting dates to seasons. Mine simply converts by month, not date, and does not use actual seasons. I did this because in your example, Dec. 12 matches with Jan. 1. December 12 is technically fall, so I assumed you weren't using actual seasons, but were instead using four three-month groupings.
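If you do want an averaged End.Date in the output, as in the example Data.Example.New, one option is to convert the dates before grouping and average them inside summarize(). A rough sketch of how the first summarize above could change, reusing the same objects (mean() works directly on Date vectors):
Data.Example.New1 <- Data.Example %>%
  mutate(End.Date = as.Date(End.Date, "%m/%d/%Y")) %>%
  anti_join(GTLT.Test[GTLT.Test$has_GTLT == 1, ],
            by = c("Station.ID", "Year", "Season", "Parameter.Name")) %>%
  group_by(Station.ID, Year, Season, Parameter.Name, GTLT) %>%
  summarize(End.Date.new = mean(End.Date),  # e.g. 7/13 and 8/13 average to late July
            Value.new = mean(Value))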

Using "apply" functions across multiple data frames

I'm having an issue using apply functions (which I assume is the right way to do the following) across multiple data frames.
Some example data (3 different data frames, but the problem I'm working on has upwards of 50):
biz <- data.frame(
  country = c("england", "canada", "australia", "usa"),
  businesses = sample(1000:2500, 4))
pop <- data.frame(
  country = c("england", "canada", "australia", "usa"),
  population = sample(10000:20000, 4))
restaurants <- data.frame(
  country = c("england", "canada", "australia", "usa"),
  restaurants = sample(500:1000, 4))
Here's what I ultimately want to do:
1) Sort each data frame from largest to smallest, according to the variable it contains
dataframe <- dataframe[order(dataframe$VARIABLE, decreasing = TRUE), ]
2) then create a vector variable that gives me the rank for each
dataframe$rank <- 1:nrow(dataframe)
3) Then create another data frame that has one column of the countries and the rank for each of the variables of interest as other columns. Something that would look like (rankings aren't real here):
country.rankings <- structure(list(country = structure(c(5L, 1L, 6L, 2L, 3L, 4L), .Label = c("brazil",
"canada", "england", "france", "ghana", "usa"), class = "factor"),
restaurants = 1:6, businesses = c(4L, 5L, 6L, 3L, 2L, 1L),
population = c(4L, 6L, 3L, 2L, 5L, 1L)), .Names = c("country",
"restaurants", "businesses", "population"), class = "data.frame", row.names = c(NA,
-6L))
So I'm guessing there's a way to put each of these data frames together into a list, something like:
lib <- c(biz, pop, restaurants)
And then do an lapply across that to 1) sort, 2) create the rank variable, and 3) create the matrix or data frame of rankings for each variable (# of businesses, population size, # of restaurants) for each country. The problem I'm running into is that writing the lapply function to sort each data frame runs into issues when I try to order by the variable:
sort <- lapply(lib,
function(x){
x <- x[order(x[,2]),]
})
returns the error message:
Error in `[.default`(x, , 2) : incorrect number of dimensions
because c() has flattened the data frames into a single list of columns, so each element x is a plain vector with no dimensions. But how else would I tackle this problem when the variable names are different for every data frame (keeping in mind that the country names are consistent)?
(I would also love to know how to do this using plyr.)
Ideally I'd recommend data.table for this. However, here is a quick solution using data.frame.
Try this:
Step1: Create a list of all data.frames
varList <- list(biz,pop,restaurants)
Step2: Combine all of them in one data.frame
temp <- varList[[1]]
for(i in 2:length(varList)) temp <- merge(temp,varList[[i]],by = "country")
Step3: Get ranks:
cbind(temp,apply(temp[,-1],2,rank))
You can remove the undesired columns if you want!!
cbind(temp[,1:2],apply(temp[,-1],2,rank))[,-2]
Hope this helps!!
totaldatasets <- c('biz', 'pop', 'restaurants')
totaldatasetslist <- vector(mode = "list", length = length(totaldatasets))
for (i in seq(length(totaldatasets))) {
  totaldatasetslist[[i]] <- get(totaldatasets[i])
}
totaldatasetslist2 <- lapply(
  totaldatasetslist,
  function(x) {
    temp <- data.frame(
      country = x[, 1],
      countryrank = rank(x[, 2])
    )
    colnames(temp) <- c('country', colnames(x)[2])
    return(temp)
  }
)
Reduce(merge, totaldatasetslist2)
Output -
country businesses population restaurants
1 australia 3 3 3
2 canada 2 2 2
3 england 1 1 1
4 usa 4 4 4
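Since the question also asks about a plyr-style approach, here is a compact sketch using dplyr (plyr's successor) together with purrr. It assumes dplyr 1.0+ for across() and follows step 1 of the question, where rank 1 means the largest value:
library(dplyr)
library(purrr)

country.rankings <- list(biz, pop, restaurants) %>%
  reduce(full_join, by = "country") %>%         # one row per country, one column per variable
  mutate(across(-country, ~ rank(-.x)))         # rank 1 = largest value in each column

country.rankings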
