I'm trying to create a line plot in R, showing lines for different places over time.
My data is in a table with Year in the first column and the places England, Scotland, Wales and NI as separate columns:
Year England Scotland Wales NI
1 2006/07 NA 411 188 111
2 2007/08 NA 415 193 112
3 2008/09 NA 424 194 114
4 2009/10 NA 429 194 115
5 2010/11 NA 428 199 116
6 2011/12 NA 428 200 116
7 2012/13 NA 425 199 117
8 2013/14 NA 427 202 117
9 2014/15 NA 431 200 121
10 2015/16 3556 432 199 126
11 2016/17 3436 431 200 129
12 2017/18 3467 NA NA NA
I'm using ggplot, and can get a lineplot for any of the places, but I'm having difficulty getting lines for all the places on the same plot.
It seems like this might work if I had the places in a column as well (instead of across the top), since then I could set y in the code below to that column rather than to a specific place's column. But that seems a bit convoluted, and as I have lots of data in the existing format, I'm hoping there's either a way to do this with the format I have or a quick way of transforming it.
ggplot(data=mysheets$sheet1, aes(x=Year, y=England, group=1)) +
geom_line()+
geom_point()
From what I can tell, I'll need to reshape my data (into long form?), but I haven't found a way to do that, since I don't have a column for places (i.e., I have a column for each place, but nothing in the table says these columns are all places and the same kind of thing).
I've also tried transposing my data, so the places are down the side and the years are along the top, but R still has its own headers for the columns - I guess another option might be if it was possible to have the years as headers and have that recognised by R?
As you said, you have to convert to long format to get the most out of ggplot2.
library(ggplot2)
library(dplyr)
mydata_raw <- read.table(
text = "
Year England Scotland Wales NI
1 2006/07 NA 411 188 111
2 2007/08 NA 415 193 112
3 2008/09 NA 424 194 114
4 2009/10 NA 429 194 115
5 2010/11 NA 428 199 116
6 2011/12 NA 428 200 116
7 2012/13 NA 425 199 117
8 2013/14 NA 427 202 117
9 2014/15 NA 431 200 121
10 2015/16 3556 432 199 126
11 2016/17 3436 431 200 129
12 2017/18 3467 NA NA NA"
)
# long format
mydata <- mydata_raw %>%
tidyr::gather(country, value, England:NI) %>%
dplyr::mutate(Year = as.numeric(substring(Year, 1, 4))) # convert to numeric date
ggplot(mydata, aes(x = Year, y = value, color = country)) +
geom_line() +
geom_point()
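As a side note, since tidyr 1.0 gather() has been superseded by pivot_longer(). Assuming the mydata_raw object from above, an equivalent reshape would be:

```r
library(tidyr)
library(dplyr)

# pivot_longer() is gather()'s successor: same long format, clearer arguments
mydata <- mydata_raw %>%
  pivot_longer(England:NI, names_to = "country", values_to = "value") %>%
  mutate(Year = as.numeric(substring(Year, 1, 4)))
```

The ggplot() call then stays exactly the same.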
I have a large dataset: 4,666,972 obs. of 5 variables.
I want to impute one column, MPR, with the Kalman method within each group.
> str(dt)
Classes ‘data.table’ and 'data.frame': 4666972 obs. of 5 variables:
$ Year : int 1999 2000 2001 1999 2000 2001 1999 2000 2001 1999 ...
$ State: int 1 1 1 1 1 1 1 1 1 1 ...
$ CC : int 1 1 1 1 1 1 1 1 1 1 ...
$ ID : chr "1" "1" "1" "2" ...
$ MPR : num 54 54 55 52 52 53 60 60 65 70 ...
I tried the code below but it crashed after a while.
> library(imputeTS)
> data.table::setDT(dt)[, MPR_kalman := with(dt, ave(MPR, State, CC, ID, FUN=na_kalman))]
I don't know how to improve the time efficiency and impute successfully without crashing.
Would it be better to split the dataset by ID into a list and impute each element with a for loop?
> length(unique(hpms_S3$Section_ID))
[1] 668184
> split(dt, dt$ID)
However, I don't think this will save much memory or avoid crashes: after splitting the dataset into 668,184 list elements, I would need to impute each one and then combine them back into a single dataset at the end.
Is there a better way to do this, or how can I optimize my code?
I provide the simple sample here:
# dt
Year State CC ID MPR
2002 15 3 3 NA
2003 15 3 3 NA
2004 15 3 3 193
2005 15 3 3 193
2006 15 3 3 348
2007 15 3 3 388
2008 15 3 3 388
1999 53 33 1 NA
2000 53 33 1 NA
2002 53 33 1 NA
2003 53 33 1 NA
2004 53 33 1 NA
2005 53 33 1 170
2006 53 33 1 170
2007 53 33 1 330
2008 53 33 1 330
EDIT:
As #r2evans mentioned in a comment, I modified the code:
> setDT(dt)[, MPR_kalman := ave(MPR, State, CC, ID, FUN=na_kalman), by = .(State, CC, ID)]
Error in optim(init[mask], getLike, method = "L-BFGS-B", lower = rep(0, :
L-BFGS-B needs finite values of 'fn'
I got the error above. I found the post here with discussions of this error. However, even when I use na_kalman(MPR, type = 'level'), I still get the error. I think there might be some repeated values within groups that produce the error.
Splitting is better done with data.table's by= operator, which is likely more efficient.
Since I don't have imputeTS installed (there are several nested dependencies I don't have), I'll fake imputation using zoo::na.locf, both forwards and backwards. I'm not suggesting this be your imputation mechanism; I'm using it to demonstrate a more common pattern with data.table.
myimpute <- function(z) zoo::na.locf(zoo::na.locf(z, na.rm = FALSE), fromLast = TRUE, na.rm = FALSE)
Now some equivalent calls: the first uses your with(dt, ...), and the rest are my alternatives (really a walk-through leading to my ultimate suggestion, number 5):
dt[, MPR_kalman1 := with(dt, ave(MPR, State, CC, ID, FUN = myimpute))]
dt[, MPR_kalman2 := with(.SD, ave(MPR, State, CC, ID, FUN = myimpute))]
dt[, MPR_kalman3 := with(.SD, ave(MPR, FUN = myimpute)), by = .(State, CC, ID)]
dt[, MPR_kalman4 := ave(MPR, FUN = myimpute), by = .(State, CC, ID)]
dt[, MPR_kalman5 := myimpute(MPR), by = .(State, CC, ID)]
# Year State CC ID MPR MPR_kalman1 MPR_kalman2 MPR_kalman3 MPR_kalman4 MPR_kalman5
# 1: 2002 15 3 3 NA 193 193 193 193 193
# 2: 2003 15 3 3 NA 193 193 193 193 193
# 3: 2004 15 3 3 193 193 193 193 193 193
# 4: 2005 15 3 3 193 193 193 193 193 193
# 5: 2006 15 3 3 348 348 348 348 348 348
# 6: 2007 15 3 3 388 388 388 388 388 388
# 7: 2008 15 3 3 388 388 388 388 388 388
# 8: 1999 53 33 1 NA 170 170 170 170 170
# 9: 2000 53 33 1 NA 170 170 170 170 170
# 10: 2002 53 33 1 NA 170 170 170 170 170
# 11: 2003 53 33 1 NA 170 170 170 170 170
# 12: 2004 53 33 1 NA 170 170 170 170 170
# 13: 2005 53 33 1 170 170 170 170 170 170
# 14: 2006 53 33 1 170 170 170 170 170 170
# 15: 2007 53 33 1 330 330 330 330 330 330
# 16: 2008 53 33 1 330 330 330 330 330 330
All five methods produce the same results, but the last one preserves many of the memory efficiencies that can make data.table preferred.
The use of with(dt, ...) is an anti-pattern in one case and a strong risk in another. For the "risk" part, realize that data.table can do a lot of things behind the scenes so that the calculations/function calls in the j= component (second argument) see only the data that is relevant. A clear example is grouping; another (unrelated to this) data.table example is conditional replacement, as in dt[is.na(x), x := -1]. With a reference to the entire table dt inside j=, if there is ever a condition in the first argument (as in that conditional replacement) or a by= argument, then it breaks: the expression sees the whole table instead of the relevant subset.
MPR_kalman2 mitigates this by using .SD, which is data.table's way of replacing the data-to-be-used with the "Subset of the Data" (ref). But it's still not taking advantage of data.table's significant efficiencies in dealing in-memory with groups.
MPR_kalman3 improves on this by moving the grouping out into by=, still using with, but (unlike 2) in a friendlier way.
MPR_kalman4 removes the use of with, since really the MPR visible to ave is only within each group anyway. And then when you think about it, since ave is given no grouping variables, it really just passes all of the MPR data straight-through to myimpute. From this, we have MPR_kalman5, a direct method that is along the normal patterns of data.table.
While I don't know that it will mitigate your crashing, it is intended very much to be memory-efficient (in data.table's ways).
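If you do return to imputeTS, the same direct grouped pattern applies. The sketch below is a guess at your L-BFGS-B error (I haven't run it, for the same dependency reason as above): it assumes the error comes from groups with too few non-NA values or constant series, and leaves those groups untouched. na_kalman needs at least three non-NA observations, and the uniqueness check is an extra precaution.

```r
library(data.table)
library(imputeTS)

setDT(dt)
# impute per group, but skip groups that are too sparse or constant
# (the >= 3 / > 1 thresholds are assumptions, not documented limits)
dt[, MPR_kalman := if (sum(!is.na(MPR)) >= 3 && uniqueN(MPR, na.rm = TRUE) > 1)
                     na_kalman(MPR)
                   else MPR,
   by = .(State, CC, ID)]
```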
My data (crsp.daily) look roughly like this (the numbers are made up and there are more variables):
PERMCO PERMNO date price VOL SHROUT
103 201 19951006 8.8 100 823
103 203 19951006 7.9 200 1002
1004 10 19951006 5 277 398
2 5 19951110 5.3 NA 579
1003 2 19970303 10 67 NA
1003 1 19970303 11 77 1569
1003 20 19970401 6.7 NA NA
I want to sum VOL and SHROUT by groups defined by PERMCO and date, but leaving the original number of rows unchanged, thus my desired output is the following:
PERMCO PERMNO date price VOL SHROUT VOL.sum SHROUT.sum
103 201 19951006 8.8 100 823 300 1825
103 203 19951006 7.9 200 1002 300 1825
1004 10 19951006 5 277 398 277 398
2 5 19951110 5.3 NA 579 NA 579
1003 2 19970303 10 67 NA 144 1569
1003 1 19970303 11 77 1569 144 1569
1003 20 19970401 6.7 NA NA NA NA
My data have more than 45 million observations and 8 columns. I have tried using ave:
crsp.daily$VOL.sum=ave(crsp.daily$VOL,c("PERMCO","date"),FUN=sum)
or sapply:
crsp.daily$VOL.sum=sapply(crsp.daily[,"VOL"],ave,crsp.daily$PERMCO,crsp.daily$date)
The problem is that it takes an extremely long time (more than 30 minutes, and I still did not see a result). Another thing I tried was to create a variable called "group" by pasting PERMCO and date like this:
crsp.daily$group=paste0(crsp.daily$PERMCO,crsp.daily$date)
and then apply ave using crsp.daily$group as the grouping. This also did not work: from a certain observation on, R no longer distinguished the different values of crsp.daily$group and treated them as a single group.
The solution of creating the variable "groups" worked on a smaller dataset.
Any advice is greatly appreciated!
With data.table you could use the following code:
require(data.table)
dt <- as.data.table(crsp.daily)
dt[, VOL.sum := sum(VOL), by = list(PERMCO, date)]
With the := operator you create a new variable (VOL.sum), grouped by PERMCO and date.
Output
permco permno date price vol shrout vol.sum
1 103 201 19951006 8.8 100 823 300
2 103 203 19951006 7.9 200 1002 300
3 1004 10 19951006 5.0 277 398 277
4 2 5 19951110 5.3 NA 579 NA
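Since the desired output also has SHROUT.sum, and keeps NA where a group is entirely NA, both columns can be created in one grouped call with a small helper (plain sum(..., na.rm = TRUE) would turn all-NA groups into 0 instead of NA):

```r
library(data.table)

# sum ignoring NAs, but keep NA when the whole group is NA
sum_na <- function(x) if (all(is.na(x))) NA_real_ else sum(x, na.rm = TRUE)

dt <- as.data.table(crsp.daily)
dt[, c("VOL.sum", "SHROUT.sum") := lapply(.SD, sum_na),
   by = .(PERMCO, date), .SDcols = c("VOL", "SHROUT")]
```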
I was searching for an answer to my specific problem, but I didn't find a conclusion. I found this: Add column to Data Frame based on values of other columns, but it wasn't exactly what I need in my specific case.
I'm really a beginner in R, so I hope someone can help me or has a good hint for me.
Here an example of what my data frame looks like:
ID answer 1.partnerID
125 3 715
235 4 845
370 7 985
560 1 950
715 5 235
950 5 560
845 6 370
985 6 125
I'll try to describe what I want to do with an example:
The first row holds the data of the person with ID 125. The first partner of this person is the person with ID 715. I want to create a new column containing the answer of each person's partner. It should look like this:
ID answer 1.partnerID 1.partneranswer
125 3 715 5
235 4 845 6
370 7 985 6
560 1 950 5
715 5 235 4
950 5 560 1
845 6 370 7
985 6 125 3
So R should take the value of the column 1.partnerID, which in this case is "715", and search for the row where "715" is the value in the column ID (no ID appears more than once).
From that specific row, R should take the value from the column answer (in this example the "5") and put it into the new column "1.partneranswer", but in the row of person 125.
I hope someone can understand what I want to do ...
My problem is that I could imagine writing this for each row by hand, but there must be an easier way to do it for all rows at once? (Especially because my original data.frame has 5 partners per person and more than one column from which values should be transferred, so writing it for each single row by hand would take many hours.)
I hope someone can help.
Thank you!
One solution is to use apply as follows:
df$partneranswer <- apply(df, 1, function(x) df$answer[df$ID == x[3]])
Output will be as desired above. There may be a loop-less approach.
EDIT: Adding a loop-less (vectorized answer) using match:
df$partneranswer <- df$answer[match(df$X1.partnerID, df$ID)]
df
ID answer X1.partnerID partneranswer
1 125 3 715 5
2 235 4 845 6
3 370 7 985 6
4 560 1 950 5
5 715 5 235 4
6 950 5 560 1
7 845 6 370 7
8 985 6 125 3
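Since the question mentions 5 partners per person, the same match() idea extends with a loop over the partner columns. The names X1.partnerID through X5.partnerID below are assumptions modelled on the example column; adjust them to the real names in your data:

```r
# hypothetical column names, following the X1.partnerID pattern above
partner_cols <- paste0("X", 1:5, ".partnerID")

for (i in seq_along(partner_cols)) {
  idx <- match(df[[partner_cols[i]]], df$ID)               # row index of each partner
  df[[paste0("X", i, ".partneranswer")]] <- df$answer[idx] # pull that partner's answer
}
```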
Update: This can be done with a self join. The first two columns define a mapping from ID to answer; to find the answers for the partner IDs, you can merge the data frame with itself, with the first copy keyed on partnerID and the second keyed on ID:
Suppose df is (fixed the column names a little bit):
df
# ID answer partnerID
#1 125 3 715
#2 235 4 845
#3 370 7 985
#4 560 1 950
#5 715 5 235
#6 950 5 560
#7 845 6 370
#8 985 6 125
merge(df, df[c('ID', 'answer')], by.x = "partnerID", by.y = "ID")
# partnerID ID answer.x answer.y
#1 125 985 6 3
#2 235 715 5 4
#3 370 845 6 7
#4 560 950 5 1
#5 715 125 3 5
#6 845 235 4 6
#7 950 560 1 5
#8 985 370 7 6
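If you prefer to keep the original row order and column names (merge reorders the rows and renames the two answer columns), a dplyr self join does the same lookup:

```r
library(dplyr)

# join each row's partnerID against the ID/answer lookup built from the same table
df <- df %>%
  left_join(select(df, ID, partneranswer = answer),
            by = c("partnerID" = "ID"))
```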
Old answer:
If ID and partnerID are mapped one-to-one, you can try:
df$partneranswer <- with(df, answer[sapply(X1.partnerID, function(partnerID) which(ID == partnerID))])
df
# ID answer X1.partnerID partneranswer
#1 125 3 715 5
#2 235 4 845 6
#3 370 7 985 6
#4 560 1 950 5
#5 715 5 235 4
#6 950 5 560 1
#7 845 6 370 7
#8 985 6 125 3
I have a dataframe as follows:
year month increment
113 6 464
113 7 132
113 8 165
113 9 43
113 10 658
113 11 54
113 12 463
114 1 231
114 2 21
Despite being ordered as indicated, when I plot increment ~ factor(month), the resulting x axis starts from month 1 instead of starting with month 6 as the dataframe does:
qplot(month,data=monthly,fill=treatment,weight=increment,position="dodge")
What should I do to make the x axis respect the month order I need?
Something like this, perhaps:
qplot(interaction(year, month, lex.order=TRUE), data=monthly, fill=treatment,weight=increment,position="dodge")
Removing the fill=treatment argument (as I do not have the data) results in this:
qplot(interaction(year, month, lex.order=TRUE), data=monthly, weight=increment,position="dodge")
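Alternatively, if you want the x axis to follow the row order of the dataframe itself, convert month to a factor whose levels are set in order of appearance (a sketch assuming the monthly dataframe from the question):

```r
# levels in order of first appearance: 6, 7, ..., 12, 1, 2
monthly$month <- factor(monthly$month, levels = unique(monthly$month))

qplot(month, data = monthly, fill = treatment, weight = increment,
      position = "dodge")
```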
I have a table produced by calling table(...) on a column of data, and I get a table that looks like:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
346 351 341 333 345 415 421 425 429 437 436 469 379 424 387 419 392 396 381 421
I'd like to draw a boxplot of these frequencies, but calling boxplot on the table results in an error:
Error in Axis.table(x = c(333, 368.5, 409.5, 427, 469), side = 2) :
only for 1-D table
I've tried coercing the table to an array with as.array but it seems to make no difference. What am I doing wrong?
If I understand you correctly, boxplot(c(tab)) or boxplot(as.vector(tab)) should work (credit to #joran as well).
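For completeness, a minimal sketch with made-up data: table() returns an object of class "table", and c() (or as.vector()) strips that class, leaving the plain vector of frequencies that boxplot() expects:

```r
set.seed(42)
x <- sample(0:19, 8000, replace = TRUE)   # made-up data
tab <- table(x)                           # has class "table"
freqs <- c(tab)                           # plain named integer vector
boxplot(freqs)                            # plots the distribution of the counts
```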