I have a dataset (df1) on hundreds of national crises, where each observation is a crisis event at the country level with a start and an end year. I also have the date when the crisis was announced (yyyy-mm-dd format), and a bunch of other crisis characteristics.
df1 <- data.frame(cbind(
  eventID      = c(1, 2, 3, 4),
  country      = c("ALB", "ALB", "ARG", "ARG"),
  start        = c(1994, 1998, 1998, 1991),
  end          = c(1996, 1999, 1999, 1993),
  announcement = c("1994-11-01", "1998-03-01", "1998-07-01", "1992-01-01"),
  x1           = c(6, 2, 8, 7),
  x2           = c("a", "q", "k", "b")
))
eventID country start end announcement x1 x2
1 ALB 1994 1996 1994-11-01 6 a
2 ALB 1998 1999 1998-03-01 2 q
3 ARG 1998 1999 1998-07-01 8 k
4 ARG 1991 1993 1992-01-01 7 b
I need to make df2, a panel of countries with annual observations from the earliest "start" year to the latest "end" year. I want to have a dummy variable, "crisis", that equals 1 for the years between "start" and "end" in df1, and 0 otherwise. I want "announcement" to contain the announcement date in df1 for the year with an announcement, and "NA" otherwise. I would like the extra crisis characteristics, x1 and x2, to show up for crisis years to which they correspond, and "NA" otherwise.
I also need observations for each country for years in which no country has a crisis (in df2: 1997).
df2 <- data.frame(cbind(
  year         = rep(1991:1999, 2),
  country      = c(rep("ALB", 9), rep("ARG", 9)),
  crisis       = c(0, 0, 0, 1, 1, 1, 0, 1, 1,
                   1, 1, 1, 0, 0, 0, 0, 1, 1),
  announcement = c(NA, NA, NA, "1994-11-01", NA, NA, NA, "1998-03-01", NA,
                   NA, "1992-01-01", NA, NA, NA, NA, NA, "1998-07-01", NA),
  x1           = c(NA, NA, NA, 6, 6, 6, NA, 2, 2,
                   7, 7, 7, NA, NA, NA, NA, 8, 8),
  x2           = c(NA, NA, NA, "a", "a", "a", NA, "q", "q",
                   "b", "b", "b", NA, NA, NA, NA, "k", "k")
))
year country crisis announcement x1 x2
1991 ALB 0 NA NA NA
1992 ALB 0 NA NA NA
1993 ALB 0 NA NA NA
1994 ALB 1 1994-11-01 6 a
1995 ALB 1 NA 6 a
1996 ALB 1 NA 6 a
1997 ALB 0 NA NA NA
1998 ALB 1 1998-03-01 2 q
1999 ALB 1 NA 2 q
1991 ARG 1 NA 7 b
1992 ARG 1 1992-01-01 7 b
1993 ARG 1 NA 7 b
1994 ARG 0 NA NA NA
1995 ARG 0 NA NA NA
1996 ARG 0 NA NA NA
1997 ARG 0 NA NA NA
1998 ARG 1 1998-07-01 8 k
1999 ARG 1 NA 8 k
I would love any suggestions! I'm stumped as to how to replicate the observations for each year but only include the x1 and x2 values when my new "crisis" dummy equals 1.
Thanks!
Making use of dplyr and tidyr, this could be achieved like so:
library(dplyr)
library(tidyr)
df1 %>%
  # make "year" a factor spanning the full range of years, so complete() fills the gaps
  mutate(year = factor(start, levels = min(start):max(end))) %>%
  complete(year, country) %>%
  mutate(year = as.numeric(as.character(year))) %>%
  arrange(country, year) %>%
  group_by(country) %>%
  # carry each event's values forward through its crisis years
  fill(eventID, end, x1, x2) %>%
  ungroup() %>%
  # blank out values carried past an event's end year, then flag crisis years
  mutate(across(c(eventID, end, x1, x2), ~ ifelse(end < year, NA, .)),
         crisis = as.numeric(!is.na(eventID)))
#> # A tibble: 18 x 9
#> year country eventID start end announcement x1 x2 crisis
#> <dbl> <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl>
#> 1 1991 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 2 1992 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 3 1993 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 4 1994 ALB 1 1994 1996 1994-11-01 6 a 1
#> 5 1995 ALB 1 <NA> 1996 <NA> 6 a 1
#> 6 1996 ALB 1 <NA> 1996 <NA> 6 a 1
#> 7 1997 ALB <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 8 1998 ALB 2 1998 1999 1998-03-01 2 q 1
#> 9 1999 ALB 2 <NA> 1999 <NA> 2 q 1
#> 10 1991 ARG 4 1991 1993 1992-01-01 7 b 1
#> 11 1992 ARG 4 <NA> 1993 <NA> 7 b 1
#> 12 1993 ARG 4 <NA> 1993 <NA> 7 b 1
#> 13 1994 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 14 1995 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 15 1996 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 16 1997 ARG <NA> <NA> <NA> <NA> <NA> <NA> 0
#> 17 1998 ARG 3 1998 1999 1998-07-01 8 k 1
#> 18 1999 ARG 3 <NA> 1999 <NA> 8 k 1
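A different way to think about the same reshaping (a sketch, not part of the answer above): expand each crisis event into one row per year it covers, then join those rows onto the full country-year grid. It assumes df1 as defined in the question (all columns character because of the cbind()), tidyr >= 1.0 for expand_grid(), and the object name df2_alt is just illustrative.
library(dplyr)
library(tidyr)
library(purrr)

# one row per crisis-year, keeping the announcement only in its own year
events <- df1 %>%
  mutate(across(c(start, end, x1), as.numeric)) %>%
  mutate(year = map2(start, end, seq)) %>%
  unnest(year) %>%
  mutate(announcement = if_else(substr(announcement, 1, 4) == as.character(year),
                                announcement, NA_character_),
         crisis = 1)

# full country-year grid, including years with no crisis anywhere (e.g. 1997)
panel <- expand_grid(country = unique(df1$country),
                     year    = min(events$year):max(events$year))

df2_alt <- panel %>%
  left_join(events, by = c("country", "year")) %>%
  mutate(crisis = replace_na(crisis, 0)) %>%
  select(year, country, crisis, announcement, x1, x2) %>%
  arrange(country, year)
Unlike the fill()-based pipeline, this variant does not depend on row order, since each event is matched to its years explicitly via the join.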
Within each group (year), I want to repeat the value that belongs to the first category, "A".
For example, my data frame is:
data <- expand.grid(
  category = LETTERS[1:3],
  year = 2000:2005)
data$value <- runif(nrow(data))
I tried the following; however, it does not repeat the value three times:
test <- data %>% group_by(year) %>% mutate(value2 = value[category == "A"])
test
# A tibble: 18 x 4
# Groups: year [6]
category year value value2
<fct> <int> <dbl> <dbl>
1 A 2000 0.783 0.783
2 B 2000 0.351 0.467
3 C 2000 0.296 0.895
4 A 2001 0.467 0.102
5 B 2001 0.168 0.546
6 C 2001 0.459 0.447
7 A 2002 0.895 0.783
I need the following result:
1 A 2000 0.783 0.783
2 B 2000 0.351 0.783
3 C 2000 0.296 0.783
4 A 2001 0.467 0.467
5 B 2001 0.168 0.467
6 C 2001 0.459 0.467
Edit: After a comment suggesting that this might be caused by a package conflict, here is the list of packages I load beforehand:
# install packages if not installed already
list.of.packages <- c("stringr", "timeDate", "bizdays",
"lubridate", "readxl", "dplyr","plyr",
"rootSolve", "RODBC", "glue",
"ggplot2","gridExtra","bdscale", "gtools", "scales", "shiny", "leaflet", "data.table", "plotly")
new.packages <- list.of.packages[!(list.of.packages %in% installed.packages()[,"Package"])]
if(length(new.packages)) install.packages(new.packages)
#========== Libraries to be loaded ===============
lapply(list.of.packages, require, character.only = TRUE)
#------
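Note that the list above attaches both dplyr and plyr, with plyr loaded after dplyr, so plyr::mutate masks dplyr::mutate (and plyr's version ignores group_by()). A quick way to check which package a verb currently resolves to (an aside, not from the original post):
# should print <environment: namespace:plyr> when plyr is masking dplyr
environment(mutate)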
Here is one way to do it:
> data %>% group_by(year) %>%
+ mutate(value_tmp = if_else(category == "A", value, NA_real_),
+ value2 = mean(value_tmp, na.rm = TRUE))
# A tibble: 18 x 5
# Groups: year [6]
category year value value_tmp value2
<fct> <int> <dbl> <dbl> <dbl>
1 A 2000 0.01818495 0.01818495 0.01818495
2 B 2000 0.5649932 NA 0.01818495
3 C 2000 0.5483291 NA 0.01818495
4 A 2001 0.9175864 0.9175864 0.9175864
5 B 2001 0.2415837 NA 0.9175864
6 C 2001 0.2250608 NA 0.9175864
7 A 2002 0.6037224 0.6037224 0.6037224
8 B 2002 0.8712926 NA 0.6037224
9 C 2002 0.6293625 NA 0.6037224
10 A 2003 0.8126948 0.8126948 0.8126948
11 B 2003 0.7540445 NA 0.8126948
12 C 2003 0.02220114 NA 0.8126948
13 A 2004 0.3961279 0.3961279 0.3961279
14 B 2004 0.3638186 NA 0.3961279
15 C 2004 0.8682010 NA 0.3961279
16 A 2005 0.04196315 0.04196315 0.04196315
17 B 2005 0.4879482 NA 0.04196315
18 C 2005 0.8605212 NA 0.04196315
I obtained the desired result by slightly modifying Noobie's answer and using fill() from tidyr:
library(tidyr)
test <- data %>%
  group_by(year) %>%
  mutate(value_tmp = if_else(category == "A", value, NA_real_)) %>%
  fill(value_tmp)
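For what it's worth, the original one-liner also works once the masking is avoided, for example by calling the dplyr verbs explicitly (a sketch, assuming the problem really is plyr::mutate masking dplyr::mutate):
test <- data %>%
  dplyr::group_by(year) %>%
  dplyr::mutate(value2 = value[category == "A"])  # length-1 result is recycled within each group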
I've got a list with more than 5000 elements and I want to save them to a .csv file as a data frame with a specific layout.
library(XML)
url <- "http://www.omie.es/aplicaciones/datosftp/datosftp.jsp?path=/marginalpdbc/"
doc <- htmlParse(url)
links <- xpathSApply(doc, "//a/@href")
free(doc)
head(links)
wanted <- links[grepl("http*", links)]
head(wanted)
GetMe <- paste("", wanted, sep = "")
datos <- lapply(seq_along(GetMe),
                function(x) read.csv(GetMe[x], header = FALSE, sep = ";",
                                      as.is = TRUE, skip = 1))
This gives me 7 variables with 25 rows in each list element.
V1 V2 V3 V4 V5 V6 V7
1 1999 1 1 1 3.350 0.02030303 NA
2 1999 1 1 2 3.595 0.02178788 NA
3 1999 1 1 3 3.293 0.01995758 NA
4 1999 1 1 4 2.800 0.01696970 NA
5 1999 1 1 5 2.516 0.01524848 NA
6 1999 1 1 6 2.516 0.01524848 NA
7 1999 1 1 7 2.516 0.01524848 NA
8 1999 1 1 8 2.516 0.01524848 NA
9 1999 1 1 9 2.516 0.01524848 NA
10 1999 1 1 10 2.516 0.01524848 NA
11 1999 1 1 11 2.516 0.01524848 NA
12 1999 1 1 12 2.840 0.01721212 NA
13 1999 1 1 13 2.840 0.01721212 NA
14 1999 1 1 14 3.595 0.02178788 NA
15 1999 1 1 15 3.586 0.02173333 NA
16 1999 1 1 16 2.840 0.01721212 NA
17 1999 1 1 17 2.840 0.01721212 NA
18 1999 1 1 18 2.840 0.01721212 NA
19 1999 1 1 19 4.172 0.02528485 NA
20 1999 1 1 20 3.639 0.02205455 NA
21 1999 1 1 21 3.661 0.02218788 NA
22 1999 1 1 22 3.661 0.02218788 NA
23 1999 1 1 23 3.661 0.02218788 NA
24 1999 1 1 24 3.638 0.02204848 NA
25 * NA NA NA NA NA NA
I want to combine them all into a single data frame with the following layout:
FECHA AÑO MES DIASEM DIA H1 H2 H3 H4 H5 H6 H7 H8 H9 H10 H11 H12 H13 H14 H15
01/01/2003 2003 1 M 1 15 10.97 8.22 5.24 2.65 2.13 2.06 0.02 0 0 0.77 2.1 3.5 5.33 6.33
02/01/2003 2003 1 J 2 8.33 4.2 2.87 2.63 2.56 2.56 3.51 5.15 10 17.17 20 21.02 21.02 20 17.62
03/01/2003 2003 1 V 3 14.27 9.47 5.08 3.57 3.01 3.01 4.61 9.41 12.83 16.27 17.62 19.66 19.6 17.62 16.2
Here V1 is the year, V2 the month, V3 the day, V4 the hour, and V6 holds the values for each row.
In the final data frame each hour has to be one column.
Thanks for your help!
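One possible way to get there (a sketch, not from the original post, assuming the datos list built above, that every file ends with the "*" marker row as in the example, and dplyr/tidyr >= 1.0; the names omie_wide and the output file name are just illustrative):
library(dplyr)
library(tidyr)

# Stack the list, drop the trailing "*" marker row of each file, and spread
# the hourly values in V6 into one column per hour (H1..H24).
omie_wide <- bind_rows(datos) %>%
  filter(V1 != "*") %>%
  transmute(FECHA = as.Date(paste(V1, V2, V3, sep = "-")),
            ANO   = as.integer(V1),
            MES   = as.integer(V2),
            DIA   = as.integer(V3),
            hour  = as.integer(V4),
            value = V6) %>%
  pivot_wider(names_from = hour, values_from = value, names_prefix = "H")

write.csv(omie_wide, "marginalpdbc_wide.csv", row.names = FALSE)
The DIASEM (weekday) column in the target layout can then be derived from FECHA, e.g. with format(FECHA, "%a") (locale-dependent), and FECHA itself can be printed as dd/mm/yyyy with format(FECHA, "%d/%m/%Y").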
Using R, I am trying to calculate groupwise means with aggregate(..., mean). However, the returned mean is wrong.
testdata <-read.table(text="
a b c d year
2 10 1 NA 1998
1 7 NA NA 1998
4 6 NA NA 1998
2 2 NA NA 1998
4 3 2 1 1998
2 6 NA NA 1998
3 NA NA NA 1998
2 7 NA 3 1998
1 8 NA 4 1998
2 7 2 5 1998
1 NA NA 4 1998
2 5 NA 6 1998
2 4 NA NA 1998
3 11 2 7 1998
1 18 4 10 1998
3 12 7 5 1998
2 17 NA NA 1998
2 11 4 5 1998
1 3 1 1 1998
3 5 1 3 1998
",header=TRUE,sep="")
aggregate(. ~ year, testdata,
function(x) c(mean = round(mean(x, na.rm=TRUE), 2)))
colMeans(subset(testdata, year=="1998", select=d), na.rm=TRUE)
aggregate says the mean of d for group 1998 is 4.62, but it is 4.5.
Reducing the data to one column only, aggregate gets it right:
aggregate(. ~ year, testdata[4:5],
          function(x) c(mean = round(mean(x, na.rm = TRUE), 2)))
What's wrong with my aggregate() + mean() function?
aggregate() is removing the rows that contain NAs in any column before passing the data to the mean function. Try running your aggregate call without na.rm = TRUE; it will still work.
To fix this, you need to change the default na.action in aggregate to na.pass:
aggregate(. ~ year, testdata,
function(x) c(mean = round(mean(x, na.rm=TRUE), 2)), na.action = na.pass)
year a b c d
1 1998 2.15 7.89 2.67 4.5
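As a quick check of this explanation (not part of the original answer), both numbers can be reproduced directly:
mean(na.omit(testdata)$d)       # 4.625 -- mean over complete rows only, which is what aggregate's default uses
mean(testdata$d, na.rm = TRUE)  # 4.5   -- mean over all non-NA values of d, which is what you expected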
I have a data frame (panel data): the Ctry column indicates the country names in my data frame. If the number of NAs in any column (for example, Carx) is larger than 3, I want to drop the corresponding country from my data frame. For example,
Country A has 2 NAs
Country B has 4 NAs
Country C has 3 NAs
so I want to drop country B from my data frame. I have a data frame like this (this is for illustration; my actual data frame is very large):
Ctry year Carx
A 2000 23
A 2001 18
A 2002 20
A 2003 NA
A 2004 24
A 2005 18
B 2000 NA
B 2001 NA
B 2002 NA
B 2003 NA
B 2004 18
B 2005 16
C 2000 NA
C 2001 NA
C 2002 24
C 2003 21
C 2004 NA
C 2005 24
I want to create a data frame like this:
Ctry year Carx
A 2000 23
A 2001 18
A 2002 20
A 2003 NA
A 2004 24
A 2005 18
C 2000 NA
C 2001 NA
C 2002 24
C 2003 21
C 2004 NA
C 2005 24
A fairly straightforward way in base R is to use sum(is.na(.)) along with ave() to do the counting, like this:
with(mydf, ave(Carx, Ctry, FUN = function(x) sum(is.na(x))))
# [1] 1 1 1 1 1 1 4 4 4 4 4 4 3 3 3 3 3 3
Once you have that, subsetting is easy:
mydf[with(mydf, ave(Carx, Ctry, FUN = function(x) sum(is.na(x)))) <= 3, ]
# Ctry year Carx
# 1 A 2000 23
# 2 A 2001 18
# 3 A 2002 20
# 4 A 2003 NA
# 5 A 2004 24
# 6 A 2005 18
# 13 C 2000 NA
# 14 C 2001 NA
# 15 C 2002 24
# 16 C 2003 21
# 17 C 2004 NA
# 18 C 2005 24
You can use the by() function to group by Ctry and count the NAs in each group:
DF <- read.csv(
text='Ctry,year,Carx
A,2000,23
A,2001,18
A,2002,20
A,2003,NA
A,2004,24
A,2005,18
B,2000,NA
B,2001,NA
B,2002,NA
B,2003,NA
B,2004,18
B,2005,16
C,2000,NA
C,2001,NA
C,2002,24
C,2003,21
C,2004,NA
C,2005,24',
stringsAsFactors=F)
res <- by(data = DF$Carx, INDICES = DF$Ctry, FUN = function(x) sum(is.na(x)))
validCtry <- names(res)[res <= 3]
DF[DF$Ctry %in% validCtry, ]
# Ctry year Carx
#1 A 2000 23
#2 A 2001 18
#3 A 2002 20
#4 A 2003 NA
#5 A 2004 24
#6 A 2005 18
#13 C 2000 NA
#14 C 2001 NA
#15 C 2002 24
#16 C 2003 21
#17 C 2004 NA
#18 C 2005 24
EDIT:
If you have more columns to check, you could adapt the previous code as follows:
res <- by(data=DF,INDICES=DF$Ctry,
FUN=function(x){
return(sum(is.na(x$Carx)) <= 3 &&
sum(is.na(x$Barx)) <= 3 &&
sum(is.na(x$Tarx)) <= 3)
})
validCtry <- names(res)[res]
DF[DF$Ctry %in% validCtry, ]
where, of course, you may change the condition in FUN according to your needs.
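If there are many columns to check, a more general variant of the same idea is possible (a sketch; it checks every column other than Ctry automatically and keeps the same by()/subset pattern as above):
# keep a country only if no column (other than Ctry) has more than 3 NAs within it
res <- by(data = DF, INDICES = DF$Ctry,
          FUN = function(x) all(colSums(is.na(x[, setdiff(names(x), "Ctry")])) <= 3))
validCtry <- names(res)[res]
DF[DF$Ctry %in% validCtry, ]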
Since you mention that your data is "very huge" (whatever that means exactly), you could try a solution with dplyr and see if it's perhaps faster than the solutions in base R. If the other solutions are fast enough, just ignore this one.
require(dplyr)
newdf <- DF %>% group_by(Ctry) %>% filter(sum(is.na(Carx)) <= 3)