Monthly Correlation for 19 variables - r

I have the following dataset with 21 columns: 19 variables plus Month and Date as date-type columns.
The aim is to analyze how the correlations between variables change over time, by computing each correlation from the daily values within a month, i.e. one "monthly correlation" per month, plotted over time (X-axis as month type). For example:
+------------+---------+-----+-----+--------+---------+-------------+
| Date | Month | AOV | ASP | Clicks | Traffic | Impressions |
+------------+---------+-----+-----+--------+---------+-------------+
| 2017-01-01 | 2017-01 | 50 | 6 | 700 | 10000 | 4500 |
+------------+---------+-----+-----+--------+---------+-------------+
| 2017-01-02 | 2017-01 | 55 | 7 | 800 | 20000 | 4600 |
+------------+---------+-----+-----+--------+---------+-------------+
| 2017-02-01 | 2017-02 | 58 | 8 | 700 | 4599 | 2300 |
+------------+---------+-----+-----+--------+---------+-------------+
At the moment I have the following code, but I can only compare two variables at a time:
ddply(corr,"Month",summarise,corr=cor(AOV,ASP))
I get the table below
+---------+------------+
| Month | corr |
+---------+------------+
| 2017-1 | 0.4958738 |
+---------+------------+
| 2017-10 | 0.8527522 |
+---------+------------+
| 2017-11 | -0.2751771 |
+---------+------------+
| 2017-12 | NA |
+---------+------------+
| 2017-2 | 0.6596346 |
+---------+------------+
| 2017-3 | 0.6399969 |
+---------+------------+
| 2017-4 | 0.7926245 |
+---------+------------+
| 2017-5 | 0.6429613 |
+---------+------------+
| 2017-6 | 0.3824414 |
+---------+------------+
| 2017-7 | 0.9154873 |
+---------+------------+
| 2017-8 | 0.7235767 |
+---------+------------+
| 2017-9 | 0.8264006 |
+---------+------------+
I have been using combn to create the set of combinations, but I'm not quite sure how to use it with ddply. I get 171 pairwise combinations.
combn(corr,2,simplify = F)

You can just do:
cor(your_data_frame)
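cor() accepts a whole data frame (or matrix) of numeric columns and returns the complete pairwise correlation matrix in one call, so there is no need to loop over the combn() pairs. To get one set of correlations per month, a minimal sketch in the same plyr style as your code (assuming the data frame is called corr, as in your code, and that every column except Date and Month is numeric):

library(plyr)

# All columns except the two date columns
vars <- setdiff(names(corr), c("Date", "Month"))

# cor() on the whole sub-frame gives the full 19 x 19 matrix for each month;
# flatten it into one row per variable pair
monthly_long <- ddply(corr, "Month", function(d) {
  m <- cor(d[, vars], use = "pairwise.complete.obs")
  data.frame(var1 = rownames(m)[row(m)],
             var2 = colnames(m)[col(m)],
             corr = as.vector(m))
})

# Your AOV-vs-ASP table is then just one slice:
subset(monthly_long, var1 == "AOV" & var2 == "ASP")

use = "pairwise.complete.obs" is optional; it helps if the NA you see for 2017-12 comes from missing values (it will not help if a variable is constant within that month).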

Related

R - Join two dataframes based on date difference

Let's consider two dataframes df1 and df2. I would like to join the dataframes based on the date difference only. For example:
Dataframe 1: (df1)
| version_id | date_invoiced | product_id |
-------------------------------------------
| 1 | 03-07-2020 | 201 |
| 1 | 02-07-2020 | 2013 |
| 3 | 02-07-2020 | 2011 |
| 6 | 01-07-2020 | 2018 |
| 7 | 01-07-2020 | 201 |
Dataframe 2: (df2)
| validfrom | pricelist| pricelist_id |
------------------------------------------
|02-07-2020 | 10 | 101 |
|01-07-2020 | 20 | 102 |
|29-06-2020 | 30 | 103 |
|28-07-2020 | 10 | 104 |
|25-07-2020 | 5 | 105 |
I need to map the pricelist_id and the pricelist based on the validfrom column present in df2: each row should be mapped based on the least difference between date_invoiced (df1) and validfrom (df2).
Expected Outcome:
| version_id | date_invoiced | product_id | date_diff | pricelist_id | pricelist |
----------------------------------------------------------------------------------
| 1 | 03-07-2020 | 201 | 1 | 101 | 10 |
| 1 | 02-07-2020 | 2013 | 1 | 102 | 20 |
| 3 | 02-07-2020 | 2011 | 1 | 102 | 20 |
| 6 | 01-07-2020 | 2018 | 1 | 103 | 30 |
| 7 | 01-07-2020 | 201 | 1 | 103 | 30 |
I need to map purely based on the difference, and the difference should be the least: each date_invoiced (df1) should always be matched to the closest validfrom (df2). Thanks
Perhaps you might want to try a data.table rolling join with roll = 'nearest'. Here, the join is made on DATE, which is date_invoiced from df1 and validfrom from df2.
library(data.table)
setDT(df1)
setDT(df2)

# Parse the dd-mm-YYYY strings as Date (%Y, since the years have four digits)
df1[, date_invoiced := as.Date(date_invoiced, format = "%d-%m-%Y")]
df2[, validfrom := as.Date(validfrom, format = "%d-%m-%Y")]

# Common join column on both tables
df1[, DATE := date_invoiced]
df2[, DATE := validfrom]

# Each row of df1 gets the df2 row with the nearest DATE
df2[df1, on = "DATE", roll = "nearest"]
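The expected output also has a date_diff column; under the same assumptions it can be added to the join result afterwards:

res <- df2[df1, on = "DATE", roll = "nearest"]
res[, date_diff := as.integer(abs(date_invoiced - validfrom))]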

How to add a column from one dataframe to another dataframe when two other columns match [duplicate]

This question already has answers here:
How to join (merge) data frames (inner, outer, left, right)
(13 answers)
Closed 3 years ago.
I have two datasets, db1 and db2, like the following ones:
db1
+---------+-------+-------+------+------+-----------------+
| Authors| IDs | Title | Year | ISSN | Other columns...|
+---------+-------+-------+------+------+-----------------+
| Abad J.| 16400 | 1 | 2014 |14589 | |
| Ares K.| 70058 | 2 | 2012 |15874 | |
| Anto E.| 71030 | 3 | 2011 |16999 | |
| A Banul| 57196 | 1 | 2011 |21546 | |
| A Berat| 56372 | 2 | 2011 |12554 | |
+---------+-------+-------+------+------+-----------------+
and
db2
+---------+-------+-------+------+------+-------+---------------------------+
| Authors| IDs | Title | Year | ISSN | IF | Other different columns...|
+---------+-------+-------+------+------+-------+---------------------------+
| Abad J.| 16400 | 1 | 2013 |14589 | 2,3 | |
| Ares K.| 70058 | 2 | 2012 |15874 | 3,3 | |
| Anto E.| 71030 | 3 | 2011 |14587 | 1,2 | |
| A Banul| 57196 | 1 | 2011 |21546 | 7,8 | |
| A Berat| 56372 | 2 | 2011 |75846 | 4,5 | |
+---------+-------+-------+------+------+-------+---------------------------+
Basically, what I want is to add to db1 the column IF from db2 when the two columns Year and ISSN have the same values. So what I want to achieve is the following output in my example:
db1
+---------+-------+-------+------+------+-------+----------------+
| Authors| IDs | Title | Year | ISSN | IF |Other columns...|
+---------+-------+-------+------+------+-------+----------------+
| Abad J.| 16400 | 1 | 2014 |14589 | NA | |
| Ares K.| 70058 | 2 | 2012 |15874 | 3,3 | |
| Anto E.| 71030 | 3 | 2011 |16999 | NA | |
| A Banul| 57196 | 1 | 2011 |21546 | 7,8 | |
| A Berat| 56372 | 2 | 2011 |12554 | NA | |
+---------+-------+-------+------+------+-------+----------------+
I have tried merge but, since I also have different columns, I obtain a very big dataset.
What I want is to use the function match but with more than one condition applied at the same time.
Any guesses?
dplyr::left_join(db1, db2 %>% dplyr::select(Year, ISSN, IF))
This should work providing the two dataframes have no other columns in common besides the ones you've shown here.
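To make the join keys explicit, rather than relying on the natural join over every shared column, you can pass by directly; a minimal sketch assuming the column names shown above:

library(dplyr)

db1_with_if <- left_join(db1,
                         db2 %>% select(Year, ISSN, IF),
                         by = c("Year", "ISSN"))

Rows of db1 with no Year/ISSN match in db2 get IF = NA, which is exactly the behaviour in your expected output.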

Convert decimal date to year and week number [duplicate]

This question already has answers here:
Converting date in Year.decimal form in R
(2 answers)
Closed 3 years ago.
I am running an ARIMA model with the forecast library; the output of this model looks something like this:
+----------+----------------+------------+----------+-----------+----------+
| | Point Forecast | Lo 80 | Hi 80 | Lo 95 | Hi 95 |
+----------+----------------+------------+----------+-----------+----------+
| 2016.261 | 335.0697 | 267.368566 | 402.7707 | 231.52977 | 438.6095 |
| 2016.281 | 346.7667 | 234.935713 | 458.5978 | 175.73594 | 517.7975 |
| 2016.300 | 296.3013 | 174.495528 | 418.1070 | 110.01547 | 482.5870 |
| 2016.319 | 379.0095 | 255.265230 | 502.7537 | 189.75899 | 568.2600 |
+----------+----------------+------------+----------+-----------+----------+
What I would like to achieve is to convert the decimal date (for example 2016.261) by adding two columns, one representing the year and the other the week number, achieving something like this:
+----------+---------+------+----------------+------------+----------+-----------+----------+
| | year | week | Point Forecast | Lo 80 | Hi 80 | Lo 95 | Hi 95 |
+----------+---------+------+----------------+------------+----------+-----------+----------+
| 2016.261 | 20.. | n1 | 335.0697 | 267.368566 | 402.7707 | 231.52977 | 438.6095 |
| 2016.281 | 20.. | n1 | 346.7667 | 234.935713 | 458.5978 | 175.73594 | 517.7975 |
| 2016.300 | 20.. | n3 | 296.3013 | 174.495528 | 418.1070 | 110.01547 | 482.5870 |
| 2016.319 | 20.. | n4 | 379.0095 | 255.265230 | 502.7537 | 189.75899 | 568.2600 |
+----------+---------+------+----------------+------------+----------+-----------+----------+
Well, with a data frame like this, for example:
df1 <- data.frame(x = c(2016.001, 2016.032, 2016.261, 2016.281, 2016.300, 2016.319))
# sprintf() keeps all three decimals; as.character(2016.300) would drop the
# trailing zeros, giving "2016.3", so day 300 would be mis-read as day 3
df1$date <- as.Date(sprintf("%.3f", df1$x), format = "%Y.%j")
df1$year <- format(df1$date, "%Y")
df1$week <- format(df1$date, "%W")
df1
#          x       date year week
# 1 2016.001 2016-01-01 2016   00
# 2 2016.032 2016-02-01 2016   05
# 3 2016.261 2016-09-17 2016   37
# 4 2016.281 2016-10-07 2016   40
# 5 2016.300 2016-10-26 2016   43
# 6 2016.319 2016-11-14 2016   46
NB: I added the first two dates (day 001 and day 032 of 2016) just to check that the conversion is correct. And instead of df1 you can use your dataframe. All information is actually from here.
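Note that the time index that forecast prints is usually a fraction of the year (with weekly data, consecutive points differ by 1/52 ≈ 0.019, which matches the spacing of 2016.261, 2016.281, ... above) rather than year.dayofyear. If that is what your values mean, lubridate's date_decimal() converts them directly; a sketch under that assumption:

library(lubridate)

x <- c(2016.261, 2016.281, 2016.300, 2016.319)
d <- as.Date(date_decimal(x))  # fractional year -> calendar date
data.frame(x = x, year = year(d), week = isoweek(d))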

change column value only if two other columns are duplicates

I am having a hard time figuring this out in R.
This is what I would like to do: in a data frame like the one below, if Name and Class are duplicated, add the two rows' scores together; if not, leave them as they are.
+------------------+-----------+-------+
| Name | Class | Score |
+------------------+-----------+-------+
| Sara | Sophomore | 10 |
| John | Freshman | 20 |
| Taylor | Sophomore | 30 |
| Tyler | Junior | 10 |
| Keith | Junior | 20 |
| Andrew | Senior | 30 |
| Victor | Senior | 10 |
| Nancy |Sophomore | 20 |
| Taylor | Junior | 30 |
| John | Senior | 10 |
| Victor | Freshman | 20 |
| Sara | Sophomore | 30 |
| John | Freshman | 10 |
| Taylor | Sophomore | 20 |
| John | Senior | 30 |
+------------------+-----------+-------+
So basically, the end result should look like:
+--------+-----------+-------+
| Name   | Class     | Score |
+--------+-----------+-------+
| Sara   | Sophomore | 40    |
| John   | Freshman  | 30    |
| Taylor | Sophomore | 50    |
| Tyler  | Junior    | 10    |
| Keith  | Junior    | 20    |
| Andrew | Senior    | 30    |
| Victor | Senior    | 10    |
| Nancy  | Sophomore | 20    |
| Taylor | Junior    | 30    |
| John   | Senior    | 40    |
| Victor | Freshman  | 20    |
+--------+-----------+-------+
As you can see, if Name is the only duplicated value, nothing changes (see John Freshman and John Senior), and if Class is the only duplicated value, nothing changes either. Both columns in a row have to be duplicated to trigger the change.
My attempt is below, but it is not working and I am getting the error message
'Error in if ((experiment[i, 1] == experiment[j, 1]) & (experiment[i, 2] == : missing value where TRUE/FALSE needed'
My code:
# creating an empty data frame
experiment1 <- data.frame(matrix(ncol = 3, nrow = 15))
for (i in 1:nrow(experiment)) {
  for (j in i+1: nrow(experiment)) {
    if ((experiment[i, 1] == experiment[j, 1]) & (experiment[i, 2] == experiment[j, 2])) {
      experiment1[i, 1] <- experiment[i, 1]
      experiment1[i, 2] <- experiment[i, 2]
      experiment1[i, 3] <- experiment[i, 3] + experiment[j, 3]
    } else {
      experiment1[i, 1] <- experiment[i, 1]
      experiment1[i, 2] <- experiment[i, 2]
      experiment1[i, 3] <- experiment[i, 3]
    }
  }
}
Could anyone help fix my code or come up with "nobler" code?
Aggregation is about the first thing explained in any basic R tutorial; I suggest you go and follow some.
base R
aggregate(Score ~ Name + Class, data = mydf, FUN = sum)
dplyr
mydf %>% group_by(Name, Class) %>% summarize(scoreSum = sum(Score))
data.table
setDT(mydf)[, .(scoreSum = sum(Score)), by = .(Name, Class)]
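For reference, a quick check on the example data; a sketch assuming mydf holds the fifteen rows from the question:

mydf <- data.frame(
  Name  = c("Sara", "John", "Taylor", "Tyler", "Keith", "Andrew", "Victor",
            "Nancy", "Taylor", "John", "Victor", "Sara", "John", "Taylor", "John"),
  Class = c("Sophomore", "Freshman", "Sophomore", "Junior", "Junior", "Senior",
            "Senior", "Sophomore", "Junior", "Senior", "Freshman", "Sophomore",
            "Freshman", "Sophomore", "Senior"),
  Score = c(10, 20, 30, 10, 20, 30, 10, 20, 30, 10, 20, 30, 10, 20, 30)
)
aggregate(Score ~ Name + Class, data = mydf, FUN = sum)
# Sara/Sophomore sums to 40 and John/Freshman to 30, matching the expected output
# (the rows come back in a different order than the question's table)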

Hmisc Table Creation

Just starting out with R and trying to figure out what works for my needs when it comes to creating "summary tables." I am used to Custom Tables in SPSS, and the CrossTable function in the package gmodels gets me almost where I need to be; not to mention it is easy to navigate for someone just starting out in R.
That said, it seems like the Hmisc tables are very good at creating various summaries and exporting to LaTeX (ultimately what I need to do).
My questions are: 1) can you create the table below easily with Hmisc? 2) If so, can I interact variables (2 in the column)? And finally, 3) can I access the p-values of significance tests (chi-square)?
Thanks in advance,
Brock
Cell Contents
|-------------------------|
| Count |
| Row Percent |
| Column Percent |
|-------------------------|
Total Observations in Table: 524
| asq[, 23]
asq[, 4] | 1 | 2 | 3 | 4 | 5 | Row Total |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
0 | 76 | 54 | 93 | 46 | 54 | 323 |
| 23.529% | 16.718% | 28.793% | 14.241% | 16.718% | 61.641% |
| 54.286% | 56.250% | 63.265% | 63.889% | 78.261% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
1 | 64 | 42 | 54 | 26 | 15 | 201 |
| 31.841% | 20.896% | 26.866% | 12.935% | 7.463% | 38.359% |
| 45.714% | 43.750% | 36.735% | 36.111% | 21.739% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
Column Total | 140 | 96 | 147 | 72 | 69 | 524 |
| 26.718% | 18.321% | 28.053% | 13.740% | 13.168% | |
-------------|-----------|-----------|-----------|-----------|-----------|-----------|
The gmodels package has a function called CrossTable, which is very nice for those used to SPSS and SAS output. Try this example:
library(gmodels) # run install.packages("gmodels") if you haven't installed the package yet
x <- sample(c("up", "down"), 100, replace = TRUE)
y <- sample(c("left", "right"), 100, replace = TRUE)
CrossTable(x, y, format = "SPSS")
This should provide you with an output just like the one you displayed on your question, very SPSS-y. :)
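CrossTable can also run the chi-square test for you via its chisq argument, and base R's chisq.test() exposes the p-value programmatically; a short sketch reusing x and y from above:

CrossTable(x, y, format = "SPSS", chisq = TRUE)  # prints the test below the table
chisq.test(table(x, y))$p.value                  # p-value as a plain number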
If you are coming from SPSS, you may be interested in the package Deducer ( http://www.deducer.org ). It has a contingency table function:
> library(Deducer)
> data(tips)
> tables<-contingency.tables(
+ row.vars=d(smoker),
+ col.vars=d(day),data=tips)
> tables<-add.chi.squared(tables)
> print(tables,prop.r=T,prop.c=T,prop.t=F)
================================================================================================================
==================================================================================
========== Table: smoker by day ==========
| day
smoker | Fri | Sat | Sun | Thur | Row Total |
-----------------------|-----------|-----------|-----------|-----------|-----------|
No Count | 4 | 45 | 57 | 45 | 151 |
Row % | 2.649% | 29.801% | 37.748% | 29.801% | 61.885% |
Column % | 21.053% | 51.724% | 75.000% | 72.581% | |
-----------------------|-----------|-----------|-----------|-----------|-----------|
Yes Count | 15 | 42 | 19 | 17 | 93 |
Row % | 16.129% | 45.161% | 20.430% | 18.280% | 38.115% |
Column % | 78.947% | 48.276% | 25.000% | 27.419% | |
-----------------------|-----------|-----------|-----------|-----------|-----------|
Column Total | 19 | 87 | 76 | 62 | 244 |
Column % | 7.787% | 35.656% | 31.148% | 25.410% | |
Large Sample
Test Statistic DF p-value | Effect Size est. Lower (%) Upper (%)
Chi Squared 25.787 3 <0.001 | Cramer's V 0.325 0.183 (2.5) 0.44 (97.5)
-----------
================================================================================================================
You can get the counts and test to latex or html using the xtable package:
> library(xtable)
> xtable(drop(extract.counts(tables)[[1]]))
> test <- contin.tests.to.table((tables[[1]]$tests))
> xtable(test)
