I have a data.frame with three columns: Year, Nominal_Revenue, and COEFFICIENT. I want to forecast with this data, as in the example below:
library(dplyr)
TEST <- data.frame(
  Year = c(2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
           2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021),
  Nominal_Revenue = c(8634, 5798, 6011, 6002, 6166, 6478, 6731, 7114, 6956,
                      6968, 7098, 7610, 7642, 8203, 9856, 10328, 11364, 12211,
                      13150, NA, NA, NA),
  COEFFICIENT = c(NA, 1.016, 1.026, 1.042, 1.049, 1.106, 1.092, 1.123, 1.121,
                  0.999, 1.059, 1.066, 1.006, 1.081, 1.055, 1.063, 1.071,
                  1.04, 1.072, 1.062, 1.07, 1.075))
SIMULATION <- mutate(TEST,
                     FORECAST = lag(Nominal_Revenue) * COEFFICIENT)
This code calculates the forecast for only one year, 2019. My intention is to get results for every NA in the Nominal_Revenue column, i.e. for 2019, 2020, and 2021. Can anybody help me fix this code?
Because each step needs the previously computed value, you can loop as many times as there are NAs in your variable, applying the dplyr mutate() on each pass:
for (i in seq_len(sum(is.na(TEST$Nominal_Revenue)))) {
  TEST <- TEST %>%
    mutate(Nominal_Revenue = if_else(is.na(Nominal_Revenue),
                                     COEFFICIENT * lag(Nominal_Revenue),
                                     Nominal_Revenue))
}
> TEST
Year Nominal_Revenue COEFFICIENT
1 2000 8634.00 NA
2 2001 5798.00 1.016
3 2002 6011.00 1.026
4 2003 6002.00 1.042
5 2004 6166.00 1.049
6 2005 6478.00 1.106
7 2006 6731.00 1.092
8 2007 7114.00 1.123
9 2008 6956.00 1.121
10 2009 6968.00 0.999
11 2010 7098.00 1.059
12 2011 7610.00 1.066
13 2012 7642.00 1.006
14 2013 8203.00 1.081
15 2014 9856.00 1.055
16 2015 10328.00 1.063
17 2016 11364.00 1.071
18 2017 12211.00 1.040
19 2018 13150.00 1.072
20 2019 13965.30 1.062
21 2020 14942.87 1.070
22 2021 16063.59 1.075
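Since the NAs here form a single run at the end of the series, a loop-free alternative is to carry the last observed value forward with the cumulative product of the remaining coefficients (a minimal sketch against the same TEST data, assuming that trailing-NA layout):
# Assumes the NAs are one contiguous run at the end of Nominal_Revenue
na_idx <- which(is.na(TEST$Nominal_Revenue))
last_obs <- TEST$Nominal_Revenue[min(na_idx) - 1]
TEST$Nominal_Revenue[na_idx] <- last_obs * cumprod(TEST$COEFFICIENT[na_idx])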
I am running a GAMM using the mgcv package. The model runs fine and gives output that makes sense, but when I use vis.gam(plot.type = "persp"), part of the surface is rendered transparent.
Why is this happening? When I use vis.gam(plot.type = "contour") there is no transparent area.
It appears not to be simply a problem with the heat colour palette; the same thing happens when I change the colour scheme of the "persp" plot, e.g. to "topo". The contour plot is completely filled while the persp plot is still transparent at the top.
Data:
logcpue assnage distkm fsamplingyr
1 -1.5218399 7 3.490 2015
2 -1.6863990 4 3.490 2012
3 -1.4534337 6 3.490 2014
4 -1.5207723 5 3.490 2013
5 -2.4061258 2 3.490 2010
6 -2.5427262 3 3.490 2011
7 -1.6177367 3 3.313 1998
8 -4.4067192 10 3.313 2005
9 -4.3438054 11 3.313 2006
10 -2.8834031 7 3.313 2002
11 -2.3182512 2 3.313 1997
12 -4.1108738 1 3.235 2010
13 -2.0149030 3 3.235 2012
14 -1.4900912 6 3.235 2015
15 -3.7954892 2 3.235 2011
16 -1.6499840 4 3.235 2013
17 -1.9924302 5 3.235 2014
18 -1.2122716 4 3.189 1998
19 -0.6675703 3 3.189 1997
20 -4.7957905 7 3.106 1998
21 -3.8763958 6 3.106 1997
22 -1.2205021 4 3.073 2010
23 -1.9262374 7 3.073 2013
24 -3.3463891 9 3.073 2015
25 -1.7805862 2 3.073 2008
26 -3.2451931 8 3.073 2014
27 -1.4441139 5 3.073 2011
28 -1.4395389 6 3.073 2012
29 -1.6357552 4 2.876 2014
30 -1.3449091 5 2.876 2015
31 -2.3782225 3 2.876 2013
32 -4.4886364 1 2.876 2011
33 -2.6026897 2 2.876 2012
34 -3.5765503 1 2.147 2002
35 -4.8040211 9 2.147 2010
36 -1.3993664 5 2.147 2006
37 -1.2712250 4 2.147 2005
38 -1.8495790 7 2.147 2008
39 -2.5073795 1 2.034 2012
40 -2.0654553 4 2.034 2015
41 -3.6309855 2 2.034 2013
42 -2.2643639 3 2.034 2014
43 -2.2643639 6 1.452 2006
44 -3.3900241 8 1.452 2008
45 -4.9628446 2 1.452 2002
46 -2.0088240 5 1.452 2005
47 -3.9186675 1 1.323 2013
48 -4.3438054 2 1.323 2014
49 -3.5695327 3 1.323 2015
50 -1.6986690 7 1.200 2005
51 -3.2451931 8 1.200 2006
52 -0.9024016 4 1.200 2002
library(mgcv)
f1 <- formula(logcpue ~ s(assnage) + distkm)
m1 <- gamm(f1, random = list(fsamplingyr = ~ 1),
           method = "REML",
           data = ycsnew)
vis.gam(m1$gam, color = "topo", plot.type = "persp", theta = 180)
vis.gam(m1$gam, color = "heat", plot.type = "persp", theta = 180)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "heat", las = 1)
vis.gam(m1$gam, view = c("assnage", "distkm"),
        plot.type = "contour", color = "terrain", las = 1, contour.col = "black")
The code of vis.gam contains this line:
surf.col[surf.col > max.z * 2] <- NA
I am unable to understand what it is doing, and it appears to be rather ad hoc. NA values of colors are generally rendered transparent. If you copy vis.gam into a new function, comment out that line, and assign the environment of the new function:
environment(vis.gam2) <- environment(vis.gam)
... you get complete coloring of the surface.
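A minimal sketch of that workaround (vis.gam2 is just a name for the edited copy, not an mgcv function):
library(mgcv)
vis.gam2 <- vis.gam                            # make an editable copy
# fix(vis.gam2)                                # then comment out the line:
#   surf.col[surf.col > max.z * 2] <- NA
environment(vis.gam2) <- environment(vis.gam)  # so it can find mgcv internals
vis.gam2(m1$gam, color = "heat", plot.type = "persp", theta = 180)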
My friend is currently working on an assignment about estimating the parameters of a time series model, SARIMAX (Seasonal ARIMA with Exogenous variables), using the Maximum Likelihood Estimation (MLE) method. His data are monthly rainfall from 2000 to 2012, with the Indian Ocean Dipole (IOD) index as the exogenous variable.
Here is the data:
MONTH YEAR RAINFALL IOD
1 1 2000 15.3720526 0.0624
2 2 2000 10.3440804 0.1784
3 3 2000 14.6116392 0.3135
4 4 2000 18.6842179 0.3495
5 5 2000 15.2937896 0.3374
6 6 2000 15.0233152 0.1946
7 7 2000 11.1803399 0.3948
8 8 2000 11.0589330 0.4391
9 9 2000 10.1488916 0.3020
10 10 2000 21.1187121 0.2373
11 11 2000 15.3980518 -0.0324
12 12 2000 18.9393770 -0.0148
13 1 2001 19.1075901 -0.2448
14 2 2001 14.9097284 0.1673
15 3 2001 19.2379833 0.1538
16 4 2001 19.6900990 0.3387
17 5 2001 8.0684571 0.3578
18 6 2001 14.0463518 0.3394
19 7 2001 5.9916609 0.1754
20 8 2001 8.4439327 0.0048
21 9 2001 11.8321596 0.1648
22 10 2001 24.3700636 -0.0653
23 11 2001 22.3584436 0.0291
24 12 2001 23.6114379 0.1731
25 1 2002 17.8409641 0.0404
26 2 2002 14.7377067 0.0914
27 3 2002 21.2226294 0.1766
28 4 2002 16.6403125 -0.1512
29 5 2002 10.8074049 -0.1072
30 6 2002 6.3796552 0.0244
31 7 2002 17.0704423 0.0542
32 8 2002 1.7606817 0.0898
33 9 2002 5.3665631 0.6736
34 10 2002 8.3246622 0.7780
35 11 2002 17.8044938 0.3616
36 12 2002 16.7062862 0.0673
37 1 2003 13.5572859 -0.0628
38 2 2003 17.1113997 0.2038
39 3 2003 14.9899967 0.1239
40 4 2003 14.0996454 0.0997
41 5 2003 11.4017542 0.0581
42 6 2003 6.7749539 0.3490
43 7 2003 7.1484264 0.4410
44 8 2003 10.3004854 0.4063
45 9 2003 10.6630202 0.3289
46 10 2003 20.6518764 0.1394
47 11 2003 20.8638443 0.1077
48 12 2003 20.5548048 0.4093
49 1 2004 16.0436903 0.2257
50 2 2004 17.2568827 0.2978
51 3 2004 20.2361063 0.2523
52 4 2004 11.6619038 0.1212
53 5 2004 12.8296532 -0.3395
54 6 2004 8.4202138 -0.1764
55 7 2004 15.5916644 0.0118
56 8 2004 0.9486833 0.1651
57 9 2004 7.2732386 0.2825
58 10 2004 18.0083314 0.3747
59 11 2004 14.4672043 0.1074
60 12 2004 17.3637554 0.0926
61 1 2005 18.9420168 0.0551
62 2 2005 17.0146995 -0.3716
63 3 2005 23.3002146 -0.2641
64 4 2005 17.8689675 0.2829
65 5 2005 17.2365890 0.1883
66 6 2005 14.0178458 0.0347
67 7 2005 12.6925175 -0.0680
68 8 2005 9.3861600 -0.0420
69 9 2005 11.7132404 -0.1425
70 10 2005 18.5768673 -0.0514
71 11 2005 19.6723156 -0.0008
72 12 2005 18.3248465 -0.0659
73 1 2006 18.6252517 0.0560
74 2 2006 18.7002674 -0.1151
75 3 2006 23.4882950 -0.0562
76 4 2006 19.5652754 0.1862
77 5 2006 13.6857590 0.0105
78 6 2006 11.1265448 0.1504
79 7 2006 11.0227038 0.3490
80 8 2006 7.6550637 0.5267
81 9 2006 1.8708287 0.8089
82 10 2006 5.4129474 0.9479
83 11 2006 15.2249795 0.7625
84 12 2006 14.1703917 0.3941
85 1 2007 22.8691932 0.4027
86 2 2007 14.3317829 0.3353
87 3 2007 13.0766968 0.2792
88 4 2007 23.2335964 0.2960
89 5 2007 12.2474487 0.4899
90 6 2007 11.3357840 0.2445
91 7 2007 9.3112835 0.3629
92 8 2007 1.6431677 0.5396
93 9 2007 6.8483575 0.6252
94 10 2007 13.1529464 0.4540
95 11 2007 14.5120639 0.2489
96 12 2007 18.7909553 0.0054
97 1 2008 17.6493626 0.3037
98 2 2008 13.3828248 0.1166
99 3 2008 19.0525589 0.2730
100 4 2008 17.3262806 0.0467
101 5 2008 5.2345009 0.4020
102 6 2008 3.3166248 0.4263
103 7 2008 10.1094016 0.5558
104 8 2008 11.7260394 0.4236
105 9 2008 10.7470926 0.4762
106 10 2008 15.1591557 0.4127
107 11 2008 25.5558213 0.1474
108 12 2008 18.2455474 0.1755
109 1 2009 14.5430396 0.2185
110 2 2009 12.8569048 0.3521
111 3 2009 24.0707291 0.2680
112 4 2009 16.0374562 0.3234
113 5 2009 7.2387844 0.4757
114 6 2009 13.8021737 0.3078
115 7 2009 7.5232972 0.1179
116 8 2009 6.3403470 0.1999
117 9 2009 4.6583259 0.2814
118 10 2009 13.0958008 0.3646
119 11 2009 15.3329710 0.1914
120 12 2009 19.0394328 0.3836
121 1 2010 15.5080624 0.4732
122 2 2010 17.1551742 0.2134
123 3 2010 23.9729014 0.6320
124 4 2010 18.2537667 0.5644
125 5 2010 18.2236111 0.1881
126 6 2010 14.6082169 0.0680
127 7 2010 13.6161669 0.3111
128 8 2010 11.1220502 0.2472
129 9 2010 20.7870152 0.1259
130 10 2010 19.5371441 -0.0529
131 11 2010 24.8837296 -0.2133
132 12 2010 15.5016128 0.0233
133 1 2011 17.3435867 0.3739
134 2 2011 17.6096564 0.4228
135 3 2011 19.0682983 0.5413
136 4 2011 20.4890214 0.3569
137 5 2011 12.0540450 0.1313
138 6 2011 12.5896783 0.2642
139 7 2011 5.0990195 0.5356
140 8 2011 6.5726707 0.6490
141 9 2011 2.5099801 0.5884
142 10 2011 17.6380271 0.7376
143 11 2011 17.5128524 0.6004
144 12 2011 17.2655727 0.0990
145 1 2012 16.6883193 0.2272
146 2 2012 20.8374663 0.1049
147 3 2012 16.7002994 0.1991
148 4 2012 18.7962762 -0.0596
149 5 2012 16.9292646 -0.1165
150 6 2012 11.6490343 0.2207
151 7 2012 6.2529993 0.8586
152 8 2012 5.8991525 0.9473
153 9 2012 7.8485667 0.8419
154 10 2012 12.5817328 0.4928
155 11 2012 24.7770055 0.1684
156 12 2012 23.2486559 0.4899
He works in R because it has a package for analysing SARIMAX models, and so far he has been doing well with the arimax() function from the TSA package, using seasonal ARIMA order (1,0,1).
Here is his syntax:
# Importing data
data <- read.csv("C:/DATA.csv", header = TRUE)
rainfall <- data$RAINFALL
exo <- data$IOD
# Converting the data into time series objects with ts()
library(forecast)
rainfall_ts <- ts(rainfall, start = c(2000, 1), end = c(2012, 12), frequency = 12)
exo_ts <- ts(exo, start = c(2000, 1), end = c(2012, 12), frequency = 12)
# Fitting the SARIMAX model with seasonal ARIMA order (1,0,1), estimated by ML
library(TSA)
model_ts <- arimax(log(rainfall_ts), order = c(1, 0, 1),
                   seasonal = list(order = c(1, 0, 1), period = 12),
                   xreg = exo_ts, method = 'ML')
Below is the result:
> model_ts
Call:
arimax(x = log(rainfall_ts), order = c(1, 0, 1), seasonal = list(order = c(1,
0, 1), period = 12), xreg = exo_ts, method = "ML")
Coefficients:
ar1 ma1 sar1 sma1 intercept xreg
0.5730 -0.4342 0.9996 -0.9764 2.6757 -0.4894
s.e. 0.2348 0.2545 0.0018 0.0508 0.1334 0.1489
sigma^2 estimated as 0.1521: log likelihood = -86.49, aic = 184.99
Although the syntax works, his lecturer expected more.
Theoretically, because he used MLE, he has shown that setting the first derivatives of the log-likelihood function to zero yields only implicit solutions. This means the estimation cannot be completed analytically, so a numerical method is needed to finish the job.
So this is his lecturer's expectation: he should at least convince him that the estimation truly needs to be done numerically, and if so, show which numerical method R uses (Newton-Raphson, BFGS, BHHH, etc.).
But the arimax() function does not offer a choice of numerical method. In the call
model_ts <- arimax(log(rainfall_ts), order = c(1, 0, 1), seasonal = list(order = c(1, 0, 1), period = 12), xreg = exo_ts, method = 'ML')
the method argument selects the estimation criterion, and the available options are ML, CSS, and CSS-ML. The syntax has no argument for the numerical optimisation method, and that is the problem.
So is there any way to find out which numerical method R uses? Or does my friend have to write his own program instead of relying on arimax()?
If there are any errors in my code, please let me know. I also apologize for any grammatical or vocabulary mistakes; English is not my native language.
Some suggestions:
Estimate the model with each of the methods ML, CSS, and CSS-ML. Do the parameter estimates agree?
You can view the source code of the arimax() function by typing arimax, View(arimax), or getAnywhere(arimax) in the console.
Or you can debug: place a breakpoint on the line model_ts <- arimax(...), then source (or debugSource()) your script. You can then step into the arimax function and see for yourself which optimization method it uses; a short sketch of this inspection follows.
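For what it's worth, TSA::arimax is built on stats::arima, which maximises the likelihood with optim() and exposes an optim.method argument defaulting to "BFGS". Whether arimax passes this through is exactly what the source inspection can confirm (a sketch, assuming TSA is installed):
library(TSA)
getAnywhere(arimax)                 # print the full source of arimax
formals(stats::arima)$optim.method  # "BFGS", the default optimiser in stats::arima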
I want to run a regression on my panel data, which has the following format: column 1 has the year, column 2 the company name, and column 3 the EQUITY variable.
Year company name EQUITY
2006 A 12
2007 A 13
2008 A 23
2009 A 24
2010 A 13
2011 A 14
2012 A 12
2013 A 14
2014 A 14
2015 A 15
2006 B 221
2007 B 242
2008 B 262
2009 B 250
2010 B 400
2011 B 411
2012 B 420
2013 B 420
2014 B 422
2015 B 450
I have data for 10 years on 200 companies. I want to regress the log of equity of each company on time (the 10 years), and I want only the slope coefficient.
I want my output like this: column 1 years, column 2 company name, column 3 beta values.
Year company name slope(beta) p-value
2006 A beta value (assumed)
2007 A "
2008 A "
2009 A
2010 A
2011 A
2012 A "
2013 A
2014 A "
2015 A "
I mean the slope coefficient of each company.
I can't see what you've tried so far, so here's a solution to get you up and running. The final output you sketch out doesn't really make sense, since you get one slope per company, not one per company per year.
Here's a base R version for running the regressions: by() is used to split the data by company and lm() for the estimation.
res <- by(indata, indata$company,
          FUN = function(x) coef(lm(log(EQUITY) ~ Year + 0, data = x)))
This results in the following output of the slopes, which can be used for plotting or listing:
> res
indata$company: A
[1] 0.001344837
-------------------------------------------------------
indata$company: B
[1] 0.002896053
Update
If you want to add the slopes to the dataset for each year, you can add
indata$slope <- res[indata$company]
which gives:
> indata
Year company EQUITY slope
1 2006 A 12 0.001344837
2 2007 A 13 0.001344837
3 2008 A 23 0.001344837
4 2009 A 24 0.001344837
5 2010 A 13 0.001344837
6 2011 A 14 0.001344837
7 2012 A 12 0.001344837
8 2013 A 14 0.001344837
9 2014 A 14 0.001344837
10 2015 A 15 0.001344837
11 2006 B 221 0.002896053
12 2007 B 242 0.002896053
13 2008 B 262 0.002896053
14 2009 B 250 0.002896053
15 2010 B 400 0.002896053
16 2011 B 411 0.002896053
17 2012 B 420 0.002896053
18 2013 B 420 0.002896053
19 2014 B 422 0.002896053
20 2015 B 450 0.002896053
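The desired output also lists a p-value column. A sketch of how to extract that as well (same indata assumed): coef(summary(fit)) returns the full coefficient table, whose last column holds the p-values.
res2 <- by(indata, indata$company, FUN = function(x) {
  fit <- lm(log(EQUITY) ~ Year + 0, data = x)
  coef(summary(fit))["Year", c("Estimate", "Pr(>|t|)")]  # slope and its p-value
})
res2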
I have a database with the columns "Year", "Month", "T1", ..., "T31".
For example, df_0 below is the original format, and I want to convert it into new_df (second part):
id0 <- c("Year", "Month", "T_day1", "T_day2", "T_day3", "T_day4", "T_day5")
id1 <- c("2010", "January", 10, 5, 2, 3, 3)
id2 <- c("2010", "February", 20, 36, 5, 8, 1)
id3 <- c("2010", "March", 12, 23, 23, 5, 25)
df_0 <- rbind(id1, id2, id3)
colnames(df_0) <- id0
head(df_0)
I would like to create a new data frame in which the data from T1...T31 for each month and year are joined to a column containing all dates, for example from 1 January 2010 to 4 January 2012:
date<-seq(as.Date("2010-01-01"), as.Date("2012-01-04"), by="days")
or joined as a new column of a data frame based on the values of three other columns (year, month and day):
year <- lapply(strsplit(as.character(date), "\\-"), "[", 1)
month <- lapply(strsplit(as.character(date), "\\-"), "[", 2)
day <- lapply(strsplit(as.character(date), "\\-"), "[", 3)
df <- cbind(year, month, day)
I would like the resulting data frame to hold the information this way:
Year <- rep(2010, 15)
Month <- c(rep("January", 5), rep("February", 5), rep("March", 5))
Day <- rep(c(1, 2, 3, 4, 5), 3)
Value <- c(10, 5, 2, 3, 3, 20, 36, 5, 8, 1, 12, 23, 23, 5, 25)
new_df <- cbind(Year, Month, Day, Value)
head(new_df)
Thanks in advance
What you're looking for is to reshape your data. One option is the melt() function from the reshape2 library:
library(reshape2)
melt(data.frame(df_0), id.vars = c("Year", "Month"))
Based on the data you have, the output would be:
Year Month variable value
1 2010 January T_day1 10
2 2010 February T_day1 20
3 2010 March T_day1 12
4 2010 January T_day2 5
5 2010 February T_day2 36
6 2010 March T_day2 23
7 2010 January T_day3 2
8 2010 February T_day3 5
9 2010 March T_day3 23
10 2010 January T_day4 3
11 2010 February T_day4 8
12 2010 March T_day4 5
13 2010 January T_day5 3
14 2010 February T_day5 1
15 2010 March T_day5 25
You can then convert the variable column to day numbers, depending on how you have formatted that column; see the sketch below.
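A minimal sketch of that last step, assuming the T_dayN naming used above (long_df is just an illustrative name): strip the prefix and coerce what remains to an integer day number.
library(reshape2)
long_df <- melt(data.frame(df_0), id.vars = c("Year", "Month"))
long_df$Day <- as.integer(sub("^T_day", "", long_df$variable))  # "T_day1" -> 1, etc.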
Firstly, I generated my own test data, using a reduced date vector for easier demonstration: 2010-01-01 to 2010-03-04. In my df_0 I generated a value for every date in that vector except the last one, plus a value for one additional date not in the vector: 2010-03-05. It will become clear later why I did this.
set.seed(1)
date <- seq(as.Date('2010-01-01'), as.Date('2010-03-04'), by = 'day')
df_0 <- reshape(setNames(as.data.frame(cbind(
          do.call(rbind, strsplit(strftime(c(date[-length(date)], as.Date('2010-03-05')), '%Y %B %d'), ' ')),
          round(rnorm(length(date)), 3))), c('Year', 'Month', 'Day', 'T_day')),
        dir = 'w', idvar = c('Year', 'Month'), timevar = 'Day')
attr(df_0, 'reshapeWide') <- NULL
df_0
## Year Month T_day.01 T_day.02 T_day.03 T_day.04 T_day.05 T_day.06 T_day.07 T_day.08 T_day.09 T_day.10 T_day.11 T_day.12 T_day.13 T_day.14 T_day.15 T_day.16 T_day.17 T_day.18 T_day.19 T_day.20 T_day.21 T_day.22 T_day.23 T_day.24 T_day.25 T_day.26 T_day.27 T_day.28 T_day.29 T_day.30 T_day.31
## 1 2010 January -0.626 0.184 -0.836 1.595 0.33 -0.82 0.487 0.738 0.576 -0.305 1.512 0.39 -0.621 -2.215 1.125 -0.045 -0.016 0.944 0.821 0.594 0.919 0.782 0.075 -1.989 0.62 -0.056 -0.156 -1.471 -0.478 0.418 1.359
## 32 2010 February -0.103 0.388 -0.054 -1.377 -0.415 -0.394 -0.059 1.1 0.763 -0.165 -0.253 0.697 0.557 -0.689 -0.707 0.365 0.769 -0.112 0.881 0.398 -0.612 0.341 -1.129 1.433 1.98 -0.367 -1.044 0.57 <NA> <NA> <NA>
## 60 2010 March -0.135 2.402 -0.039 <NA> 0.69 <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA> <NA>
The first half of the solution is a reshaping from wide format to long, and can be done with a single call to reshape(). Additionally, I wrapped it in a call to na.omit() to prevent NA values from being generated from the unavoidable NA cells in df_0:
df_1 <- na.omit(reshape(df_0, dir = 'l', idvar = c('Year', 'Month'), timevar = 'Day',
                        varying = grep('^T_day\\.', names(df_0)), v.names = 'Value'))
rownames(df_1) <- NULL
df_1[order(match(df_1$Month, month.name), df_1$Day), ]
## Year Month Day Value
## 1 2010 January 1 -0.626
## 4 2010 January 2 0.184
## 7 2010 January 3 -0.836
## 10 2010 January 4 1.595
## 12 2010 January 5 0.33
## 15 2010 January 6 -0.82
## 17 2010 January 7 0.487
## 19 2010 January 8 0.738
## 21 2010 January 9 0.576
## 23 2010 January 10 -0.305
## 25 2010 January 11 1.512
## 27 2010 January 12 0.39
## 29 2010 January 13 -0.621
## 31 2010 January 14 -2.215
## 33 2010 January 15 1.125
## 35 2010 January 16 -0.045
## 37 2010 January 17 -0.016
## 39 2010 January 18 0.944
## 41 2010 January 19 0.821
## 43 2010 January 20 0.594
## 45 2010 January 21 0.919
## 47 2010 January 22 0.782
## 49 2010 January 23 0.075
## 51 2010 January 24 -1.989
## 53 2010 January 25 0.62
## 55 2010 January 26 -0.056
## 57 2010 January 27 -0.156
## 59 2010 January 28 -1.471
## 61 2010 January 29 -0.478
## 62 2010 January 30 0.418
## 63 2010 January 31 1.359
## 2 2010 February 1 -0.103
## 5 2010 February 2 0.388
## 8 2010 February 3 -0.054
## 11 2010 February 4 -1.377
## 13 2010 February 5 -0.415
## 16 2010 February 6 -0.394
## 18 2010 February 7 -0.059
## 20 2010 February 8 1.1
## 22 2010 February 9 0.763
## 24 2010 February 10 -0.165
## 26 2010 February 11 -0.253
## 28 2010 February 12 0.697
## 30 2010 February 13 0.557
## 32 2010 February 14 -0.689
## 34 2010 February 15 -0.707
## 36 2010 February 16 0.365
## 38 2010 February 17 0.769
## 40 2010 February 18 -0.112
## 42 2010 February 19 0.881
## 44 2010 February 20 0.398
## 46 2010 February 21 -0.612
## 48 2010 February 22 0.341
## 50 2010 February 23 -1.129
## 52 2010 February 24 1.433
## 54 2010 February 25 1.98
## 56 2010 February 26 -0.367
## 58 2010 February 27 -1.044
## 60 2010 February 28 0.57
## 3 2010 March 1 -0.135
## 6 2010 March 2 2.402
## 9 2010 March 3 -0.039
## 14 2010 March 5 0.69
The second part of the solution requires merging the above long-format data.frame with the exact dates you stated you want in the resulting data.frame. This requires a fair amount of scaffolding code to transform the date vector into a data.frame with Year Month Day columns, but once that's done, you can simply call merge() with all.x=T to preserve every date in the date vector whether or not it was present in df_1, and to exclude any date in df_1 that is not also present in the date vector:
df_2 <- merge(transform(setNames(as.data.frame(do.call(rbind,
          strsplit(strftime(date, '%Y %B %d'), ' '))), c('Year', 'Month', 'Day')),
          Day = as.integer(Day)), df_1, all.x = TRUE)
df_2[order(match(df_2$Month, month.name), df_2$Day), ]
## Year Month Day Value
## 29 2010 January 1 -0.626
## 30 2010 January 2 0.184
## 31 2010 January 3 -0.836
## 32 2010 January 4 1.595
## 33 2010 January 5 0.33
## 34 2010 January 6 -0.82
## 35 2010 January 7 0.487
## 36 2010 January 8 0.738
## 37 2010 January 9 0.576
## 38 2010 January 10 -0.305
## 39 2010 January 11 1.512
## 40 2010 January 12 0.39
## 41 2010 January 13 -0.621
## 42 2010 January 14 -2.215
## 43 2010 January 15 1.125
## 44 2010 January 16 -0.045
## 45 2010 January 17 -0.016
## 46 2010 January 18 0.944
## 47 2010 January 19 0.821
## 48 2010 January 20 0.594
## 49 2010 January 21 0.919
## 50 2010 January 22 0.782
## 51 2010 January 23 0.075
## 52 2010 January 24 -1.989
## 53 2010 January 25 0.62
## 54 2010 January 26 -0.056
## 55 2010 January 27 -0.156
## 56 2010 January 28 -1.471
## 57 2010 January 29 -0.478
## 58 2010 January 30 0.418
## 59 2010 January 31 1.359
## 1 2010 February 1 -0.103
## 2 2010 February 2 0.388
## 3 2010 February 3 -0.054
## 4 2010 February 4 -1.377
## 5 2010 February 5 -0.415
## 6 2010 February 6 -0.394
## 7 2010 February 7 -0.059
## 8 2010 February 8 1.1
## 9 2010 February 9 0.763
## 10 2010 February 10 -0.165
## 11 2010 February 11 -0.253
## 12 2010 February 12 0.697
## 13 2010 February 13 0.557
## 14 2010 February 14 -0.689
## 15 2010 February 15 -0.707
## 16 2010 February 16 0.365
## 17 2010 February 17 0.769
## 18 2010 February 18 -0.112
## 19 2010 February 19 0.881
## 20 2010 February 20 0.398
## 21 2010 February 21 -0.612
## 22 2010 February 22 0.341
## 23 2010 February 23 -1.129
## 24 2010 February 24 1.433
## 25 2010 February 25 1.98
## 26 2010 February 26 -0.367
## 27 2010 February 27 -1.044
## 28 2010 February 28 0.57
## 60 2010 March 1 -0.135
## 61 2010 March 2 2.402
## 62 2010 March 3 -0.039
## 63 2010 March 4 <NA>
Notice how 2010-03-04 is included, even though I didn't generate a value for it in df_0, and 2010-03-05 is excluded, even though I did.
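For completeness, a more compact modern alternative (a sketch, assuming the same df_0 and date vector as above, that dplyr and tidyr are available, and R >= 4.0 so the generated columns are character rather than factor): pivot to long form, then left-join onto the full calendar so missing dates appear as NA and extra dates are dropped.
library(dplyr)
library(tidyr)
# Full calendar built from the same date vector
cal <- data.frame(Year  = strftime(date, '%Y'),
                  Month = strftime(date, '%B'),
                  Day   = as.integer(strftime(date, '%d')))
long <- df_0 %>%
  pivot_longer(starts_with('T_day.'), names_to = 'Day', names_prefix = 'T_day.',
               values_to = 'Value', values_drop_na = TRUE) %>%
  mutate(Day = as.integer(Day), Value = as.numeric(Value))
# Keeps every calendar date (NA where no value) and drops dates outside it
df_2_alt <- left_join(cal, long, by = c('Year', 'Month', 'Day'))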
I have managed to aggregate some data into the following:
Month Year Number
1 1 2011 3885
2 2 2011 3713
3 3 2011 6189
4 4 2011 3812
5 5 2011 916
6 6 2011 3813
7 7 2011 1324
8 8 2011 1905
9 9 2011 5078
10 10 2011 1587
11 11 2011 3739
12 12 2011 3560
13 1 2012 1790
14 2 2012 1489
15 3 2012 1907
16 4 2012 1615
I am trying to create a barplot where the bars for the months are next to each other, so for the above example January through April will have two bars (one for 2011 and one for 2012) and the remaining months will have only one bar, for 2011.
I know I have to use beside = TRUE, but I suspect I need to build some sort of matrix for barplot to display this properly, and that is the step I am stuck on; it feels like it should have a very simple solution.
Also, I have this vector: y = c('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'), which I would like to use as names.arg. When I try to use it with the above data I get "undefined columns selected", which I take to mean that y needs 16 entries. How can I fix this?
To use barplot you need to rearrange your data:
dat <- read.table(text = " Month Year Number
1 1 2011 3885
2 2 2011 3713
3 3 2011 6189
4 4 2011 3812
5 5 2011 916
6 6 2011 3813
7 7 2011 1324
8 8 2011 1905
9 9 2011 5078
10 10 2011 1587
11 11 2011 3739
12 12 2011 3560
13 1 2012 1790
14 2 2012 1489
15 3 2012 1907
16 4 2012 1615",sep = "",header = TRUE)
y <- c('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec')
barplot(rbind(dat$Number[1:12], c(dat$Number[13:16], rep(NA, 8))),
        beside = TRUE, names.arg = y)
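An alternative way to build the matrix (a sketch against the same dat): xtabs() cross-tabulates Number by Year and Month, padding the missing 2012 months with 0 rather than NA, so those months get zero-height bars instead of gaps.
m <- xtabs(Number ~ Year + Month, data = dat)
barplot(m, beside = TRUE, names.arg = y[as.integer(colnames(m))],
        legend.text = TRUE)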
Or you can use ggplot2 with the data pretty much as is:
library(ggplot2)
dat$Year <- factor(dat$Year)
dat$Month <- factor(dat$Month)
ggplot(dat, aes(x = Month, y = Number, fill = Year)) +
  geom_bar(stat = "identity", position = "dodge") +  # stat = "identity" so bar height comes from Number
  scale_x_discrete(labels = y)