I am following this tutorial by Rob Hyndman for initialization (additive).
Steps to calculate initial values are specified as:
I am running the above steps manually (with pen and paper) on the data set provided in Rob Hyndman's free online textbook. The values I got after the first two steps are:
I used the same data set in R, but the seasonal output values in R are drastically different (screenshot below).
Not sure what I am doing wrong. Any help would be appreciated.
Another interesting thing I have just observed: the initial level l(t) in the textbook is 33.8, but in the R output it is 48.24, which confirms that I am missing something in my manual calculation.
EDIT:
Here is how I am calculating the moving-average smooth (based on the formula used in Section 2 of this link).
After calculating it, I de-trended the series, meaning original value minus smoothed value.
Then the seasonal values, which are:
S1 = average of the detrended values for Q1
S2 = average of the detrended values for Q2
...
The first two values of your moving average are incorrect. You have assumed that the values prior to the first observation are zero. They are not zero, they are missing, which is quite different. It is impossible to compute the moving average for the first two observations for this reason.
The third and subsequent values of your moving average are only approximately correct because you have rounded the data to one decimal place instead of using the data as provided in the fpp package in R.
The values obtained following this procedure are used as initial values in the optimization within ets(). So the output from ets() will not contain the initial values but the optimized values. The table in the book gives the optimized values. You will not be able to reproduce them using a simple procedure.
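If you want to inspect those optimized values, they are stored in the fitted model's state matrix. A minimal sketch, assuming the forecast package and that y is the quarterly series from the book's example:

library(forecast)

# Fit the additive Holt-Winters model via ets() and look at the
# optimized initial states (the first row of the state matrix)
fit <- ets(y, model = "AAA")   # additive error, trend and seasonality
fit$states[1, ]                # initial level, trend and seasonal states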
However, you can reproduce what is provided by HoltWinters because it does not do any optimization of initial values. Using HoltWinters, the initial seasonal values are given as:
> HoltWinters(y)$fitted[1:4,]
xhat level trend season
[1,] 43.73934 33.21330 1.207739 9.318302
[2,] 28.25863 35.65614 1.376490 -8.774002
[3,] 36.86581 37.57569 1.450688 -2.160566
[4,] 41.87604 38.83521 1.424568 1.616267
(The output in coefficients gives the final states, not the initial states.)
The seasonal indices in the last column can be computed as follows:
y MAsmooth detrend detrend.adj
41.72746 NA NA NA
24.04185 NA NA NA
32.32810 34.41724 -2.089139 -2.160566
37.32871 35.64101 1.687695 1.616267
46.21315 36.82342 9.389730 9.318302
29.34633 38.04890 -8.702575 -8.774002
36.48291 NA NA NA
42.97772 NA NA NA
The last column is the adjusted detrended data (so they add to zero).
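If you want to reproduce these columns in R, the classical-decomposition steps can be sketched as follows (an illustration only, assuming y is the quarterly ts object used above):

# 2x4 centered moving average (weights 1/8, 1/4, 1/4, 1/4, 1/8)
ma <- stats::filter(y, c(0.5, 1, 1, 1, 0.5) / 4, sides = 2)
detrend <- y - ma                                   # detrended data
s <- tapply(detrend, cycle(y), mean, na.rm = TRUE)  # average per quarter
s.adj <- s - mean(s)                                # adjust to add to zero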
I am trying to use MatchIt to perform propensity score matching (PSM) on my panel data, which contains multi-year observations from the same group of companies.
The data describes a list of bonds and the financial data of their issuers, along with bond terms such as issue date, coupon rate, maturity, and the bond type of the bonds they issued. For instance:
Firmnames       Year  ROA  Bond_type
AAPL US Equity  2015  0.3  0
AAPL US Equity  2015  0.3  1
AAPL US Equity  2016  0.3  0
AAPL US Equity  2017  0.3  0
C US Equity     2015  0.3  0
C US Equity     2016  0.3  0
C US Equity     2017  0.3  0
...
I already know how to match the observations on the criteria I want, and I use exact = "Year" to make sure I match observations from the same year. The problem I am facing now is that observations from the same company get matched together, which is not what I want. The code I used:
matchit(Bond_type ~ Year + Amount_Issued + Cpn + Total_Assets_bf + AssetsEquityRatio_bf + Asset_Turnover_bf, data = rdata, method = "nearest", distance = "glm", exact = "Year")
However, as you can see in the second row of my sample, there can be two observations in one year from the same company due to the nature of my study (a company can issue bonds more than once a year). The only difference between them is the Bond_type. Therefore, the matchit() function will of course treat them as the best treatment-control pair and match these two observations together, since they have the same ROA and other matching covariates in that year.
I can think of two ways to solve this:
Remove the observations from the same year and company; however, removing observations might bias the results and ruin the study.
Prevent the matchit() function from matching observations from the same company (i.e., with the same Firmnames).
The second approach would be better since it does not introduce bias; however, I don't know whether it is possible with the matchit() function. I hope someone can give me some advice, or if there is a better solution to this problem, please be so kind as to share it. Thanks in advance!
Note: if there is any further information I should provide, please just let me know. This is my first time asking a question here!
This is not possible with MatchIt at the moment (though it's an interesting idea and not hard to implement, so I may add it as a feature).
In the optmatch package, which performs optimal pair and full matching, there is a constraint that can be added called "anti-exact matching", which sounds like exactly what you want: units with the same value of the anti-exact matching variable will not be matched with each other. This can be implemented using optmatch::antiExactMatch().
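A rough sketch of how that could look with the variables from your question (covariates abbreviated; treat this as an outline, not a drop-in solution):

library(optmatch)

# Mahalanobis distance on the covariates (use a fitted glm instead if
# you want a propensity-score distance)
d <- match_on(Bond_type ~ Amount_Issued + Cpn + Total_Assets_bf,
              data = rdata)
# antiExactMatch() forbids pairs that share a Firmnames value, so
# adding it to the distance rules out within-firm matches
d <- d + antiExactMatch(rdata$Firmnames, z = rdata$Bond_type)
pm <- pairmatch(d, data = rdata)   # optimal 1:1 matching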
In the Matching package, which performs nearest-neighbor and genetic matching, the restrict argument can be supplied to the matching function to forbid certain matches. You could manually create the restriction matrix by flagging all pairs of observations from the same company and then supply that matrix to Match().
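Again as a sketch under the same assumptions (see ?Match for the restrict format: one row per pair of observation numbers, with a negative third column forbidding the match):

library(Matching)

# every pair of observations belonging to the same firm, listed once
same <- which(outer(rdata$Firmnames, rdata$Firmnames, "=="), arr.ind = TRUE)
same <- same[same[, 1] < same[, 2], , drop = FALSE]
restr <- cbind(same, -1)   # -1 marks the pair as not matchable

# propensity score from a logistic model (covariates abbreviated)
ps <- glm(Bond_type ~ Amount_Issued + Cpn + Total_Assets_bf,
          family = binomial, data = rdata)
m <- Match(Tr = rdata$Bond_type, X = ps$fitted.values, restrict = restr)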
I am trying to fit a model so that I can predict next month's number. I am getting a "No data for variable" error even though I have clearly defined the objects that go into the equation.
I tried placing the values in vectors so that one vector could serve as the training data set to predict the new values. The current script has worked for me on a different dataset, but for some reason it isn't working here.
The data is small, so I was wondering if that has anything to do with it. The data is:
Month  io  obs  Units Sold
   12  in    1         114
    1  in    2          29
    2  in    3         105
    3  in    4          30
    4  in    5
I'm trying to predict Units Sold with the code below
matt <- TEST1
# training rows: the four months with an observed Units.Sold
# (note the capitalized column name "Month")
isdf <- na.omit(matt[matt$obs <= 4, ])
# out-of-sample row to predict (obs 5, where Units.Sold is missing)
osdf <- matt[matt$obs == 5, ]
lmfit <- lm(Units.Sold ~ obs + Month, data = isdf)
# predict() expects a data frame of predictors, and the object is
# "lmfit", not "lmFit" -- R is case-sensitive
predict(lmfit, newdata = osdf)
I expect to be able to pass lmfit to predict() and get a prediction.
I am working on a research paper on graph manipulation and I have the following data:
returns 1+returns cum_return price period_ret(step=25)
1 7.804919e-03 1.0078049 0.007804919 100.78355 NA
2 3.560800e-03 1.0035608 0.011393511 101.14306 NA
3 -1.490719e-03 0.9985093 0.009885807 100.99239 NA
. -2.943304e-03 0.9970567 0.006913406 100.69558 NA
. 1.153007e-03 1.0011530 0.008074385 100.81175 NA
. -2.823012e-03 0.9971770 0.005228578 100.52756 NA
25 -7.110762e-03 0.9928892 -0.001919363 99.81526 -0.02364
. -1.807268e-02 0.9819273 -0.019957356 98.02754 NA
. -3.300315e-03 0.9966997 -0.023191805 97.70455 NA
250 5.846750e-03 1.0058467 -0.017480652 98.27748 0.12125
These are 250 daily stock returns, the cumulative return, the price, and the 25-day period returns (returns over days 0-25; 25-50; ...; 225-250).
What I want to do is the following:
I want to rearrange the returns, but the period returns should stay identical even though their order can change. Since there are ten 25-day blocks, there are 10! possible orderings of the subsets.
What I did so far: I wrote code using sample(), repeat and identical(); here is a shortened version:
library(tibble)

repeat {
  temp <- tibble(
    returns = sample(x$returns, 250, replace = TRUE)
  )
  # compare the sorted, rounded period returns of the original and the sample
  if (identical(sort(round(x$period_ret[!is.na(x$period_ret)], 2)),
                sort(round(temp$period_ret[!is.na(temp$period_ret)], 2)))) break
}
This took me quite some time and unfortunately it isn't of any real use. Only later did I think about the math: there are 250! possible samples, so I would spend days waiting for any result.
What do I need this for?
I would like to create graphs with different orderings of the returns. Thus all the graphs have the same summary statistics but look different. It is important that they have the same period returns (regardless of their order) to fulfil a utility formula.
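To illustrate the block idea: shuffling the ten 25-day blocks rather than the individual returns would preserve every 25-day period return exactly, only their order changes, which gives the 10! orderings mentioned above. A rough sketch of what I mean, assuming x$returns holds the 250 daily returns:

library(tibble)

# split the 250 returns into ten consecutive blocks of 25 days
blocks <- split(x$returns, rep(1:10, each = 25))
# permute the block order and stitch the returns back together
shuffled <- unlist(blocks[sample(10)], use.names = FALSE)
temp <- tibble(returns    = shuffled,
               cum_return = cumprod(1 + shuffled) - 1)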
I'm wondering how to use the depmixS4 package for R to run an HMM on a dataset. Which functions would I use to get a classification of a test data set?
I have a file of training data, a file of label data, and a file of test data.
The training data consists of 4620 rows. Each row has 1079 values: 83 windows with 13 values per window, so each row is made up of 83 windows with 13 observations each. Each row of 1079 values is one spoken word, so there are 4620 utterances. In total the data contains only 7 distinct words; each distinct word has 660 different utterances, hence the 4620 rows of words.
So we have words (0-6)
The label file is a list where each row is labeled 0-6, corresponding to which word it is. For example, row 300 is labeled 2, row 450 is labeled 6, and row 520 is labeled 0.
The test file contains about 5000 rows structured exactly like the training data, except there are no labels associated with it.
I want to use an HMM trained on the training data to classify the test data.
How would I use depmixS4 to output a classification of my test data?
I'm looking at :
depmix(response, data=NULL, nstates, transition=~1, family=gaussian(),
prior=~1, initdata=NULL, respstart=NULL, trstart=NULL, instart=NULL,
ntimes=NULL,...)
but I don't know what response refers to or any of the other parameters.
Here's a quick, albeit incomplete, example to get you started, if only to familiarize you with the basic outline. Please note that this is a toy example and it merely scratches the surface of HMM design/analysis. The vignette for the depmixS4 package, for instance, offers quite a lot of context and examples. Meanwhile, here's a brief intro.
Let's say that you wanted to investigate if industrial production offers clues about economic recessions. First, let's load the relevant packages and then download the data from the St. Louis Fed:
library(quantmod)
library(depmixS4)
library(TTR)
fred.tickers <-c("INDPRO")
getSymbols(fred.tickers,src="FRED")
Next, transform the data into rolling 1-year percentage changes to minimize noise, and convert it into data.frame format for analysis in depmixS4:
indpro.1yr <-na.omit(ROC(INDPRO,12))
indpro.1yr.df <-data.frame(indpro.1yr)
Now, let's run a simple HMM model and choose just 2 states--growth and contraction. Note that we're only using industrial production to search for signals:
model <- depmix(response = INDPRO ~ 1,
                family = gaussian(),
                nstates = 2,
                data = indpro.1yr.df,
                transition = ~1)
Now let's fit the resulting model, generate posterior states for analysis, and estimate probabilities of recession. Also, we'll bind the data with dates in an xts format for easier viewing/analysis. (Note the use of set.seed(1), which creates a replicable starting value for the model fitting.)
set.seed(1)
model.fit <- fit(model, verbose = FALSE)
model.prob <- posterior(model.fit)
prob.rec <- model.prob[, 2]
# pass the dates once, via order.by (supplying them twice is an error)
prob.rec.dates <- xts(prob.rec, order.by = as.Date(index(indpro.1yr)))
Finally, let's review and ideally plot the data:
head(prob.rec.dates)
[,1]
1920-01-01 1.0000000
1920-02-01 1.0000000
1920-03-01 1.0000000
1920-04-01 0.9991880
1920-05-01 0.9999549
1920-06-01 0.9739622
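For the plot itself, something along these lines should work (plot.xts ships with the xts package, which quantmod loads):

# probability of the contraction state over time
plot(prob.rec.dates,
     main = "Probability of contraction state (1-yr INDPRO changes)")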
High values (say, above 0.80) suggest that the economy is in recession/contraction.
Again, a very, very basic introduction, perhaps too basic. Hope it helps.
I have been writing code to make it easier to perform a discriminant analysis using the lda function. But there is one step I cannot solve: introducing the name of the categorical column into the code. Imagine we have the following table (called smoke), in which the column Factor represents the groups (in our case, smoker and nsmok).
smoke
Factor Lung Heart Blood
1 smoker 7 22 15
2 smoker 8 21 12
3 nsmok 22 9 5
This is the code I have been preparing. Please look at the XXXX's in the code (they appear twice). I want the name of the categorical column to be filled in there automatically, instead of writing it directly twice.
lda=lda(XXXX~.,data=Smoke)
plot(lda)
lda
lda$counts
lda$svd
lda.p=predict(lda)
Tabla=table(Smoke$XXXX,lda.p$class)
Tabla
diag(prop.table(Tabla, 1))
sum(diag(prop.table(Tabla)))
I thought that writing...
colnames(Table)[1]
...would solve it, but there are still errors when running the code.
Alternatively, I thought that assigning the name directly in this way:
Column_Factor -> Factor
and writing Column_Factor in the two places in the code would solve it. But it doesn't.
Any ideas?
You could do something like this:
library(MASS)
#gets the column name of the factor, maybe check if there is only one factor column first
Column_Factor <- names(Smoke)[sapply(Smoke, class)=="factor"]
#creates the formula by pasting the name and the RHS
lda <- lda(as.formula(paste(Column_Factor,"~.",sep="")),data=Smoke)
plot(lda)
lda
lda$counts
lda$svd
lda.p=predict(lda)
#selects the column using the variable
Tabla=table(Smoke[,Column_Factor],lda.p$class)
Tabla
diag(prop.table(Tabla, 1))
sum(diag(prop.table(Tabla)))
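As a side note, base R's reformulate() builds the same formula a little more readably (same assumptions as above):

# Factor ~ . constructed from the detected column name
form <- reformulate(".", response = Column_Factor)
fit <- lda(form, data = Smoke)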