I am having a problem calculating the forecast accuracy on the test set.
train_ts <- ts(head(t$value, 141), frequency = 7)  # this is the train set (first 141 rows)
fit <- auto.arima(train_ts)
forecasts <- forecast(fit, h = 12)
vector <- ts(tail(t$value, 12), frequency = 7)  # this is the test set (last 12 rows)
accuracy(forecasts, vector)  # I try to calculate accuracy
And I get this error:
Error in window.default(x, ...) : 'start' cannot be after 'end'
In addition: Warning message:
In window.default(x, ...) : 'start' value not changed
Result of forecasting:
Point Forecast Lo 80 Hi 80 Lo 95 Hi 95
191 4742.038402 3781.130910 5702.945894 3272.457210 6211.619593
192 5068.467231 4105.169285 6031.765177 3595.230155 6541.704307
193 5233.951079 4270.487205 6197.414954 3760.460238 6707.441921
194 4883.850503 3910.172814 5857.528191 3394.738981 6372.962025
195 4857.666612 3883.140593 5832.192631 3367.257681 6348.075543
196 5180.408585 4203.616284 6157.200886 3686.533674 6674.283496
197 5091.348011 4112.687519 6070.008503 3594.615948 6588.080074
198 4833.290365 3848.222297 5818.358433 3326.758761 6339.821969
199 5003.034291 4017.771775 5988.296807 3496.205304 6509.863278
200 5175.020752 4189.555595 6160.485908 3667.881854 6682.159650
201 4963.008654 3972.665298 5953.352010 3448.409193 6477.608114
202 4882.858876 3890.856391 5874.861360 3365.721997 6399.995754
vector:
Time Series:
Start = 1
End = 12
Frequency = 1
[1] 5243 5010 5374 4952 6911 4260 6063 5597 4536 5522 4254 5048
How can I fix my error or how can I calculate accuracy correctly?
Example data (t$value):
[1] 5564 6657 7184 6456 5597 5951 6771 5990 6289 6885 6171 4739 5737 5950 6721
[16] 6579 6763 6829 5779 5346 5652 6319 6407 7232 6600 6244 5631 5198 6360 7922
[31] 6035 4221 4361 4475 5585 4845 5958 6833 3617 5036 4560 3820 5724 6352 5773
[46] 6200 4378 5614 5165 6345 5769 6228 6378 4827 4402 5829 4880 6333 6406 434
[61] 4754 4303 5498 5048 6042 6664 5492 5684 6194 5349 5846 5916 5069 5071 4367
[76] 5381 5694 5731 6029 5639 5539 4490 5223 5436 5819 941 6576 5235 3574 6319
[91] 5063 5765 5919 6006 5479 3653 4281 5433 4851 5543 5995 5049 4728 5449 5728
[106] 6009 5378 5730 5206 4764 5458 5970 5254 5653 5539 1907 4438 5421 5529 5225
[121] 6158 5572 4777 4575 5275 4742 5648 5198 5624 4781 3959 4368 5478 4681 5288
[136] 5758 4540 3899 5760 4797 5580 5433 4898 4473 3566 4779 4897 5099 5866 6231
[151] 4982 4375 5976
Firstly, something seems off in the forecast output you posted: it starts at point 191, which means the fitted series ended at 190, but that doesn't match the code you posted.
Regardless, DatamineR is correct in the comments. You are providing two time series with different time ranges. The forecast function picks up where the fitted series left off, but ts(tail(t$value,12), frequency=7) creates a new time series that starts at 1.
One option is to convert one (or both) into numeric vectors, as DatamineR suggested. Alternatively, you can set the start time of your test set to the correct value with something like:
vector <- ts(tail(t$value, 12), start = end(train_ts) + c(0, 1), frequency = 7)
where end(train_ts) gives the last time point of the training series, and adding c(0, 1) advances it by one step within the same cycle to set the start time of the test series.
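Both fixes can be sketched together as follows, assuming the forecast package is loaded and `t` is the data frame from the question (with 153 observations in t$value):

```r
library(forecast)

train_ts  <- ts(head(t$value, 141), frequency = 7)
fit       <- auto.arima(train_ts)
forecasts <- forecast(fit, h = 12)

# Option 1: compare against a plain numeric vector --
# accuracy() only needs the values for the test-set measures.
test_vals <- as.numeric(tail(t$value, 12))
accuracy(forecasts, test_vals)

# Option 2: build the test ts so its time index continues
# where the training series ends.
test_ts <- ts(tail(t$value, 12), start = end(train_ts) + c(0, 1), frequency = 7)
accuracy(forecasts, test_ts)
```

Either way, accuracy() then reports both a "Training set" and a "Test set" row.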
I have a list with indices like this:
> mid_cp
[1] 3065 4871 13153 15587 18100 24010 26324 25648 38195 38196 39384 42237 45686 54217 55032 63684 62800 9134 35261 36449 36866 53968 16969
[24] 43529 46995 52351 4174 7011 18962 18151 18889 24036 32916 34061 34815 36866 51973 55802 53593 55421 56615 88 150 161 192 781
[47] 830 1300 1573 2396 2784 2547 3214 3135 3297 3301 4053 4249 4919 5856 6297 7328 7621 7708 8063 8219 8864 8887 9201
[70] 9214 9533 10334 10301 11235 10529 11356 10566 10872 12228 12250 12507 12048 12643 12913 13224 14297 16772 15363 18759 18979 16264 17363
[93] 20732 17971 22194 22422 19417 22903 22929 23087 19627 19961 23954 24297 25422 25423 25704 25765 25780 22769 22796 26871 27095 23789 24066
[116] 24069 27423 24366 24600 24871 25110 28374 26280 27873 29722 28839 29063 31031 31150 31546 32491 30356 33045 30863 33555 34201 34404 34684
[139] 35498 32912 33207 35874 33488 33716 36761 34543 36807 37000 35157 38195 38196 38458 36438 36619 39484 40109 37532 40143 40160 40458 41257
[162] 38434 38653 41866 41899 39429 42818 40001 43398 43441 40282 40566 43979 43996 40793 40806 40992 41065 41102 41330 41964 46322 43351 46670
and I have a table like this:
> head(movie.cp)
name id
252 $ (Dollars) (The Heist) 252
253 $5 a Day (Five Dollars a Day) 253
1 $9.99 1
254 $windle (Swindle) 254
255 "BBC2 Playhouse" Caught on a Train 255
256 "Independent Lens" Race to Execution 256
How do I get the mid_cp list to be a list of names using the movie.cp table?
P.S.: I am a complete newbie regarding R.
Are the numbers in mid_cp equivalent to movie.cp$id? If so, try mid_cp <- movie.cp$name[match(mid_cp, movie.cp$id)]
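A toy illustration of that match() lookup, using a few made-up rows in the shape of movie.cp (the ids and names here are just stand-ins):

```r
# Hypothetical lookup table in the shape of movie.cp
movie.cp <- data.frame(
  name = c("$9.99", "$ (Dollars) (The Heist)", "$windle (Swindle)"),
  id   = c(1, 252, 254),
  stringsAsFactors = FALSE
)
mid_cp <- c(254, 1, 252, 254)

# match() gives, for each id in mid_cp, its row position in movie.cp$id,
# and that position vector then indexes into movie.cp$name
mid_cp <- movie.cp$name[match(mid_cp, movie.cp$id)]
# mid_cp is now c("$windle (Swindle)", "$9.99",
#                 "$ (Dollars) (The Heist)", "$windle (Swindle)")
```

Note that match() keeps the order (and any repeats) of mid_cp, which is exactly what you want for a lookup.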
I am using the R package mRMRe for feature selection and trying to get the indices of most common feature from the results of ensemble:
ensemble <- mRMR.ensemble(data = dd, target_indices = target_idx,solution_count = 5, feature_count = 30)
features_indices = as.data.frame(solutions(ensemble))
This gives me the data below:
MR_1 MR_2 MR_3 MR_4 MR_5
2793 2794 2796 2795 2918
1406 1406 1406 1406 1406
2798 2800 2798 2798 2907
2907 2907 2907 2907 2800
2709 2709 2709 2709 2709
1350 2781 1582 1350 1582
2781 1350 2781 2781 636
2712 2712 2712 2712 2781
636 636 636 636 2779
2067 2067 2067 2067 2712
2328 2328 2357 2357 2067
2357 783 2328 2328 2328
772 2357 772 772 772
I want to use some sort of voting logic to select the most frequent index for each row across all columns.
For example, in the above output:
1. For the first row there is no repeated value - so select the first one.
2. There are rows where the maximum occurrence is 2 - so select that value.
3. In case of a tie - check if any value occurs three times; if yes, select that one, otherwise select the first-occurring of the tied indices.
Maybe I am making it too complex, but basically I want to select the best index for each row of the dataframe.
Can someone please help me on this?
Here's a simple solution using apply:
apply(df, 1, function(x) { names(which.max(table(x))) })
which gives:
[1] "2793" "1406" "2798" "2907" "2709" "1350" "2781" "2712" "636" "2067" "2328" "2328" "772"
For each row, the function table counts occurrences of each unique element, then we return the name of the element with the maximum number of occurrences (if there is a tie, the first one is selected).
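One caveat: table() returns its counts in sorted order, so "the first one" on a tie is the first value in that sorted order, not the value that occurs first in the row. If you want the questioner's rule 3 exactly (first-occurring among the tied values), here is a sketch of a variant, where `df` is assumed to be the data frame of solution indices:

```r
vote_row <- function(x) {
  counts  <- table(x)
  winners <- names(counts)[counts == max(counts)]
  # break ties by position in the row, not by table()'s sorted order
  x[min(match(winners, as.character(x)))]
}
# apply(df, 1, vote_row)
```

For the data shown above this happens to give the same result as the which.max() one-liner, but the tie-breaking differs in general.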
I have an undirected network with weighted edges that I am working with in igraph.
For a particular graph out, I can calculate communities using the function edge.betweenness.community and the betweenness of each edge using edge.betweenness.
I can tell igraph to include the weights of each edge by writing the following:
largest <- which.max(sapply(modules, vcount))
out <- modules[largest][[1]]
bt <- edge.betweenness(out, weights = E(out)$value, directed = FALSE)
Returning:
bt
[1] 20.0 11.0 27.0 11.0 8.0 12.0 8.0 8.5 7.5 6.0 3.0 3.0 7.0 8.5 7.5 4.0 11.0
Where the weights are:
E(out)$value
[1] 0.2829 0.2880 0.2997 0.1842 0.2963 0.2714 0.2577 0.2850 0.2850 0.2577 0.2305 0.2305 0.2577 0.1488 0.1488 0.1215 0.2997
The weights in this case have limits 0 - 1, where 1 = highest cost to traverse an edge, 0 = lowest cost. However, these limits do not get passed to igraph in any of the betweenness calculations.
My question: How does igraph evaluate the lower and upper limits of the listed weights in terms of normalisation?
Does it automatically scale the weights based on the min and max values of the specified weights? (in this case min = 0.1215, max = 0.2997)
What I want: How do I tell it to take the true limits of the full data set (min=0 - max=1) into account?
Additional Information:
If I multiply the weights E(out)$value by some constant and recalculate the betweenness, I get a similar answer (I assume there is some floating-point error and they are in fact the same):
new_weights <- as.numeric(E(out)$value*2.5)
new_weights
[1] 0.70725 0.72000 0.74925 0.46050 0.74075 0.67850 0.64425 0.71250 0.71250 0.64425 0.57625 0.57625 0.64425 0.37200 0.37200 0.30375 0.74925
bt <- edge.betweenness(out, weights = new_weights, directed = FALSE)
Giving:
bt
[1] 20 11 27 11 8 12 8 8 8 6 3 3 7 8 8 4 11
This implies there is some auto-scaling going on.
With this in mind, how do I manually scale the betweenness calculation to my required limits of 0 and 1?
Research:
Edit 4 June 2016 -
I have tried to review the source code for edge.betweenness on the igraph R GitHub page https://github.com/igraph/rigraph/tree/dev/R
The closest function I could find was cluster_edge_betweenness at https://github.com/igraph/rigraph/blob/dev/R/community.R
This function makes a call to the C function C_R_igraph_community_edge_betweenness. The closest reference to this I could find in the igraph C documentation is igraph_community_edge_betweenness at https://github.com/igraph/igraph/blob/master/include/igraph_community.h
Neither of these links, however, makes any reference to how the limits of the weights are handled.
Original Research:
I have looked through the igraph documentation on betweenness algorithms, and explored other questions related to normalisation, but have found nothing that deals specifically with the normalisation of the weights themselves.
Modularity calculation for weighted graphs in igraph
Calculation of betweenness in iGraph
http://igraph.org/r/doc/betweenness.html
The network data and visualisation are as follows:
plot(out)
get.data.frame(out)
from to value sourceID targetID
1 74 80 0.2829 255609 262854
2 74 61 0.2880 255609 179585
3 80 1085 0.2997 262854 3055482
4 1045 1046 0.1842 2970629 2971615
5 1046 1085 0.2963 2971615 3055482
6 1046 1154 0.2714 2971615 3087803
7 1085 1154 0.2577 3055482 3087803
8 1085 1187 0.2850 3055482 3101131
9 1085 1209 0.2850 3055482 3110186
10 1154 1243 0.2577 3087803 3130848
11 1154 1187 0.2305 3087803 3101131
12 1154 1209 0.2305 3087803 3110186
13 1154 1244 0.2577 3087803 3131379
14 1243 1187 0.1488 3130848 3101131
15 1243 1209 0.1488 3130848 3110186
16 1243 1244 0.1215 3130848 3131379
17 1243 1281 0.2997 3130848 3255811
(The weights in this case are in the frame$value column with limits 0 - 1, where 1 = highest cost to traverse an edge, 0 = lowest cost)
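As a side note, the "similar answer" after multiplying by 2.5 is expected rather than evidence of auto-scaling: weighted shortest paths depend only on the relative ordering of path lengths, and any uniform positive scaling of the weights preserves that ordering, so weighted edge betweenness is unchanged. A minimal check on a toy graph, assuming the igraph package is available:

```r
library(igraph)

# Small toy graph with arbitrary weights in (0, 1)
g <- graph_from_literal(A - B, B - C, A - C, C - D)
w <- c(0.2, 0.3, 0.4, 0.1)

b1 <- edge.betweenness(g, weights = w, directed = FALSE)
b2 <- edge.betweenness(g, weights = w * 2.5, directed = FALSE)
all.equal(b1, b2)  # TRUE: rescaling the weights does not change the result
```

This also means that rescaling the weights to the full [0, 1] range would not change the betweenness values themselves, only the weights.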
I have a numeric matrix from which I want to retrieve the index given specific values.
I am trying the which() function to find values in the matrix.
The problem is that some values are found and some are not.
My matrix is as follows:
x_lat <- as.double(seq(48.0 ,60.0, by=0.1))
y_long <- as.double(seq(-10.0 ,2.0, by=0.1))
xv <- as.double(rep(x_lat,each = 121))
yv <- as.double(rep(y_long, 121))
vMatrix <- as.matrix(cbind(xv,yv))
If I want to retrieve the indices where the value -2.3 appears, the function correctly returns a vector of those indices.
xx<- which(vMatrix==-2.3,arr.ind=TRUE)
> xx
[1] 78 199 320 441 562 683 804 925 1046 1167 1288 1409 1530 1651 1772 1893 2014 2135 2256 2377 2498
[22] 2619 2740 2861 2982 3103 3224 3345 3466 3587 3708 3829 3950 4071 4192 4313 4434 4555 4676 4797 4918 5039
[43] 5160 5281 5402 5523 5644 5765 5886 6007 6128 6249 6370 6491 6612 6733 6854 6975 7096 7217 7338 7459 7580
[64] 7701 7822 7943 8064 8185 8306 8427 8548 8669 8790 8911 9032 9153 9274 9395 9516 9637 9758 9879 10000 10121
[85] 10242 10363 10484 10605 10726 10847 10968 11089 11210 11331 11452 11573 11694 11815 11936 12057 12178 12299 12420 12541 12662
[106] 12783 12904 13025 13146 13267 13388 13509 13630 13751 13872 13993 14114 14235 14356 14477 14598
But for some numbers (that appear in the matrix) the function does not work, e.g.,
xx<- which(vMatrix==-2.2,arr.ind=TRUE)
> xx
integer(0)
Floating-point numbers can be misleading. Two such numbers are usually not "equal", even though the console may display the same output, because the machine can represent them only with finite precision.
Here's a simple example:
a <- 0.15 - 1/8
b <- 0.025
> a
[1] 0.025
> b
[1] 0.025
However, if we compare these numbers with "==", we obtain:
> a==b
[1] FALSE
That is because floating-point arithmetic leaves differences that are below the display precision:
> a-b
[1] -6.938894e-18
You can probably resolve the issue by simply rounding the numbers in the matrix to the required number of digits, e.g.:
xx<- which(round(vMatrix,3)==-2.2,arr.ind=TRUE)
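An alternative to rounding is to compare with an explicit tolerance, which addresses the representation issue directly. A sketch, rebuilding the matrix from the question:

```r
x_lat  <- seq(48.0, 60.0, by = 0.1)
y_long <- seq(-10.0, 2.0, by = 0.1)
xv <- rep(x_lat, each = 121)
yv <- rep(y_long, 121)
vMatrix <- cbind(xv, yv)

# Instead of vMatrix == -2.2, test closeness within a tolerance
tol <- 1e-9
xx <- which(abs(vMatrix - (-2.2)) < tol, arr.ind = TRUE)
nrow(xx)  # 121 matches, one per latitude, all in the yv column
```

This is the same idea that all.equal() uses for scalars, and it keeps working even when the stored value differs from the literal by a few ulps.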
I plotted my data and successfully suppressed the automatic x-axis labeling.
Now I'm using the following command to customize my x-axis labels:
axis(
1,
at = min(LoopVariable[ ,"TW"]) - 1 : max(LoopVariable[ ,"TW"]) + 1,
labels = min(LoopVariable[ ,"TW"]) - 1 : max(LoopVariable[ ,"TW"]) + 1,
las = 2
)
And I'm getting:
This is correct in the sense that I have 28 data points, but when I do:
LoopVariable[ ,"TW"]
Then I get:
[1] 2801 2808 2813 2825 2833 2835 2839 2840 2844 2856 2858 2863 2865 2868 2870 2871 2873 2879 2881 2903 2904 2914 2918 2947 2970 2974 2977 2986
These are the values I want as x-axis labels rather than 1:28. There is obviously something missing in my line that I can't figure out.
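Two things seem to be going on here. First, `:` binds tighter than binary `-` and `+` in R, so the `at` argument is parsed as `min(...) - (1:max(...)) + 1`, a long sequence rather than the intended range; it needs explicit parentheses. Second, to show the TW values you put the ticks at the plotted x positions and pass the TW values as labels. A sketch, assuming the data were plotted against the TW values themselves (e.g. `plot(tw, y, xaxt = "n")`), with a short stand-in for LoopVariable[, "TW"]:

```r
# Precedence pitfall: `:` binds tighter than binary `-`/`+`
stopifnot(identical(5 - 1:3 + 1, c(5, 4, 3)))  # parsed as 5 - (1:3) + 1
# The intended range needs parentheses: (min(x) - 1):(max(x) + 1)

# Label the ticks at the plotted positions with the TW values
tw <- c(2801, 2808, 2813)          # stand-in for LoopVariable[, "TW"]
plot(tw, seq_along(tw), xaxt = "n")
axis(1, at = tw, labels = tw, las = 2)
```

If instead the data were plotted against the index 1:28, the call would be `axis(1, at = seq_along(tw), labels = tw, las = 2)`.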