I trained my neural network with a sigmoid activation function on the output layer, so the predicted values lie in the range [0, 1). However, the real data, to which a z-score transformation has been applied, goes beyond [0, 1). In this case, what would be the appropriate way to evaluate my model? Should I rescale the original test data to the same range and then evaluate with criteria such as the mean squared forecast error?
> real_predicted_neural
predicted real
1 1.909219e-07 -3.57877473
2 4.161819e-08 -2.28704595
3 1.754706e-11 -1.08509429
4 1.149891e-13 -0.46573114
5 7.777560e-02 0.42381300
6 4.173448e-07 -0.44060297
7 1.119703e-01 0.21075550
8 8.682557e-01 -0.01292402
9 4.736056e-08 -0.29830701
10 7.506821e-08 -1.20302227
11 7.341235e-01 -0.03986571
12 7.501776e-05 -0.94315815
13 1.145697e-04 0.49730175
14 2.214929e-13 0.04252241
15 4.597199e-01 -0.38539901
16 2.324931e-03 -0.74468628
17 4.366025e-06 -0.77037244
18 1.394450e-06 0.16679048
19 5.869884e-11 -0.75876486
20 1.817941e-04 0.04303387
21 7.060773e-04 0.06099372
22 8.267170e-06 -1.21687318
23 9.388680e-02 0.61135319
24 1.099290e-01 0.55715201
25 9.757236e-01 -0.33480226
26 9.544055e-01 0.09061006
27 7.322074e-07 0.09290822
28 1.014327e-06 -0.61658893
29 7.848382e-08 -0.78739456
30 1.791908e-04 -0.44073540
31 1.357918e-03 -0.22099008
32 5.192233e-06 -0.32744703
33 2.624779e-06 -0.37644068
34 6.414216e-02 -0.36947939
35 1.388143e-06 -0.00994845
36 3.010872e-05 -0.05984833
37 9.873201e-03 -0.21815268
38 3.896163e-04 -0.24009094
39 2.718760e-02 0.33383333
40 1.025650e-02 0.09779867
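A minimal sketch of the rescaling idea, assuming real_predicted_neural is the data frame shown above: min-max scale the z-scored targets into [0, 1] so they are on the same scale as the sigmoid outputs, then compute the mean squared forecast error. The min-max transform is an assumption for illustration, not a recommendation over inverse-transforming the predictions instead.
real <- real_predicted_neural$real
pred <- real_predicted_neural$predicted
# Min-max scale the z-scored targets into [0, 1] to match the sigmoid range.
real01 <- (real - min(real)) / (max(real) - min(real))
# Mean squared forecast error on the common scale.
mean((pred - real01)^2)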
The envelope of the K function (and transformations of it such as L) is very useful for validating a fitted spatial point process model. For instance, I fit a Poisson model to a data set J1a2, which is as follows:
J1a2.points:
# X.1 X Y
1 1 118.544 1638.445
2 2 325.995 1761.223
3 3 681.625 1553.771
4 4 677.392 1816.261
5 5 986.451 1685.016
6 6 1469.093 1354.787
7 7 1608.805 1625.744
8 8 1994.071 1782.391
9 9 1968.669 1375.955
10 10 2362.403 1337.852
11 11 2701.099 1773.924
12 12 2900.083 1820.495
13 13 2963.588 1668.081
14 14 3412.360 1676.549
15 15 3378.490 1456.396
16 16 3721.420 1464.863
17 17 3823.028 1701.951
18 18 4072.817 1790.859
19 19 4089.751 1388.656
20 20 97.375 715.497
21 21 376.799 1033.025
22 22 563.082 1126.166
23 23 935.647 1206.607
24 24 512.277 486.876
25 25 935.647 757.834
26 26 1409.821 410.670
27 27 1435.223 639.290
28 28 1706.180 1045.726
29 29 1968.669 876.378
30 30 2307.365 711.263
31 31 2624.892 897.546
32 32 2654.528 1236.243
33 33 2857.746 423.371
34 34 3039.795 639.290
35 35 3298.050 707.029
36 36 3111.767 1011.856
37 37 3361.555 1227.775
38 38 4047.414 1185.438
39 39 3569.007 508.045
40 40 4250.632 469.942
41 41 4386.110 872.144
42 42 93.141 237.088
43 43 554.614 186.283
44 44 757.832 148.180
45 45 965.283 220.153
46 46 1723.115 296.360
47 47 1744.283 423.371
48 48 1913.631 203.218
49 49 2167.653 292.126
50 50 2629.126 211.685
51 51 3217.610 283.658
52 52 3827.262 325.996
and the observation window:
J1a2.Win<-owin(c(0, 4500.42),c(0, 1917.87))
If you draw the envelope for the data with Lest():
library(spatstat)
# J1a2 is the point pattern built from the coordinates and window above,
# e.g. J1a2 <- ppp(J1a2.points$X, J1a2.points$Y, window = J1a2.Win)
env.data <- envelope(J1a2, Lest, correction = "border",
                     nsim = 19, global = TRUE)
plot(env.data, . - r ~ r, shade = NULL, legend = FALSE,
     xlab = expression(paste("r (", mu, "m)")), ylab = "L(r) - r", main = "")
The Lest() curve goes outside the envelope. However, if you use Linhom instead of Lest, you will find that the Linhom() curve lies entirely inside the envelope.
This seems to suggest an inhomogeneous intensity for the data, so I use y as a covariate in the fit:
poisson.J1a2 <- ppm(J1a2 ~ 1, Poisson(), correction = "border")
y.J1a2 <- ppm(J1a2 ~ y, correction = "border")
anova(poisson.J1a2, y.J1a2, test = "LR")  # p = 0.6484
I don't find any evidence of a spatial trend in intensity along y, or x, or their combination.
Then why does Linhom() outperform Lest() in this case?
Furthermore, when should one decide to use Linhom() instead of Lest()?
You should first decide whether or not the intensity can be assumed to be constant. To help you with this, you can look at kernel density estimates or carry out formal tests such as a quadrat test. If you decide that the intensity can be assumed to be constant, use Lest(); if not, use Linhom().
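A minimal sketch of that diagnostic step, assuming J1a2 is the ppp object from the question:
library(spatstat)
# Visual check: kernel estimate of the intensity surface.
plot(density(J1a2))
# Formal check: chi-squared quadrat test of homogeneity
# (the 4 x 2 grid is an arbitrary choice for this elongated window).
quadrat.test(J1a2, nx = 4, ny = 2)
If the density surface looks flat and the quadrat test is non-significant, constant intensity is a reasonable working assumption and Lest() applies.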
I'm trying Moran's I and the corresponding correlogram plot in R, but the plot has only one point. I have no idea what is going wrong. The code is based on
<http://rstudio-pubs-static.s3.amazonaws.com/9688_a49c681fab974bbca889e3eae9fbb837.html>
My data, called "coordenata":
resid x y
1 0.07785411 -53.20342 -22.66700
2 -0.28358702 -53.20389 -22.66864
3 -0.64011338 -53.21392 -22.68122
4 1.22071249 -53.21311 -22.72369
5 0.95734778 -53.28469 -22.75289
6 0.35345302 -53.25822 -22.74850
7 -0.68357738 -53.28344 -22.70694
8 -1.24596010 -53.32950 -22.72872
9 -0.19944162 -53.33669 -22.73561
10 0.67544909 -53.36756 -22.80767
11 0.64002961 -53.35947 -22.79958
12 0.04564233 -53.21889 -22.67419
13 0.01618436 -53.24522 -22.70144
14 -2.65436794 -53.23017 -22.69292
15 0.72096256 -53.25539 -22.69978
16 0.89656515 -53.28489 -22.72222
17 1.85358579 -53.33069 -22.79161
18 -0.03590077 -53.33200 -22.78336
19 0.32348975 -53.33494 -22.78586
20 2.06771402 -53.37781 -22.77869
21 -1.02190709 -53.30492 -22.77244
22 -2.02813250 -53.53917 -22.79856
23 -1.20702445 -53.53858 -22.79406
24 -1.24091732 -53.55272 -22.80536
25 -1.13491596 -53.56181 -22.82914
26 -0.82934613 -53.56422 -22.83417
27 1.23418758 -53.60017 -22.85531
28 -1.72808514 -53.65900 -22.97828
29 -0.02144049 -53.65908 -22.97497
30 0.49174568 -53.64597 -22.95439
31 -0.54408149 -53.64217 -22.91033
32 -0.37111342 -53.61447 -22.86269
33 -0.31121931 -53.27153 -22.70036
34 0.32419211 -53.30308 -22.72183
35 1.57980287 -53.33053 -22.72947
36 -1.91156060 -53.34633 -22.74722
37 -0.79036645 -53.23667 -22.68925
The code:
library(sp)   # provides coordinates()
library(ncf)  # correlog() is assumed to come from the ncf package
coordinates(coordenata) <- c("x", "y")
fit2 <- correlog(coordenata$x, coordenata$y, coordenata$resid,
                 increment = 5, resamp = 100, quiet = TRUE)
plot(fit2)
Thanks in advance for any help!
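A minimal sketch of one likely culprit, assuming correlog() is ncf::correlog() and coordenata is still a plain data frame at this point: the coordinates are in decimal degrees and span well under one degree, so increment = 5 puts every pair of points into a single distance class, which yields a one-point correlogram. Setting latlon = TRUE (distances in km) with a smaller increment gives several distance classes; the value 2 here is an arbitrary example.
library(ncf)
# Distances are computed in km on the sphere; 2 km distance classes (arbitrary).
fit2 <- correlog(coordenata$x, coordenata$y, coordenata$resid,
                 increment = 2, resamp = 100, latlon = TRUE, quiet = TRUE)
plot(fit2)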
I'm having trouble putting this little data frame into a plot. I use the plot() function, but it just gives me back a plot whose x axis is not the date in the first column.
> DDDhabd
Mes DDD.1000hab.día
1 Ene-14 0.03564701
2 Feb-14 0.03959695
3 Mar-14 0.04677090
4 Abr-14 0.04928782
5 May-14 0.03783808
6 Jun-14 0.04939231
7 Jul-14 0.05464189
8 Ago-14 0.05208003
9 Set-14 0.05475650
10 Oct-14 0.05290589
11 Nov-14 0.05714252
12 Dic-14 0.05056313
13 Ene-15 0.05688352
14 Feb-15 0.05710022
15 Mar-15 0.05754084
16 Abr-15 0.04362755
17 May-15 0.06209153
18 Jun-15 0.05715994
19 Jul-15 0.04373711
20 Ago-15 0.02462424
21 Set-15 0.03812404
22 Oct-15 0.08368198
23 Nov-15 0.07506378
24 Dic-15 0.05974877
I would really appreciate it if you could give me a hint about where my mistake is.
Thanks!
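A minimal sketch, assuming DDDhabd is the data frame shown above: plot() cannot know that strings like "Ene-14" are dates, so the Spanish month abbreviations are first mapped to month numbers and converted to Date objects, then used on the x axis. The column name fecha is a hypothetical addition.
# Map Spanish month abbreviations to month numbers.
meses <- c(Ene = 1, Feb = 2, Mar = 3, Abr = 4, May = 5, Jun = 6,
           Jul = 7, Ago = 8, Set = 9, Oct = 10, Nov = 11, Dic = 12)
partes <- strsplit(as.character(DDDhabd$Mes), "-")
DDDhabd$fecha <- as.Date(sprintf("20%s-%02d-01",
                                 sapply(partes, `[`, 2),
                                 as.integer(meses[sapply(partes, `[`, 1)])))
# Plot with real dates on the x axis; the second column holds the DDD values.
plot(DDDhabd$fecha, DDDhabd[[2]], type = "b",
     xlab = "Mes", ylab = "DDD/1000 hab/día")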
df.sorted <- c("binned_walker1_1.grd", "binned_walker1_2.grd", "binned_walker1_3.grd",
"binned_walker1_4.grd", "binned_walker1_5.grd", "binned_walker1_6.grd",
"binned_walker2_1.grd", "binned_walker2_2.grd", "binned_walker3_1.grd",
"binned_walker3_2.grd", "binned_walker3_3.grd", "binned_walker3_4.grd",
"binned_walker3_5.grd", "binned_walker4_1.grd", "binned_walker4_2.grd",
"binned_walker4_3.grd", "binned_walker4_4.grd", "binned_walker4_5.grd",
"binned_walker5_1.grd", "binned_walker5_2.grd", "binned_walker5_3.grd",
"binned_walker5_4.grd", "binned_walker5_5.grd", "binned_walker5_6.grd",
"binned_walker6_1.grd", "binned_walker7_1.grd", "binned_walker7_2.grd",
"binned_walker7_3.grd", "binned_walker7_4.grd", "binned_walker7_5.grd",
"binned_walker8_1.grd", "binned_walker8_2.grd", "binned_walker9_1.grd",
"binned_walker9_2.grd", "binned_walker9_3.grd", "binned_walker9_4.grd",
"binned_walker10_1.grd", "binned_walker10_2.grd", "binned_walker10_3.grd")
One would expect the order of this vector to be 1:length(df.sorted), but that appears not to be the case. It looks like R sorts the vector according to its own (lexicographic) logic while still displaying it the way it was created (as seen in the output).
order(df.sorted)
[1] 37 38 39 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22
[26] 23 24 25 26 27 28 29 30 31 32 33 34 35 36
Is there a way to "reset" the ordering to 1:length(df.sorted)? That way the ordering and the output of the vector would be in sync.
Use the mixedsort() or mixedorder() functions from the gtools package:
require(gtools)
mixedorder(df.sorted)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
[28] 28 29 30 31 32 33 34 35 36 37 38 39
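As a usage note, the permutation from mixedorder() can be applied back to the vector, or mixedsort() used directly:
df.sorted[mixedorder(df.sorted)]  # reorder the vector itself
mixedsort(df.sorted)              # same result in one step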
Construct it as an ordered factor:
> df.new <- ordered(df.sorted,levels=df.sorted)
> order(df.new)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
EDIT:
After @DWin's comment, I want to add that it is not even necessary to make it an ordered factor; just a factor is enough if you give the right order of levels:
> df.new2 <- factor(df.sorted,levels=df.sorted)
> order(df.new2)
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 ...
The difference will be noticeable when you use those factors in a regression analysis; they can be treated differently. The advantage of ordered factors is that they let you use comparison operators such as < and >. This sometimes makes life a lot easier.
> df.new2[5] < df.new2[10]
[1] NA
Warning message:
In Ops.factor(df.new2[5], df.new2[10]) : < not meaningful for factors
> df.new[5] < df.new[10]
[1] TRUE
Isn't this simply the same thing you get with all lexicographic sorts (e.g. ls on directories), where walker10_foo sorts higher than walker1_foo?
The easiest way around it, in my book, is to use a consistent number of digits, i.e. I would change the names to binned_walker01_1.grd and so on, inserting a 0 for the one-digit counts.
In response to DWin's comment on Dirk's answer: the data are always putty in your hands. "This is R. There is no if. Only how." -- Simon Blomberg
You can add a leading 0 like so:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
If you needed to pad to three digits (adding 00), you would do it like this:
df.sorted <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
df.sorted <- gsub("(walker)([[:digit:]]{2}_)", "\\10\\2", df.sorted)
...and so on.
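As a quick check (a sketch assuming the sub-indices after the second underscore stay single-digit), plain order() agrees with the intended sequence once the walker numbers are padded:
df.padded <- gsub("(walker)([[:digit:]]{1}_)", "\\10\\2", df.sorted)
identical(order(df.padded), seq_along(df.padded))  # TRUE if padding sufficed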