I created a null logistic model in RStudio.
nullModel <- glm(train$bigFire ~ 1, data = train, family = binomial)
Then I asked the model to make predictions on the test set.
nullModel.pred <- predict(nullModel, test, type = "response")
At this point I want to compute the confusion matrix to evaluate the model's performance.
CM <- table(test$bigFire, nullModel.pred>0.5)
The resulting output is the following:
    TRUE
  0   58
  1   46
Even if I change the cutoff value (currently set to 0.5), the result is always the same. I don't understand why, since the model should behave differently with different cutoff values.
The dataset is the following:
month day FFMC DMC DC ISI temp RH wind rain zone bigFire
1 mar fri 86.2 26.2 94.3 5.1 8.2 51 6.7 0.0 75 0
2 oct tue 90.6 35.4 669.1 6.7 18.0 33 0.9 0.0 74 0
3 oct sat 90.6 43.7 686.9 6.7 14.6 33 1.3 0.0 74 0
4 mar fri 91.7 33.3 77.5 9.0 8.3 97 4.0 0.2 86 0
5 mar sun 89.3 51.3 102.2 9.6 11.4 99 1.8 0.0 86 0
6 aug sun 92.3 85.3 488.0 14.7 22.2 29 5.4 0.0 86 0
It has 517 rows.
The train and test sets are generated from this data frame with an 80% train / 20% test split (104 test rows).
The length of the prediction vector is:
> length(nullModel.pred)
[1] 104
and it always contains the same value: 0.542.
This is reasonable, since the null model can only estimate the overall probability that the response equals 1.
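In fact, because every prediction is the same 0.542, any cutoff can only flip the whole column between TRUE and FALSE; the 58/46 row counts can never split. A quick check (a sketch using the objects above):
table(test$bigFire, nullModel.pred > 0.5) # one column, labelled TRUE
table(test$bigFire, nullModel.pred > 0.6) # one column, labelled FALSE
unique(nullModel.pred)                    # a single value, ~0.542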
I have a weather dataset and my data is date-dependent.
I want to predict the temperature from 07 May 2008 until 18 May 2008 (roughly 10-15 observations); my data size is around 200.
I will be using decision trees/RF, SVM, and NN to make my predictions.
I've never handled data like this, so I'm not sure how to sample non-random data.
I want to split the data into 80% train and 20% test, but sampled in the original order, not randomly. Is that possible?
install.packages("rattle")
install.packages("RGtk2")
library("rattle")
seed <- 42
set.seed(seed)
fname <- system.file("csv", "weather.csv", package = "rattle")
dataset <- read.csv(fname, encoding = "UTF-8")
dataset <- dataset[1:200,]
dataset <- dataset[order(dataset$Date),]
set.seed(321)
sample_data = sample(nrow(dataset), nrow(dataset)*.8)
test <- dataset[sample_data,]   # note: this is actually the 80% random sample
train <- dataset[-sample_data,] # and this the remaining 20%
Output:
> head(dataset)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
1 2007-11-01 Canberra 8.0 24.3 0.0 3.4 6.3 NW 30
2 2007-11-02 Canberra 14.0 26.9 3.6 4.4 9.7 ENE 39
3 2007-11-03 Canberra 13.7 23.4 3.6 5.8 3.3 NW 85
4 2007-11-04 Canberra 13.3 15.5 39.8 7.2 9.1 NW 54
5 2007-11-05 Canberra 7.6 16.1 2.8 5.6 10.6 SSE 50
6 2007-11-06 Canberra 6.2 16.9 0.0 5.8 8.2 SE 44
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
1 SW NW 6 20 68 29 1019.7
2 E W 4 17 80 36 1012.4
3 N NNE 6 6 82 69 1009.5
4 WNW W 30 24 62 56 1005.5
5 SSE ESE 20 28 68 49 1018.3
6 SE E 20 24 70 57 1023.8
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
1 1015.0 7 7 14.4 23.6 No 3.6 Yes
2 1008.4 5 3 17.5 25.7 Yes 3.6 Yes
3 1007.2 8 7 15.4 20.2 Yes 39.8 Yes
4 1007.0 2 7 13.5 14.1 Yes 2.8 Yes
5 1018.5 7 7 11.1 15.4 Yes 0.0 No
6 1021.7 7 5 10.9 14.8 No 0.2 No
> head(test)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
182 2008-04-30 Canberra -1.8 14.8 0.0 1.4 7.0 N 28
77 2008-01-16 Canberra 17.9 33.2 0.0 10.4 8.4 N 59
88 2008-01-27 Canberra 13.2 31.3 0.0 6.6 11.6 WSW 46
58 2007-12-28 Canberra 15.1 28.3 14.4 8.8 13.2 NNW 28
96 2008-02-04 Canberra 18.2 22.6 1.8 8.0 0.0 ENE 33
126 2008-03-05 Canberra 12.0 27.6 0.0 6.0 11.0 E 46
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
182 E N 2 19 80 40 1024.2
77 N NNE 15 20 58 62 1008.5
88 N WNW 4 26 71 28 1013.1
58 NNW NW 6 13 73 44 1016.8
96 SSE ENE 7 13 92 76 1014.4
126 SSE WSW 7 6 69 35 1025.5
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
182 1020.5 1 7 5.3 13.9 No 0.0 No
77 1006.1 6 7 24.5 23.5 No 4.8 Yes
88 1009.5 1 4 19.7 30.7 No 0.0 No
58 1013.4 1 5 18.3 27.4 Yes 0.0 No
96 1011.5 8 8 18.5 22.1 Yes 9.0 Yes
126 1022.2 1 1 15.7 26.2 No 0.0 No
> head(train)
Date Location MinTemp MaxTemp Rainfall Evaporation Sunshine WindGustDir WindGustSpeed
7 2007-11-07 Canberra 6.1 18.2 0.2 4.2 8.4 SE 43
9 2007-11-09 Canberra 8.8 19.5 0.0 4.0 4.1 S 48
11 2007-11-11 Canberra 9.1 25.2 0.0 4.2 11.9 N 30
16 2007-11-16 Canberra 12.4 32.1 0.0 8.4 11.1 E 46
22 2007-11-22 Canberra 16.4 19.4 0.4 9.2 0.0 E 26
25 2007-11-25 Canberra 15.4 28.4 0.0 4.4 8.1 ENE 33
WindDir9am WindDir3pm WindSpeed9am WindSpeed3pm Humidity9am Humidity3pm Pressure9am
7 SE ESE 19 26 63 47 1024.6
9 E ENE 19 17 70 48 1026.1
11 SE NW 6 9 74 34 1024.4
16 SE WSW 7 9 70 22 1017.9
22 ENE E 6 11 88 72 1010.7
25 SSE NE 9 15 85 31 1022.4
Pressure3pm Cloud9am Cloud3pm Temp9am Temp3pm RainToday RISK_MM RainTomorrow
7 1022.2 4 6 12.4 17.3 No 0.0 No
9 1022.7 7 7 14.1 18.9 No 16.2 Yes
11 1021.1 1 2 14.6 24.0 No 0.2 No
16 1012.8 0 3 19.1 30.7 No 0.0 No
22 1008.9 8 8 16.5 18.3 No 25.8 Yes
25 1018.6 8 2 16.8 27.3 No 0.0 No
I'll use mtcars as an example. One option for splitting your data non-randomly into train and test is to first compute a sample size based on the number of rows in your data. After that you can use split to divide the data exactly at 80%. You can use the following code:
smp_size <- floor(0.80 * nrow(mtcars))            # 25 of mtcars' 32 rows
split <- split(mtcars, rep(1:2, each = smp_size)) # group 1 = first 80% of rows, group 2 = the rest
With the following code you can turn the split into train and test sets:
train <- split$`1`
test <- split$`2`
Let's check the number of rows:
> nrow(train)
[1] 25
> nrow(test)
[1] 7
Now the data is split into train and test without losing its order.
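An equivalent sketch with a plain index vector (assuming, as above, that the rows are already in chronological order):
n <- nrow(mtcars)
train_idx <- seq_len(floor(0.80 * n)) # indices of the first 80% of rows
train <- mtcars[train_idx, ]          # first 80%, order preserved
test  <- mtcars[-train_idx, ]         # remaining 20%, order preserved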
I tried to fit a linear model using a leave-one-out cross-validation (LOOCV) split.
I used this DATASET; it has 517 rows and 13 columns (two of them categorical). The dependent variable is "area".
I would like to build the model both with and without the categorical variables, and then calculate the mean of the coefficients. When I remove the categorical variables the code runs fine, but when I keep them I get the following error:
Error in estcoef[i, ] <- coef(model1) : number of items to replace is not a multiple of replacement length
My code is as follows:
wdbc <- read.csv("forestfires.csv") ## load the dataset
wdbc <- wdbc[-(3:4)] ## drop the categorical variables (columns 3 and 4) if building the model without them
#####################################
fitted_value <- rep(0, nrow(wdbc))
estcoef <- matrix(0, nrow = nrow(wdbc), ncol = ncol(wdbc)) # estimated coefficients
# LOOCV split (leave-one-out cross-validation)
for(i in 1:nrow(wdbc)){
  validation <- wdbc[i, ]
  training <- wdbc[-i, ]
  model1 <- lm(area ~ ., data = training)
  fitted_value[i] <- predict(model1, newdata = validation)
  estcoef[i, ] <- coef(model1)
}
meancoef <- colMeans(estcoef) # coefficient means
Could anyone help me with this problem, please?
I am happy to provide any additional information.
UPDATED
I have included the first 10 rows of my data:
X Y month day FFMC DMC DC ISI temp RH wind rain area
1 7 5 mar fri 86.2 26.2 94.3 5.1 8.2 51 6.7 0.0 0
2 7 4 oct tue 90.6 35.4 669.1 6.7 18.0 33 0.9 0.0 0
3 7 4 oct sat 90.6 43.7 686.9 6.7 14.6 33 1.3 0.0 0
4 8 6 mar fri 91.7 33.3 77.5 9.0 8.3 97 4.0 0.2 0
5 8 6 mar sun 89.3 51.3 102.2 9.6 11.4 99 1.8 0.0 0
6 8 6 aug sun 92.3 85.3 488.0 14.7 22.2 29 5.4 0.0 0
7 8 6 aug mon 92.3 88.9 495.6 8.5 24.1 27 3.1 0.0 0
8 8 6 aug mon 91.5 145.4 608.2 10.7 8.0 86 2.2 0.0 0
9 8 6 sep tue 91.0 129.5 692.6 7.0 13.1 63 5.4 0.0 0
10 7 5 sep sat 92.5 88.0 698.6 7.1 22.8 40 4.0 0.0 0
I have also included the last 10 rows, which show that there is only one row with month == "nov" (the last row); that single row causes an error even with the provided answer.
> tail(wdbc,10)
X Y month day FFMC DMC DC ISI temp RH wind rain area
508 2 4 aug fri 91.0 166.9 752.6 7.1 25.9 41 3.6 0.0 0.00
509 1 2 aug fri 91.0 166.9 752.6 7.1 25.9 41 3.6 0.0 0.00
510 5 4 aug fri 91.0 166.9 752.6 7.1 21.1 71 7.6 1.4 2.17
511 6 5 aug fri 91.0 166.9 752.6 7.1 18.2 62 5.4 0.0 0.43
512 8 6 aug sun 81.6 56.7 665.6 1.9 27.8 35 2.7 0.0 0.00
513 4 3 aug sun 81.6 56.7 665.6 1.9 27.8 32 2.7 0.0 6.44
514 2 4 aug sun 81.6 56.7 665.6 1.9 21.9 71 5.8 0.0 54.29
515 7 4 aug sun 81.6 56.7 665.6 1.9 21.2 70 6.7 0.0 11.16
516 1 4 aug sat 94.4 146.0 614.7 11.3 25.6 42 4.0 0.0 0.00
517 6 3 nov tue 79.5 3.0 106.7 1.1 11.8 31 4.5 0.0 0.00
The problem can easily be solved by storing the coefficients in a list instead of a matrix. With factor variables, lm() expands each factor into dummy variables, so the model has more coefficients than the data frame has columns, which is why the assignment into a fixed-width matrix fails; with a list there are no items to replace, and everything is combined afterwards.
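To see the mismatch concretely, here is a small illustration (a sketch using iris, renamed the same way as in the code below):
df <- iris
names(df)[1] <- "area"
length(coef(lm(area ~ ., data = df))) # 6: intercept + 3 numeric terms + 2 Species dummies
ncol(df)                              # only 5 columns, so estcoef[i, ] <- coef(model1) fails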
In the code that follows I use the built-in data set iris, renaming the response to area.
wdbc <- iris
names(wdbc)[1] <- "area"

fitted_value <- rep(0, nrow(wdbc))
estcoef <- vector("list", length = nrow(wdbc))

# LOOCV split (leave-one-out cross-validation)
for(i in 1:nrow(wdbc)){
  validation <- wdbc[i, ]
  training <- wdbc[-i, ]
  model1 <- lm(area ~ ., data = training)
  fitted_value[i] <- predict(model1, newdata = validation)
  estcoef[[i]] <- coef(model1)
}

estcoef <- do.call(rbind, estcoef)
meancoef <- colMeans(estcoef) # coefficient means
Edit.
The OP reports an error:
Error in model.frame.default(Terms, newdata, na.action = na.action, xlev = object$xlevels) :
factor month has new level nov
This is because the data set has only one row with month == "nov", so when the model is fitted with that row left out, the validation set has a value in the month column that is not present in the training data, and predict does not know what to do with it.
The solution is to use tryCatch to trap the error and let the code continue running. The function below repeats the code above, with a call to tryCatch added. It returns the coefficient means and the fitted values. When predict throws an error, the returned fitted value is NaN; change this at will.
fitModelLOOCV <- function(DF){
  fitted_value <- rep(0, nrow(DF))
  estcoef <- vector("list", length = nrow(DF))
  # LOOCV split (leave-one-out cross-validation)
  for(i in 1:nrow(DF)){
    validation <- DF[i, ]
    training <- DF[-i, ]
    model1 <- lm(area ~ ., data = training)
    fitted_value[i] <- tryCatch(predict(model1, newdata = validation),
                                error = function(e) {print(e); NaN})
    estcoef[[i]] <- coef(model1)
  }
  estcoef <- do.call(rbind, estcoef)
  meancoef <- colMeans(estcoef) # coefficient means
  list(meancoef = meancoef, fitted = fitted_value)
}
fitModelLOOCV(wdbc)
An alternative is to remove the problematic value before running the function. (Maybe just one data point is not that important.)
wdbc2 <- wdbc[-which(wdbc$month == "nov"), ]
fitModelLOOCV(wdbc2)
When I try to fit an exponential decay and my x axis has decimal numbers, the fit is never correct. Here's my data:
exp.decay = data.frame(time,counts)
time counts
1 0.4 4458
2 0.6 2446
3 0.8 1327
4 1.0 814
5 1.2 549
6 1.4 401
7 1.6 266
8 1.8 182
9 2.0 140
10 2.2 109
11 2.4 83
12 2.6 78
13 2.8 57
14 3.0 50
15 3.2 31
16 3.4 22
17 3.6 23
18 3.8 20
19 4.0 19
20 4.2 9
21 4.4 7
22 4.6 4
23 4.8 6
24 5.0 4
25 5.2 6
26 5.4 2
27 5.6 7
28 5.8 2
29 6.0 0
30 6.2 3
31 6.4 1
32 6.6 1
33 6.8 2
34 7.0 1
35 7.2 2
36 7.4 1
37 7.6 1
38 7.8 0
39 8.0 0
40 8.2 0
41 8.4 0
42 8.6 1
43 8.8 0
44 9.0 0
45 9.2 0
46 9.4 1
47 9.6 0
48 9.8 0
49 10.0 1
fit.one.exp <- nls(counts ~ A*exp(-k*time),data=exp.decay, start=c(A=max(counts),k=0.1))
plot(exp.decay, col='darkblue',xlab = 'Track Duration (seconds)',ylab = 'Number of Particles', main = 'Exponential Fit')
lines(predict(fit.one.exp), col = 'red', lty=2, lwd=2)
I always get this weird fit. It seems to me that the fit is not recognizing the right x axis, because when I use a different data set with only integers on the x axis (time), the fit works! I don't understand why it behaves differently with different units.
You need one small modification:
lines(predict(fit.one.exp), col = 'red', lty=2, lwd=2)
should be
lines(exp.decay$time, predict(fit.one.exp), col = 'red', lty=2, lwd=2)
This way you make sure to plot against the desired values on your abscissa. Without an explicit x argument, lines() plots the predicted values against their index (1, 2, 3, ...), which only coincides with your time axis when time happens to be consecutive integers.
I tested it like this:
data <- read.csv('exp_fit_r.csv')
A0 <- max(data$count) # starting value for the amplitude
k0 <- 0.1             # starting value for the decay rate
fit <- nls(count ~ A * exp(-k * time), data = data, start = list(A = A0, k = k0))
plot(data)
lines(data$time, predict(fit), col = 'red')
which gives me the following output:
As you can see, the fit describes the actual data very well, it was just a matter of plotting against the correct abscissa values.
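To check the result numerically as well (an optional sketch), you can inspect the fitted parameters:
coef(fit)    # estimated A and k
summary(fit) # standard errors and residual summary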
I have a very large data set, structured as the sample below.
I have been trying to use the na.spline function in order to:
1) identify the "fips" categories with missing Yield values;
2) if fewer than 3 Yield values are NA per fips (here 1-3), have the spline function kick in and fill in the NAs;
3) if 3 or more Yields are NA for a "fips", remove the entire "fips" subset; in this case fips 2 should be removed.
My code so far:
finX <- dataset
finxx <- transform(subset(finX, ave(na.spline(finX$Yield), fips, FUN=sum)<2))
#or
finxx <- transform(subset(finX, ave(is.na(finX$Yield), fips, FUN=sum)<2))
Year fips Max Min Rain Yield
1980 1 24.7 0.0 71 37
1981 1 22.8 0.0 62 40
1982 1 22.6 0.0 47 37
1983 1 24.2 0.0 51 39
1984 1 23.8 0.0 61 47
1985 1 25.1 0.0 67 43
1980 2 24.8 0.0 72 34
1981 2 23.2 0.4 54 **NA**
1982 2 25.3 0.1 83 55
1983 2 23.0 0.0 68 **NA**
1984 2 22.4 0.7 70 **NA**
1985 2 24.6 0.0 47 31
1980 3 25.5 0.0 51 31
1981 3 25.5 0.0 51 31
1982 3 25.5 0.0 51 31
1983 3 25.5 0.0 51 **NA**
1984 3 25.5 0.0 51 31
...
Currently the code above either does not fill in all the NAs in the final product, or produces no result at all.
Any guidance would be very useful, thank you.
Yield needs to be converted from character to numeric or NA. Then use by to divide finX into separate data frames by fips value. For each data frame with fewer than 3 NAs, do the spline interpolation; those with 3 or more are returned as NULL. Finally, combine the list of returned data frames into a single data frame. The code would look like this:
library(zoo)
# convert finX$Yield values from character to either numeric or NA
finX$Yield <- sapply(finX$Yield, function(x) if(x =="**NA**") NA_real_ else as.numeric(x))
# use spline interpolation on fips sets with less than 3 NA's
finxx <- by(finX, finX$fips, function(x) if(sum(is.na(x$Yield)) < 3) transform(x, Yield=na.spline(object=Yield, x=Year)) )
# combine results into a single data frame
finxx <- do.call(rbind, finxx)
Alternatively, after the conversion to numeric values, you could use ave on the Yield column, where the spline interpolation returns values for fips sets with fewer than 3 NAs and all NAs for any other set. All rows with any NAs in the final result are then deleted. The code is as follows:
finxx2 <- transform(finX, Yield=ave(Yield, fips, FUN=function(x) if(sum(is.na(x)) < 3) na.spline(object=x) else NA))
finxx2 <- na.omit(finxx2)
Both versions give the same result for the sample data, but the first version using by lets you work with the full data frame for each fips set rather than with just Yield. Here that allowed Year to be specified as the x values for the spline interpolation, so a data set with a missing year would still give the correct interpolation, whereas the ave version, which interpolates against the row index, would get an incorrect answer. So the by version seems more robust.
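A minimal synthetic check of that claim (made-up numbers, not the OP's data): with a gap in Year, the x-based call fills the NA using calendar spacing, while the index-based call uses row positions, so the filled values differ.
library(zoo)
d <- data.frame(Year = c(2000, 2001, 2003, 2004), # note the missing 2002
                Yield = c(10, NA, 30, 40))
na.spline(object = d$Yield, x = d$Year) # interpolates against Year
na.spline(object = d$Yield)             # interpolates against the row index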
There's also the dplyr version, which is very much like the by version above and gives the same answer as the base R versions. If you're OK with working in dplyr, this is probably the most straightforward and robust approach.
library(dplyr)
finxx3 <- finX %>% group_by(fips) %>%
filter(sum(is.na(Yield)) < 3) %>%
mutate(Yield=na.spline(object=Yield, x=Year))
The first version returns:
Year fips Max Min Rain Yield
1.1 1980 1 24.7 0 71 37
1.2 1981 1 22.8 0 62 40
1.3 1982 1 22.6 0 47 37
1.4 1983 1 24.2 0 51 39
1.5 1984 1 23.8 0 61 47
1.6 1985 1 25.1 0 67 43
3.13 1980 3 25.5 0 51 31
3.14 1981 3 25.5 0 51 31
3.15 1982 3 25.5 0 51 31
3.16 1983 3 25.5 0 51 31
3.17 1984 3 25.5 0 51 31
Hi,
I have a 10-year data set of dust concentration at 5-minute resolution, and, separately, a 15-year data set of synoptic classification at daily resolution.
How can I combine these two datasets, given that they are not the same length or resolution?
Here is a sample of the data:
> head(synoptic)
date synoptic
1 01/01/1995 8
2 02/01/1995 7
3 03/01/1995 7
4 04/01/1995 20
5 05/01/1995 1
6 06/01/1995 1
> head(beit.shemesh)
X........................ StWd SHT PRE GSR RH Temp WD WS PM10 CO O3
1 NA 64 19.8 0 -2.9 37 15.2 61 2.2 241 0.9 40.6
2 NA 37 20.1 0 1.1 38 15.2 344 2.1 241 0.9 40.3
3 NA 36 20.2 0 0.7 39 15.1 32 1.9 241 0.9 39.4
4 NA 52 20.1 0 0.9 40 14.9 20 2.1 241 0.9 38.7
5 NA 42 19.0 0 0.9 40 14.6 11 2.0 241 0.9 38.7
6 NA 75 19.9 0 0.2 40 14.5 341 1.3 241 0.9 39.1
No2 Nox No SO2 date
1 1.4 2.9 1.5 1.6 31/12/2000 24:00
2 1.7 3.1 1.4 0.9 01/01/2001 00:05
3 2.1 3.5 1.4 1.2 01/01/2001 00:10
4 2.7 4.2 1.5 1.3 01/01/2001 00:15
5 2.3 3.8 1.5 1.4 01/01/2001 00:20
6 2.8 4.3 1.5 1.3 01/01/2001 00:25
Any ideas?
Make an extra column holding the calculated date, and then merge. To do this you have to generate a variable bearing the same name in each data frame, hence you first need some renaming. Also make sure that the merge column has the same type in both data frames:
beit.shemesh$datetime <- beit.shemesh$date
beit.shemesh$date <- as.Date(beit.shemesh$datetime, format = "%d/%m/%Y")
synoptic$date <- as.Date(synoptic$date, format = "%d/%m/%Y")
merge(synoptic, beit.shemesh, by = "date", all.y = TRUE)
Using all.y=TRUE keeps the beit.shemesh dataset intact. If you also want empty rows for all non-matching rows in synoptic, you could use all=TRUE instead.
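If instead you want one row per day, a sketch going the other direction (the daily-mean aggregation is an assumption about what you need, not part of the question): average the 5-minute readings per day and then merge.
# daily means of, e.g., PM10 (date must already be of class Date, as above)
daily <- aggregate(PM10 ~ date, data = beit.shemesh, FUN = mean)
merge(synoptic, daily, by = "date")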