Saving outputs as dataframe in loop - r

Here I am trying to test my data for the presence of autocorrelation (ACF) and to select a trend-test method for daily climate-station data arranged in multiple columns. The code is intended to:
find the monthly average,
test the ACF,
select a trend-test method,
save the result of the trend test for each month to Excel.
So far I have managed to write code that does this for each month, but I ran into a problem saving the results as a data frame and exporting them to Excel (Error in Result[k, ] <- Outcome : incorrect number of subscripts on matrix). Can anyone help me? I have attached a sample of the data and the code I have written.
Data:
structure(list(Date = structure(c(5479, 5480, 5481, 5482, 5483,
5484, 5485, 5486, 5487, 5488, 5489, 5490, 5491, 5492, 5493, 5494,
5495, 5496, 5497, 5498, 5499, 5500, 5501, 5502, 5503, 5504, 5505,
5506, 5507, 5508, 5509, 5510, 5511, 5512, 5513, 5514), class = "Date"),
Adaba = c(6.7, 7.6, 4.9, 6.2, 7.8, 3.1, 4.5, 4.9, 4.2, 5.8,
6.7, 6.1, 5.7, 5.8, 6.4, 5.3, 5.1, 7.6, 7.1, 5.8, 6.7, 6.5,
8.9, 7.6, 7.6, 11.3, 9.5, 11.3, 7.8, 7.6, 6.7, 7.1, 7.6,
7.5, 6.7, 6.5), Bedessa = c(15.1, 14.1, 10.8, 9.9, 10.7,
10.7, 12.4, 13.5, 13, 11.4, 12.9, 13, 13.6, 13, 10.8, 11.9,
13, 10.8, 9.7, 10.8, 9.2, 8.7, 9.2, 10.9, 9.7, 8.8, 12, 10.8,
11.4, 10.3, 10.8, 14.1, 13.5, 13, 14.1, 15.5), Beletu = c(15.3,
14.9, 15.1, 15.7, 15.5, 15.3, 14.8, 15.3, 15.5, 15.2, 14.7,
15.8, 15.9, 14.6, 13.7, 15.2, 15.3, 15.7, 16.2, 15, 15.4,
12.5, 12.6, 12.9, 13.4, 13.2, 11.5, 11.6, 11.7, 12.5, 12.6,
12.6, 12.7, 12, 10.7, 11.8)), row.names = c(NA, 36L), class = "data.frame")
code:
Wabi <- read.csv("Tmin_17.csv", TRUE, ",")
# confirm the file was read in as a data frame
class(Wabi)
# load the packages (lubridate is used to handle the dates)
library(xlsx)
library(lubridate)
library(dplyr)
library(modifiedmk)
# make new month and year columns from the original Date column
Wabi$Date <- as.Date(Wabi$Date, format("%m/%d/%Y"))
# extract month and year with lubridate
Wabi$month <- lubridate::month(Wabi$Date)
Wabi$year <- lubridate::year(Wabi$Date)
# view the change in the original data
head(Wabi)
N <- 34
Result <- matrix(nrow = 100, ncol = 2)
# this loop averages the monthly values and applies the trend tests
for (k in 1:192) {
  for (j in 2:17) {
    colnum <- colnames(Wabi[j])
    Wabi_mon <- Wabi %>% group_by(year, month) %>% summarise_at(.vars = colnum, .funs = mean)
    for (i in 1:12) {
      test <- acf((Wabi_mon %>% group_by(month) %>% filter(month == i))[3], lag.max = 1)
      Trendtest1 <- as.data.frame(mmky(as.data.frame((Wabi_mon %>% group_by(month) %>% filter(month == i))[3])[,]))
      Trendtest2 <- as.data.frame(mkttest(as.data.frame((Wabi_mon %>% group_by(month) %>% filter(month == i))[3])[,]))
      if (abs(test$acf[2]) > abs(((-1 - 1.96*(N - 1)^0.5))/N - 1))
        Outcome <- Trendtest1
      else
        Outcome <- Trendtest2
      Result[k, ] <- Outcome
    }
  }
}
Result <- data.frame(Result)
class(Result)
write.xlsx(Result,file = "tmin_trend.xlsx",sheetName = "Sheet1")
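A minimal sketch of one way the result-collection step could be written, assuming mmky() and mkttest() each return a named numeric vector (as in the modifiedmk package) and that each station/month combination should become one row. The error comes from assigning a whole data frame to a single matrix row; collecting the outcomes in a list and binding them at the end avoids that. The 1.96/sqrt(n) bound below is the standard white-noise ACF significance limit and stands in for the original expression; the length guard and output file name are only illustrative.
results <- list()
for (colnum in colnames(Wabi)[2:4]) {                 # station columns in the sample data
  Wabi_mon <- Wabi %>%
    group_by(year, month) %>%
    summarise_at(.vars = colnum, .funs = mean)
  for (i in 1:12) {
    x <- (Wabi_mon %>% filter(month == i))[[3]]       # monthly means for month i
    if (length(x) < 3) next                           # skip months absent from the short sample
    test <- acf(x, lag.max = 1, plot = FALSE)
    # use the modified test when the lag-1 autocorrelation is significant
    Outcome <- if (abs(test$acf[2]) > 1.96/sqrt(length(x))) mmky(x) else mkttest(x)
    results[[length(results) + 1]] <- data.frame(station = colnum, month = i, t(Outcome))
  }
}
Result <- dplyr::bind_rows(results)                   # one row per station and month
write.xlsx(Result, file = "tmin_trend.xlsx", sheetName = "Sheet1")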

Related

Imputing based on percentage of NA values

I want to impute temperature values from 6 different weather stations. The data are measured every 30 minutes. I want to impute the values only when there are more than 20 % NA values in a day and month. So I am grouping the values by day/month, calculating the share of NAs per day/month, and then I want to filter to keep the days/months with less than 20 % NA in order to impute on the rest. What is the best way to do that? I am having trouble coding the filter, because I am not sure it filters the way I intend. Also, what is the best method to impute the missing values later on? I tried to familiarize myself with the imputeTS package, but I am not sure which method I should be using: na_seadec, na_seasplit, or something else?
My data (a sample of 20 rows, created with slice_sample(n = 20) from the dplyr package):
df <- structure(list(td = structure(c(1591601400, 1586611800, 1574420400,
1583326800, 1568898000, 1561969800, 1577010600, 1598238000, 1593968400,
1567800000, 1590967800, 1584981000, 1563597000, 1589117400, 1599796800,
1563467400, 1569819600, 1571014800, 1573320600, 1577154600), tzone = "UTC", class = c("POSIXct",
"POSIXt")), Temp_Dede = c(13.7, NA, NA, 6.4, 14.9, 19.1, 1.3,
14.2, 21.1, 15.1, 10, 5, 14.1, 24.2, 8.8, 25.3, 14.9, 19.7, NA,
6.2), Temp_188 = c(13.1, 12.6, 8.9, 6.3, 14.5, 18.8, 1.4, 14.2,
20.9, 13.1, 10.4, 5.1, 12.2, 24.2, 9.4, 25.9, 14.8, 18.9, NA,
6.1), Temp_275 = c(13.9, 12.6, 8.8, 6, 14.3, 18.9, 1.4, 13.5,
20.4, 12.2, 11.1, 4.6, 12.5, 23.3, 9.9, 24, 14.8, 19.2, 6.9,
5.9), Temp_807 = c(13.9, 13.1, 8.8, 6.2, 14.3, 19.1, 1.4, 14.7,
20.5, 13.3, 10.6, 4.9, 12.8, 23.1, 10.3, 24.8, 14.7, 19.1, 6.9,
6.1), Temp_1189 = c(13.7, 12.3, 8.8, 5.6, 14.1, 18.4, 1.4, 13.3,
19.9, 13.3, 10.7, 4.4, 13.6, 24, 9.8, 24.9, 14.7, 19.1, 6.9,
5.7), Temp_1599 = c(13.2, 12.7, 8.8, 5.1, 14.3, 18.3, 1.8, 14.2,
20.3, 13.2, 10.6, 4.4, 12.1, 22.9, 9.8, 25.8, 14.8, 19.2, 6.9,
5.9)), row.names = c(NA, -20L), class = "data.frame")
The code I've been using so far; I am only grouping by day in the first step. Some months of the data have several complete days missing, so I will need to filter months with > 20 % NA after that.
df %>% group_by(Datum) %>%
filter_at(vars(Temp_Dede, Temp_188, Temp_275, Temp_807, Temp_1189, Temp_1599),~mean(is.na(.) <0.2))
I am not sure what to do next and I am stuck.
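A minimal sketch of one way the daily 20 % filter could be written, assuming a day should be kept only if every station column has less than 20 % NA on that day, and that a recent dplyr (with if_all()) is available; the date column is derived from td. Note that in the snippet above the < 0.2 comparison ended up inside is.na(), which is probably not what was intended.
library(dplyr)
library(lubridate)

df_kept <- df %>%
  mutate(date = as_date(td)) %>%                                      # calendar day from the timestamp
  group_by(date) %>%
  filter(if_all(starts_with("Temp_"), ~ mean(is.na(.x)) < 0.2)) %>%   # keep days below the 20 % threshold
  ungroup()
The same pattern grouped by floor_date(date, "month") would give the monthly version; which imputeTS method to use on the retained series afterwards is a separate judgement call.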

Kolmogorov-Smirnov test in R - For-loop

I have a problem with comparing two sets of curves using the Kolmogorov–Smirnov test.
What I would like the program to do, is to compare each variation of Curve 1 with each variation of Curve 2. To accomplish that, I have tried to build a for-loop that iterates through Curve 1, and within that loop another loop that iterates through Curve 2.
Unfortunately, when executing the code, I get an error message about
"not enough x data".
When I try running the test by comparing one variation of each curve manually, it works, so I think the problem is the combination of the two loops and the KS-test.
If anyone has experienced a similar error and was able to solve the issue, I would highly appreciate any advice on how to fix it. Thank you!
Example data.frames:
Kurve1 <- structure(list(Punkte = 1:21,
Trial.1 = c(105.5, 85.3, 63.1, 54.9, 42, 34.1, 30.7,
24.2, 20.1, 15.7, 14, 11, 9.3, 7.2, 6.6,
5.3, 4.2, 3.3, 2.6, 1.8, 0.9),
Trial.2 = c(103.8, 85.2, 64.3, 54.1, 41.8, 35.9, 29,
23.7, 20.2, 15.9, 13.5, 11, 9.3, 7.3, 6.4,
5.5, 4.3, 3.4, 2.5, 1.9, 0.9),
Trial.3 = c(104.8, 87.2, 64.9, 52.8, 40.8, 35.6, 29.1,
24.5, 20.4, 16.2, 13.7, 11.2, 9.2, 7.5,
6.4, 5.5, 4.2, 3.5, 2.5, 1.8, 0.9),
Trial.4 = c(106.9, 83.9, 67.1, 55.1, 44.1, 34.1, 29.3,
22.9, 19.4, 16.7, 13.6, 10.8, 9.4, 7.4,
6.1, 5.6, 4.4, 3.5, 2.4, 1.9, 0.9),
Trial.5 = c(104.8, 84.3, 68.7, 54.8, 45.3, 35.2, 28.9,
23.1, 20.1, 16.9, 13.3, 11, 9.6, 7.1, 6.3,
5.4, 4.5, 3.4, 2.3, 2, 0.9)),
class = "data.frame", row.names = c(NA, -21L))
Kurve2 <- structure(list(Punkte = 1:21,
Trial.1 = c(103.5, 81.2, 66.2, 54.5, 45.1, 39.1, 30.9,
27, 21.9, 19.3, 16.6, 14.9, 12.9, 11, 10.1,
9.2, 8, 7.1, 6.3, 6.2, 5),
Trial.2 = c(104, 81, 66.9, 55.2, 46, 38.7, 31.2, 27.3,
22.3, 20, 17.2, 15.2, 12.9, 11.1, 10.2,
9.1, 8, 7.1, 6.4, 5.9, 5),
Trial.3 = c(103.9, 81.9, 67.2, 53.8, 45.4, 38.5, 31.5,
26.8, 22.2, 19.8, 17.4, 15.1, 13, 10.9,
10.1, 9.2, 8.1, 7.1, 6.4, 6, 4.9),
Trial.4 = c(104.2, 84.1, 68.7, 55.4, 45.1, 36.3, 32,
26.9, 22.8, 19.8, 16.8, 14.8, 13.2, 10.9,
10.3, 9.1, 8.2, 7.2, 6.3, 6.1, 5),
Trial.5 = c(103.8, 83.2, 69.2, 55.7, 44.8, 36.4, 31.4,
26.7, 22.1, 18.9, 16.9, 14.4, 13, 11.1,
10.2, 9, 7.9, 7, 6.3, 6.1, 5.1)),
class = "data.frame", row.names = c(NA, -21L))
The code I used for the loop:
for(i in 1:ncol(Kurve1)){
  for(j in 1:ncol(Kurve2)){
    ks.test(Kurve1$Trial.[i], Kurve2$Trial.[j], alternative = "greater")
  }
}
This will work:
for(i in 1:(ncol(Kurve1) - 2)){
  for(j in (i + 1):(ncol(Kurve2) - 1)){
    print(paste0("Trial.", i, " - Trial.", j))
    ks_result <- ks.test(Kurve1[, paste0("Trial.", i)],
                         Kurve2[, paste0("Trial.", j)],
                         alternative = "greater")
    print(ks_result)
  }
}
Explanation:
Since it doesn't make sense to run the KS test on the same column, and it also doesn't make sense to run both Trial.1 ~ Trial.2 and Trial.2 ~ Trial.1, etc., the outer for loop has to run from 1 to the second-to-last Trial.* index ((ncol(Kurve1) - 2)), and the inner for loop has to run from the index after the outer one ((i + 1)) to the last Trial.* index ((ncol(Kurve2) - 1)).
You cannot build column names like Trial.[i]; you have to use paste0 for that. And because the Kurve1$paste0("Trial.", i) notation does not work, you have to use the extraction operator [ to get the column you need (Kurve1[, paste0("Trial.", i)]).
Because ks.test runs silently inside a (nested) for loop, I have added a print so the results are visible, plus the line print(paste0("Trial.", i, " - Trial.", j)) to tag each result with the columns it belongs to.

Applying melt function to multiple .txt files

I have multiple .txt files, each representing a parameter, in the following format:
Parameter1 = structure(list(Year = 1969:1974, Jan = c(16.6, 15.6, 15.8, 16.9,
16.2, 15.4), Feb = c(17, 15.2, 16.6, 14.8, 12.9, 17.9), Mar = c(14.2,
13.3, 16.9, 14.9, 15.5, 13.4), Apr = c(11.6, 10.7, 10.7, 11.6,
10.3, 9.7), May = c(9.9, 9.2, 9.7, 9.6, 8.2, 8.8), Jun = c(7.6,
7.2, 7.1, 7.2, 6, 6.9), Jul = c(7, 6.7, 6.7, 6.9, 6.9, 5.2),
Aug = c(7, 7.2, 7.2, 6.1, 6.8, 6.5), Sep = c(8.4, 7.6, 8.5,
7.3, 6.6, 6.8), Oct = c(10.4, 8.5, 8.5, 9.1, 8.3, 7.7), Nov = c(14,
12.2, 12.9, 12.2, 10.8, 10.2), Dec = c(15.5, 16.7, 15.9,
15.5, 13.2, 15.4), Annual = c(11.6, 10.8, 11.4, 11, 10.1,
10.3), winter = c(16.8, 15.4, 16.2, 15.8, 14.5, 16.6), premonsoon = c(11.9,
11.1, 12.4, 12, 11.3, 10.6), monsoon = c(7.5, 7.2, 7.3, 6.8,
6.6, 6.3), postmonsoon = c(13.3, 12.5, 12.5, 12.2, 10.7,
11.1)), row.names = c(NA, 6L), class = "data.frame")
I want to melt those .txt files and combine them into a single long-format file.
I was able to do the reshaping for a single file like this:
library(reshape2)   # provides melt()
A <- read.delim("Parameter1.txt", sep = "\t")
Transpose <- t(A)
colnames(Transpose) = Transpose[1, ]   # the first row will be the header
Transpose = Transpose[-1, ]            # removing the first row
arranged_data <- melt(Transpose)
melt(A)
head(arranged_data)
Arranged_data <- data.frame(cbind(arranged_data$X2,as.character(arranged_data$X1),arranged_data$value))
colnames(Arranged_data) <- c("Year","Month","Parameter1")
head(Arranged_data)
#Removing the seasonal and annual data
Final <- subset(Arranged_data, !Month %in% c("postmonsoon","monsoon","premonsoon","winter","Annual"))
numMonth<-function(x)c(JAN=1,FEB=2,MAR=3,APR=4,MAY=5,JUN=6,JUL=7,AUG=8,SEP=9,OCT=10,NOV=11,DEC=12)
Final$Month <- numMonth(Final$Month)
write.csv(Final,"arranged_data_TAMILNADU.csv",row.names = F)
Now, how do I apply this to multiple .txt files and get the output into a single file?
I would suggest writing the melting/reshaping into a function and calling it over a list of inputs. Also, I'm using tidyr::gather() instead of reshape2::melt(). I hope this helps!
library(tidyverse)
A = structure(list(Year = 1969:1974,
Jan = c(16.6, 15.6, 15.8, 16.9, 16.2, 15.4),
Feb = c(17, 15.2, 16.6, 14.8, 12.9, 17.9),
Mar = c(14.2, 13.3, 16.9, 14.9, 15.5, 13.4),
Apr = c(11.6, 10.7, 10.7, 11.6, 10.3, 9.7),
May = c(9.9, 9.2, 9.7, 9.6, 8.2, 8.8),
Jun = c(7.6, 7.2, 7.1, 7.2, 6, 6.9),
Jul = c(7, 6.7, 6.7, 6.9, 6.9, 5.2),
Aug = c(7, 7.2, 7.2, 6.1, 6.8, 6.5),
Sep = c(8.4, 7.6, 8.5, 7.3, 6.6, 6.8),
Oct = c(10.4, 8.5, 8.5, 9.1, 8.3, 7.7),
Nov = c(14, 12.2, 12.9, 12.2, 10.8, 10.2),
Dec = c(15.5, 16.7, 15.9, 15.5, 13.2, 15.4),
Annual = c(11.6, 10.8, 11.4, 11, 10.1, 10.3),
winter = c(16.8, 15.4, 16.2, 15.8, 14.5, 16.6),
premonsoon = c(11.9, 11.1, 12.4, 12, 11.3, 10.6),
monsoon = c(7.5, 7.2, 7.3, 6.8, 6.6, 6.3),
postmonsoon = c(13.3, 12.5, 12.5, 12.2, 10.7, 11.1)),
row.names = c(NA, 6L),
class = "data.frame") 11.1)), row.names = c(NA, 6L), class = "data.frame")
B <- A
mylist <- list(A,B)
myfunc <- function(x){
  x %>%
    gather(key = "Month",
           value = "Value",
           c(format(ISOdatetime(2000, 1:12, 1, 0, 0, 0), "%b"))) %>%  # the twelve month-name columns
    arrange(Year) %>%                                                 # sort dataframe by Year
    select(Year, Month, Value, winter, premonsoon, monsoon, postmonsoon)
}
mylist_melted <- lapply(mylist, myfunc)
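To go from the in-memory list to the actual files, the same function can be mapped over every .txt file in a folder and the results bound into one table before writing it out. A minimal sketch, assuming tab-delimited files in the working directory and that the file name (minus the extension) identifies the parameter; the output file name is illustrative.
files <- list.files(pattern = "\\.txt$")

combined <- purrr::map_dfr(files, function(f) {
  read.delim(f, sep = "\t") %>%
    myfunc() %>%
    mutate(Parameter = tools::file_path_sans_ext(basename(f)))   # tag rows with their source file
})

write.csv(combined, "arranged_data_all_parameters.csv", row.names = FALSE)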

Resolving minFactor error when using nls in R

I am running nls models in R on several different datasets, using the self-starting Weibull Growth Curve function, e.g.
MOD <- nls(Response ~ SSweibull(Time, Asym, Drop, lrc, pwr), data = DATA)
With data like this, it works as expected:
GOOD.DATA <- data.frame("Time" = c(1:150), "Response" = c(31.2, 20.0, 44.3, 35.2,
31.4, 27.5, 24.1, 25.9, 23.3, 21.2, 21.3, 19.8, 18.4, 17.3, 16.3, 16.3,
16.6, 15.9, 15.9, 15.8, 15.1, 15.6, 15.1, 14.5, 14.2, 14.2, 13.7, 14.1,
13.7, 13.4, 13.0, 12.6, 12.3, 12.0, 11.7, 11.4, 11.1, 11.0, 10.8, 10.6,
10.4, 10.1, 11.6, 12.0, 11.9, 11.7, 11.5, 11.2, 11.5, 11.3, 11.1, 10.9,
10.9, 11.4, 11.2, 11.1, 10.9, 10.9, 10.7, 10.7, 10.5, 10.4, 10.4, 10.3,
10.1, 10.0, 9.9, 9.7, 9.6, 9.7, 9.6, 9.5, 9.5, 9.4, 9.3, 9.2, 9.1, 9.0,
8.9, 9.0, 8.9, 8.8, 8.8, 8.7, 8.6, 8.5, 8.4, 8.3, 8.3, 8.2, 8.1, 8.0,
8.0, 8.0, 7.9, 7.9, 7.8, 7.7, 7.6, 7.6, 7.6, 7.6, 7.5, 7.5, 7.5, 7.5,
7.4, 7.4, 7.3, 7.2, 7.2, 7.1, 7.1, 7.0, 7.0, 6.9, 6.9, 6.8, 6.8, 6.7,
6.7, 6.6, 6.6, 6.5, 6.5, 6.4, 6.4, 6.4, 6.3, 6.3, 6.2, 6.2, 6.2, 6.1,
6.1, 6.1, 6.0, 6.0, 5.9, 5.9, 5.9, 5.9, 5.8, 5.8, 5.8, 5.8, 5.8, 5.8,
5.8, 5.7))
But with this data set:
BAD.DATA <- data.frame("Time" = c(1:150), "Response" = c(89.8, 67.0,
51.4, 41.2, 39.4, 38.5, 34.3, 30.9, 29.9, 34.8, 32.5, 30.1, 28.5, 27.0,
26.2, 24.7, 23.8, 23.6, 22.6, 22.0, 21.3, 20.7, 20.1, 19.6, 19.0, 18.4,
17.9, 17.5, 17.1, 23.1, 22.4, 21.9, 23.8, 23.2, 22.6, 22.0, 21.6, 21.1,
20.6, 20.1, 19.7, 19.3, 19.0, 19.2, 18.8, 18.5, 18.3, 19.5, 19.1, 18.7,
18.5, 18.3, 18.0, 17.7, 17.5, 17.3, 17.0, 16.7, 16.7, 16.9, 16.6, 16.4,
16.1, 15.9, 15.8, 15.6, 15.4, 15.2, 15.0, 14.8, 14.7, 14.5, 14.4, 14.2,
14.0, 13.9, 13.7, 13.6, 15.4, 15.2, 15.1, 15.0, 14.9, 14.7, 14.6, 14.5,
14.4, 14.3, 14.4, 14.2, 14.1, 14.0, 13.8, 13.7, 13.6, 13.5, 13.4, 13.2,
13.3, 13.2, 13.1, 13.0, 12.9, 12.8, 12.7, 12.6, 12.5, 12.5, 12.4, 12.3,
12.2, 12.1, 12.1, 11.9, 12.8, 12.7, 12.6, 12.5, 12.4, 14.2, 14.1, 14.0,
14.1, 14.0, 13.9, 13.8, 13.7, 13.7, 13.6, 13.5, 13.4, 13.3, 13.3, 13.2,
13.1, 13.0, 12.9, 12.9, 12.8, 12.7, 12.6, 12.9, 12.8, 12.7, 12.6, 12.5,
12.5, 12.4, 12.3, 12.2))
I get the error:
Error in nls(y ~ cbind(1, -exp(-exp(lrc) * x^pwr)), data = xy, algorithm = "plinear",
: step factor 0.000488281 reduced below 'minFactor' of 0.000976562
By including the control argument I am able to change the minFactor for GOOD.DATA:
MOD <- nls(Response ~ SSweibull(Time, Asym, Drop, lrc, pwr), data = GOOD.DATA,
control = nls.control(minFactor = 1/4096))
But that model was running without errors anyway. With BAD.DATA and several other datasets, including the control argument has no effect and I just get the same error message.
Questions
How can I change the minFactor for the BAD.DATA?
What's causing the error? (i.e. what is it about the data set that triggers the error?)
Will changing the minFactor resolve this error, or is this one of R's obscure error messages and it actually indicates a different issue?
It seems the control argument does not take effect in your case: the code breaks inside getInitial() while self-starting, i.e. before your control parameters are ever used. One option would be to specify some starting parameters yourself instead of relying on the naive self-start. With nls it is often the initial parameters that make or break the fit; I'm not entirely sure for the specific Weibull case, but it should behave the same.
To confirm that execution never reaches your control settings, try nls.control(printEval = TRUE) and note that nothing is printed.
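A minimal sketch of bypassing the self-starter by giving nls explicit starting values. The formula written out below is the model SSweibull documents (Asym - Drop * exp(-exp(lrc) * x^pwr)); the starting values are rough eyeball guesses for BAD.DATA rather than fitted estimates, so they may still need tuning and convergence is not guaranteed.
MOD <- nls(Response ~ Asym - Drop * exp(-exp(lrc) * Time^pwr),
           data = BAD.DATA,
           start = list(Asym = 12, Drop = -80, lrc = -1, pwr = 0.5),   # eyeballed from the data
           control = nls.control(minFactor = 1/4096, maxiter = 200))   # now actually used, since getInitial() is skipped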

left_join does not merge all values

I'm merging two data.frames, dat1 and dat2, by temp, and the merge is not bringing over all the values from dat2. Why are values from dat2 not merging correctly?
Sample data
dat1 <- data.frame(temp = seq(0, 33.2, 0.1))
dat2 <- structure(list(temp = c(6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7,
7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8, 8.1, 8.2, 8.3,
8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9, 9.1, 9.2, 9.3, 9.4, 9.5, 9.6,
9.7, 9.8, 9.9, 10, 10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7,
10.8, 10.9, 11, 11.1, 11.2, 11.3, 11.4, 11.5, 11.6, 11.7, 11.8,
11.9, 12, 12.1, 12.2, 12.3, 12.4, 12.5, 12.6, 12.7, 12.8, 12.9,
13, 13.1, 13.2), pprox = c(193.53, 626.8, 1055.04, 1478.24,
1896.41, 2309.55, 2717.64, 3120.69, 3518.7, 3911.66, 4299.58,
4682.45, 5060.26, 5433.03, 5800.74, 6163.39, 6520.99, 6873.53,
7221.01, 7563.43, 7900.78, 8233.07, 8560.3, 8882.46, 9199.56,
9511.59, 9818.55, 10120.44, 10417.27, 10709.03, 10995.71, 11277.33,
11553.88, 11825.36, 12091.78, 12353.13, 12609.41, 12860.63, 13106.78,
13347.87, 13583.89, 13814.86, 14040.76, 14261.61, 14477.41, 14688.14,
14893.83, 15094.47, 15290.05, 15480.59, 15666.09, 15846.55, 16021.96,
16192.34, 16357.68, 16517.98, 16673.26, 16823.51, 16968.73, 17108.93,
17244.1, 17374.25, 17499.38, 17619.5, 17734.6, 17844.68, 17949.76,
18049.82, 18144.87, 18234.91)), row.names = c(NA, 70L), class = "data.frame")
Merge
dat <- left_join(dat1, dat2, by = "temp")
Output
dat[65:70, ]
temp approx
65 6.4 626.80
66 6.5 1055.04
67 6.6 NA
68 6.7 1896.41
69 6.8 NA
70 6.9 2717.64
I converted the temp columns in both data frames to a factor, followed by left joining them together. It works!
dat1$temp <- as.factor(dat1$temp)
dat2$temp <- as.factor(dat2$temp)
dat <- left_join(dat1, dat2, by = "temp")
Hmm, interestingly identical(dat2$temp[4], 6.6) returns TRUE but identical(dat1$temp[67], 6.6) returns FALSE.
Floating-point comparison issues are a well-known problem; have a look at "Why are these numbers not equal?" or "floating point issue in R?", among many other similar posts.
If you set dat1 <- data.frame(temp = round(seq(0, 33.2, 0.1), 2)) it should fix this. Possibly also check out ?all.equal, since all.equal(dat1$temp[67], 6.6) is TRUE.
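A short sketch of the rounding fix in context, assuming the goal is simply to make dat1's sequence values equal the doubles that dat2's literals parse to before joining:
library(dplyr)

dat1 <- data.frame(temp = round(seq(0, 33.2, 0.1), 2))   # snap the sequence onto clean one-decimal values
dat <- left_join(dat1, dat2, by = "temp")
dat[65:70, ]                                              # the previously NA rows should now be filled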
