I am new to the R language. For my assignment, I am trying to generate dummy variables for the levels of several categorical variables (3 in total). However, I ran into a problem with each approach I tried:
Method 1: following https://stats.idre.ucla.edu/r/modules/coding-for-categorical-variables-in-regression-models/
The code:
> housing_prices2$Fuel.Type.f <- factor(housing_prices2$Fuel.Type)
> is.factor(housing_prices2$Fuel.Type.f)
[1] TRUE
> housing_prices2$Fuel.Type.f[1:10]
[1] Electric Gas Gas Gas Gas Gas Oil
[8] Oil Electric Gas
Levels: Electric Gas None Oil Solar Unknown/Other Wood
This works well. However, I got a problem on the next line:
> summary(lm(write ~ Fuel.Type.f, data = housing_prices2))
Error in model.frame.default(formula = write ~ Fuel.Type.f, data = housing_prices2,: object is not a matrix
I have no idea about this error and it doesn't make sense to me, so I decided to try another method.
Method 2: following "Convert categorical variables to numeric in R"
For the variable Fuel.Type, it works well:
> Fuel.Type <- as.factor(c("Electric", "Gas", "None", "Oil", "Solar", "Unknown/Other",
+ "Wood"))
> Fuel.Type
[1] Electric Gas None Oil Solar
[6] Unknown/Other Wood
Levels: Electric Gas None Oil Solar Unknown/Other Wood
> unclass(Fuel.Type)
[1] 1 2 3 4 5 6 7
attr(,"levels")
[1] "Electric" "Gas" "None" "Oil"
[5] "Solar" "Unknown/Other" "Wood"
but when I try to generate dummies for the other variables, I get this error:
> housing_prices2$Heat.Type.f[1:10]
NULL
Warning message:
Unknown or uninitialised column: 'Heat.Type.f'.
I'm clueless about what's going on with this error either...
Any suggestions are appreciated!
BTW, here is my sample data:
$ Fuel.Type : chr "Electric" "Gas" "Gas" "Gas"
$ Heat.Type : chr "Electric" "Hot Water" "Hot Water" "Hot Air"
$ Sewer.Type : chr "Private" "Private" "Public" "Private"
I figured out my problem last night.
The problem is that I messed up the data file: I had read the data into a new data frame named
hp2 <- read_excel("Desktop/hw/424/hw1/housing_prices2.xlsx")
In addition, I messed up the Y variable as well; see
summary(lm(write ~ Fuel.Type.f, data = housing_prices2))
My Y variable is actually not called write.
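For reference, a minimal sketch of what the corrected call could look like (the response column name Price is only assumed here for illustration; it is not shown above):
# Hypothetical corrected version: use the data frame that was actually read in
# (hp2) and the real response column (assumed here to be Price)
hp2$Fuel.Type.f <- factor(hp2$Fuel.Type)
summary(lm(Price ~ Fuel.Type.f, data = hp2))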
I have tried to resolve this problem all day, but without any improvement.
I am trying to replace the following abbreviations with the corresponding desired words in my dataset:
Abbreviations: USA, H2O, Type 3, T3, bp
Desired words: United States of America, Water, Type 3 Disease, Type 3 Disease, blood pressure
The input data is for example
[1] I have type 3, its considered the highest severe stage of the disease.
[2] Drinking more H2O will make your skin glow.
[3] Do I have T2 or T3? Please someone help.
[4] We don't have this on the USA but I've heard that will be available in the next 3 years.
[5] Having a high bp means that I will have to look after my diet?
The desired output is
[1] i have type 3 disease, its considered the highest severe stage
of the disease.
[2] drinking more water will make your skin glow.
[3] do I have type 3 disease? please someone help.
[4] we don't have this in the united states of america but i've heard that will be available in the next 3 years.
[5] having a high blood pressure means that I will have to look after my diet?
I have tried the following code but without success:
data = read.csv("C:xxxxxxx", header = TRUE)
lowercase= tolower(data$MESSAGE)
dict=list("\\busa\\b"= "united states of america", "\\bh2o\\b"=
"water", "\\btype 3\\b|\\bt3\\"= "type 3 disease", "\\bbp\\b"=
"blood pressure")
for(i in 1:length(dict1)){
lowercasea= gsub(paste0("\\b", names(dict)[i], "\\b"),
dict[[i]], lowercase)}
I know that I am definitely doing something wrong. Could anyone guide me on this? Thank you in advance.
If you need to replace only whole words (e.g. bp in "Some bp." and not in "bpcatalogue"), you will have to build a regular expression out of the abbreviations using word boundaries, and, since you have multiword abbreviations, also sort them by length in descending order (otherwise, e.g., a shorter alternative could trigger a replacement before "Type 3").
Example code:
abbreviations <- c("USA", "H2O", "Type 3", "T3", "bp")
desired_words <- c("United States of America", "Water", "Type 3 Disease", "Type 3 Disease", "blood pressure")
df <- data.frame(abbreviations, desired_words, stringsAsFactors = FALSE)
x <- 'Abbreviations: USA, H2O, Type 3, T3, bp'
sort.by.length.desc <- function (v) v[order( -nchar(v)) ]
library(stringr)
str_replace_all(x,
  paste0("\\b(", paste(sort.by.length.desc(abbreviations), collapse = "|"), ")\\b"),
  function(z) df$desired_words[df$abbreviations == z][[1]][1]
)
The paste0("\\b(", paste(sort.by.length.desc(abbreviations), collapse = "|"), ")\\b") code creates a regex like \b(Type 3|USA|H2O|T3|bp)\b, which matches Type 3, USA, etc. as whole words only, since \b is a word boundary. If a match is found, stringr::str_replace_all replaces it with the corresponding desired_word.
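Applied to a whole column, one possible sketch (assuming, as in the question, the messages live in data$MESSAGE and are lowercased first, which is why the replacement values are lowercased too; this uses the named-vector form of str_replace_all instead of a replacement function):
library(stringr)
abbreviations <- c("USA", "H2O", "Type 3", "T3", "bp")
desired_words <- c("United States of America", "Water", "Type 3 Disease",
                   "Type 3 Disease", "blood pressure")
# Longest abbreviations first, so e.g. "Type 3" is handled before "T3"
ord <- order(-nchar(abbreviations))
# Named vector: names are whole-word, case-insensitive regexes; values are replacements
repl <- setNames(tolower(desired_words[ord]),
                 paste0("(?i)\\b", abbreviations[ord], "\\b"))
lowercase <- tolower(data$MESSAGE)
str_replace_all(lowercase, repl)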
I'm currently working with a large data frame containing lots of text in each row and would like to effectively identify and replace misspelled words in each sentence with the hunspell package. I was able to identify the misspelled words, but can't figure out how to do hunspell_suggest on a list.
Here is an example of the data frame:
df1 <- data.frame("Index" = 1:7, "Text" = c("A complec sentence joins an independet",
"Mary and Samantha arived at the bus staton before noon",
"I did not see thm at the station in the mrning",
"The participnts read 60 sentences in radom order",
"how to fix mispelled words in R languge",
"today is Tuesday",
"bing sports quiz"))
I converted the text column into character and used hunspell to identify the misspelled words within each row.
library(hunspell)
df1$Text <- as.character(df1$Text)
df1$word_check <- hunspell(df1$Text)
I tried
df1$suggest <- hunspell_suggest(df1$word_check)
but it keeps giving this error:
Error in hunspell_suggest(df1$word_check) :
is.character(words) is not TRUE
I'm new to this, so I'm not exactly sure how the suggest column produced by the hunspell_suggest function should turn out. Any help will be greatly appreciated.
Check your intermediate steps. The output of df1$word_check is as follows:
List of 5
$ : chr [1:2] "complec" "independet"
$ : chr [1:2] "arived" "staton"
$ : chr [1:2] "thm" "mrning"
$ : chr [1:2] "participnts" "radom"
$ : chr [1:2] "mispelled" "languge"
which is of type list. If you do lapply(df1$word_check, hunspell_suggest), you can get the suggestions.
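For example, a minimal sketch of that (both new columns become list columns of the data frame from the question):
library(hunspell)
df1$Text <- as.character(df1$Text)
df1$word_check <- hunspell(df1$Text)                      # misspelled words per row
df1$suggest <- lapply(df1$word_check, hunspell_suggest)   # suggestions for each row's words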
EDIT
I decided to go into more detail on this question as I have not seen any easy alternative. This is what I have come up with:
cleantext = function(x){
  sapply(seq_along(x), function(y){
    bad  <- hunspell(x[y])[[1]]                             # misspelled words in element y
    good <- unlist(lapply(hunspell_suggest(bad), `[[`, 1))  # first suggestion for each
    if (length(bad)){
      for (i in seq_along(bad)){
        # replace each misspelled word in the enclosing x with its first suggestion
        x[y] <<- gsub(bad[i], good[i], x[y])
      }
    }
  })
  x
}
Although there is probably a more elegant way of doing it, this function returns a vector of character strings corrected like so:
> df1$Text
[1] "A complec sentence joins an independet"
[2] "Mary and Samantha arived at the bus staton before noon"
[3] "I did not see thm at the station in the mrning"
[4] "The participnts read 60 sentences in radom order"
[5] "how to fix mispelled words in R languge"
[6] "today is Tuesday"
[7] "bing sports quiz"
> cleantext(df1$Text)
[1] "A complex sentence joins an independent"
[2] "Mary and Samantha rived at the bus station before noon"
[3] "I did not see them at the station in the morning"
[4] "The participants read 60 sentences in radon order"
[5] "how to fix misspelled words in R language"
[6] "today is Tuesday"
[7] "bung sports quiz"
Watch out, as this returns the first suggestion given by hunspell - which may or may not be correct.
# parse PubMed data
library(XML) # xpath
library(rentrez) # entrez_fetch
pmids <- c("25506969","25032371","24983039","24983034","24983032","24983031","26386083",
"26273372","26066373","25837167","25466451","25013473","23733758")
# The IDs above are a mix of books and journal articles
# ID 23733758 is a journal article and has no abstract
data.pubmed <- entrez_fetch(db = "pubmed", id = pmids, rettype = "xml",
parsed = TRUE)
abstracts <- xpathApply(data.pubmed, "//Abstract", xmlValue)
names(abstracts) <- pmids
This works well if every record has an abstract. However, when there is a PMID (#23733758) without a PubMed abstract (or it is a book article or something else), that record is skipped, resulting in the error: 'names' attribute [5] must be the same length as the vector [4]
Q: How can I pass multiple paths/nodes so that I can extract journal articles, books or reviews?
UPDATE: hrbrmstr's solution helps to address the NA. But can xpathApply take multiple nodes, like c(//Abstract, //ReviewArticle, etc.)?
You have to attack it one tag element up:
abstracts <- xpathApply(data.pubmed, "//PubmedArticle//Article", function(x) {
val <- xpathSApply(x, "./Abstract", xmlValue)
if (length(val)==0) val <- NA_character_
val
})
names(abstracts) <- pmids
str(abstracts)
## List of 5
## $ 24019382: chr "Adenocarcinoma of the lung, a leading cause of cancer death, frequently displays mutational activation of the KRAS proto-oncoge"| __truncated__
## $ 23927882: chr "Mutations in components of the mitogen-activated protein kinase (MAPK) cascade may be a new candidate for target for lung cance"| __truncated__
## $ 23825589: chr "Aberrant activation of MAP kinase signaling pathway and loss of tumor suppressor LKB1 have been implicated in lung cancer devel"| __truncated__
## $ 23792568: chr "Sorafenib, the first agent developed to target BRAF mutant melanoma, is a multi-kinase inhibitor that was approved by the FDA f"| __truncated__
## $ 23733758: chr NA
Per your comment, here's an alternate way to do this:
str(xpathApply(data.pubmed, '//PubmedArticle//Article', function(x) {
xmlValue(xmlChildren(x)$Abstract)
}))
## List of 5
## $ : chr "Adenocarcinoma of the lung, a leading cause of cancer death, frequently displays mutational activation of the KRAS proto-oncoge"| __truncated__
## $ : chr "Mutations in components of the mitogen-activated protein kinase (MAPK) cascade may be a new candidate for target for lung cance"| __truncated__
## $ : chr "Aberrant activation of MAP kinase signaling pathway and loss of tumor suppressor LKB1 have been implicated in lung cancer devel"| __truncated__
## $ : chr "Sorafenib, the first agent developed to target BRAF mutant melanoma, is a multi-kinase inhibitor that was approved by the FDA f"| __truncated__
## $ : chr NA
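As for the follow-up question about multiple nodes: XPath expressions can be combined with the | union operator, so one possible sketch (untested; ReviewArticle is only a placeholder node name) is:
abstracts <- xpathApply(data.pubmed, "//PubmedArticle//Article", function(x) {
  # XPath union: match either an Abstract or a ReviewArticle child node
  val <- xpathSApply(x, "./Abstract | ./ReviewArticle", xmlValue)
  if (length(val) == 0) NA_character_ else val[1]
})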
I've trained a tree model with R caret. I'm now trying to generate a confusion matrix and keep getting the following error:
Error in confusionMatrix.default(predictionsTree, testdata$catgeory)
: the data and reference factors must have the same number of levels
prob <- 0.5 #Specify class split
singleSplit <- createDataPartition(modellingData2$category, p=prob,
times=1, list=FALSE)
cvControl <- trainControl(method="repeatedcv", number=10, repeats=5)
traindata <- modellingData2[singleSplit,]
testdata <- modellingData2[-singleSplit,]
treeFit <- train(traindata$category~., data=traindata,
trControl=cvControl, method="rpart", tuneLength=10)
predictionsTree <- predict(treeFit, testdata)
confusionMatrix(predictionsTree, testdata$catgeory)
The error occurs when generating the confusion matrix. The levels are the same on both objects, and I can't figure out what the problem is. Their structure and levels are given below.
They should be the same. Any help would be greatly appreciated, as it's driving me crazy!
> str(predictionsTree)
Factor w/ 30 levels "16-Merchant Service Charge",..: 28 22 22 22 22 6 6 6 6 6 ...
> str(testdata$category)
Factor w/ 30 levels "16-Merchant Service Charge",..: 30 30 7 7 7 7 7 30 7 7 ...
> levels(predictionsTree)
[1] "16-Merchant Service Charge" "17-Unpaid Cheque Fee" "18-Gov. Stamp Duty" "Misc" "26-Standard Transfer Charge"
[6] "29-Bank Giro Credit" "3-Cheques Debit" "32-Standing Order - Debit" "33-Inter Branch Payment" "34-International"
[11] "35-Point of Sale" "39-Direct Debits Received" "4-Notified Bank Fees" "40-Cash Lodged" "42-International Receipts"
[16] "46-Direct Debits Paid" "56-Credit Card Receipts" "57-Inter Branch" "58-Unpaid Items" "59-Inter Company Transfers"
[21] "6-Notified Interest Credited" "61-Domestic" "64-Charge Refund" "66-Inter Company Transfers" "67-Suppliers"
[26] "68-Payroll" "69-Domestic" "73-Credit Card Payments" "82-CHAPS Fee" "Uncategorised"
> levels(testdata$category)
[1] "16-Merchant Service Charge" "17-Unpaid Cheque Fee" "18-Gov. Stamp Duty" "Misc" "26-Standard Transfer Charge"
[6] "29-Bank Giro Credit" "3-Cheques Debit" "32-Standing Order - Debit" "33-Inter Branch Payment" "34-International"
[11] "35-Point of Sale" "39-Direct Debits Received" "4-Notified Bank Fees" "40-Cash Lodged" "42-International Receipts"
[16] "46-Direct Debits Paid" "56-Credit Card Receipts" "57-Inter Branch" "58-Unpaid Items" "59-Inter Company Transfers"
[21] "6-Notified Interest Credited" "61-Domestic" "64-Charge Refund" "66-Inter Company Transfers" "67-Suppliers"
[26] "68-Payroll" "69-Domestic" "73-Credit Card Payments" "82-CHAPS Fee" "Uncategorised"
Try using:
confusionMatrix(table(Argument1, Argument2))
That worked for me.
Maybe your model is not predicting a certain factor level.
Use the table() function instead of confusionMatrix() to see if that is the problem.
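A minimal sketch of that check, using the objects from the question (a level whose row or column is all zeros would point to a class the model never predicts):
# Cross-tabulate predictions against the reference without confusionMatrix()
table(predictionsTree, testdata$category)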
Try specifying na.pass for the na.action option:
predictionsTree <- predict(treeFit, testdata,na.action = na.pass)
Change them into a data frame and then use them in the confusionMatrix function:
predicted <- factor(predict(treeFit, testdata))
real <- factor(testdata$category)
my_data1 <- data.frame(data = predicted, type = "prediction")
my_data2 <- data.frame(data = real, type = "real")
my_data3 <- rbind(my_data1, my_data2)
# Check if the levels are identical
identical(levels(my_data3[my_data3$type == "prediction", 1]), levels(my_data3[my_data3$type == "real", 1]))
confusionMatrix(my_data3[my_data3$type == "prediction", 1], my_data3[my_data3$type == "real", 1], dnn = c("Prediction", "Reference"))
I had the same issue, but fixed it by changing the data right after reading the file, like so:
data = na.omit(data)
Thanks all for the pointers!
There might be missing values in testdata. Add the following line before predictionsTree <- predict(treeFit, testdata) to remove the NAs. I had the same error, and now it works for me.
testdata <- testdata[complete.cases(testdata),]
The length problem you're running into is probably due to the presence of NAs in the training set -- either drop the cases that are not complete, or impute so that you do not have missing values.
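For example, a minimal sketch of the "drop incomplete cases" option, applied before the split in the question's code:
# Keep only rows with no missing values before partitioning and training
modellingData2 <- modellingData2[complete.cases(modellingData2), ]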
Make sure you installed the package with all of its dependencies:
install.packages('caret', dependencies = TRUE)
confusionMatrix(table(prediction, true_value))
If your data contains NAs, then sometimes NA will be treated as a factor level, so omit these NAs first:
DF = na.omit(DF)
Then, if your model fit is predicting some incorrect level, it is better to use tables:
confusionMatrix(table(Arg1, Arg2))
I just ran into the same problem; I solved it by using R's ordered factor data type.
levels <- levels(predictionsTree)
levels <- levels[order(levels)]
table(ordered(predictionsTree, levels), ordered(testdata$category, levels))
I am subsetting a large ffdf object and noticed that when I use subset.ff it generates a large number of NAs. I tried an alternative way using ffwhich, and the result is much faster and no NAs are generated. Here is my test:
library(ffbase)
# deals is the ffdf I would like to subset
unique(deals$COMMODITY)
ff (open) integer length=7 (7) levels: CASH CO2 COAL ELEC GAS GCERT OIL
[1] [2] [3] [4] [5] [6] [7]
CASH CO2 COAL ELEC GAS GCERT OIL
# Using subset.ff
started.at=proc.time()
deals0 <- subset.ff(deals,deals$COMMODITY %in% c("CASH","COAL","CO2","ELEC","GCERT"))
cat("Finished in",timetaken(started.at),"\n")
Finished in 12.640sec
# NAs are generated
unique(deals0$COMMODITY)
ff (open) integer length=8 (8) levels: CASH CO2 COAL ELEC GAS GCERT OIL <NA>
[1] [2] [3] [4] [5] [6] [7] [8]
CASH CO2 COAL ELEC GAS GCERT OIL NA
# Subset using ffwhich
started.at=proc.time()
idx <- ffwhich(deals,COMMODITY %in% c("CASH","COAL","CO2","ELEC","GCERT"))
deals1 <- deals[idx,]
cat("Finished in",timetaken(started.at),"\n")
Finished in 3.130sec
# No NAs are generated
unique(deals1$COMMODITY)
ff (open) integer length=7 (7) levels: CASH CO2 COAL ELEC GAS GCERT OIL
[1] [2] [3] [4] [5] [6] [7]
CASH CO2 COAL ELEC GAS GCERT OIL
Any idea why this is happening?
subset.ff is probably using "[" with your criterion but not including a !is.na(.) clause. The default for "[" is to return items where the criterion vector is either TRUE or NA. The regular subset function adds a !is.na(.) clause, but maybe the authors of ffbase didn't get around to that.
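A small base-R illustration of that behaviour (not ff-specific; hypothetical toy data frame):
# An NA in a logical index yields a row of NAs unless it is excluded explicitly
df <- data.frame(x = c(1, NA, 3), y = c("a", "b", "c"))
df[df$x > 1, ]                    # the NA element produces an all-NA row
df[!is.na(df$x) & df$x > 1, ]     # the NA row is dropped
subset(df, x > 1)                 # subset() adds the !is.na() clause itself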