I am using the VCorpus() function in the R package tm. Here is the problem I have:
example_text = data.frame(num=c(1,2,3),Author1 = c("Text mining is a great time.","Text analysis provides insights","qdap and tm are used in text mining"),Author2=c("R is a great language","R has many uses","DataCamp is cool!"))
This looks like
  num                             Author1               Author2
1   1        Text mining is a great time. R is a great language
2   2     Text analysis provides insights       R has many uses
3   3 qdap and tm are used in text mining     DataCamp is cool!
Then I type df_source = DataframeSource(example_text[,2:3]) to extract only the last two columns.
df_source looks correct. After that, I ran df_corpus = VCorpus(df_source), and df_corpus[[1]] is
<<PlainTextDocument>>
Metadata: 7
Content: chars: 2
And df_corpus[[1]][1] gives me
$content
[1] "3" "3"
But df_corpus[[1]] should return
<<PlainTextDocument>>
Metadata: 7
Content: chars: 49
And df_corpus[[1]][1] should return
$content
[1] "Text mining is a great time." "R is a great language"
I don't know where this goes wrong. Any suggestions would be appreciated.
The texts inside example_text that are supposed to be character have all become factors, because the 'factory-fresh' default of stringsAsFactors is TRUE, which is weird and annoying from my point of view.
example_text <- data.frame(num=c(1,2,3),Author1 = c("Text mining is a great time.","Text analysis provides insights","qdap and tm are used in text mining"),Author2=c("R is a great language","R has many uses","DataCamp is cool!"))
lapply(example_text, class)
# $num
# [1] "numeric"
#
# $Author1
# [1] "factor"
#
# $Author2
# [1] "factor"
To ensure that the columns Author1 and Author2 are character columns, you may try one of the following:
Add options(stringsAsFactors = FALSE) at the beginning of your code.
Add stringsAsFactors = FALSE inside your data.frame(...) statement.
Run example_text[, 2:3] <- lapply(example_text[, 2:3], as.character)
Run example_text[, 2:3] <- lapply(example_text[, 2:3], paste)
Then everything should work fine.
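For concreteness, here is a minimal sketch of the second option. It assumes a tm version in which DataframeSource() still accepts a plain data frame of text columns (recent tm releases instead expect doc_id and text columns):
library(tm)

example_text <- data.frame(num = c(1, 2, 3),
                           Author1 = c("Text mining is a great time.",
                                       "Text analysis provides insights",
                                       "qdap and tm are used in text mining"),
                           Author2 = c("R is a great language",
                                       "R has many uses",
                                       "DataCamp is cool!"),
                           stringsAsFactors = FALSE)  # keep the text as character, not factor

df_source <- DataframeSource(example_text[, 2:3])
df_corpus <- VCorpus(df_source)
df_corpus[[1]][1]
# $content
# [1] "Text mining is a great time." "R is a great language"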
Related
I have a large character vector of Japanese words (mixed kanji and kana) which needs to be romanized (to romaji).
However, with the available functions (zipangu::str_conv_romanhira() and audubon::strj_romanize()), I am not getting the desired results.
For example, for 北海道 (Hokkaido), zipangu::str_conv_romanhira() converts it to Chinese pinyin and audubon::strj_romanize() converts only the kana characters.
How can I convert such mixed kana and kanji text to romaji?
library(zipangu)
library(stringi)
library(audubon)
str_conv_romanhira("北海道", "roman")
#> [1] "běi hǎi dào"
stri_trans_general("北海道", "Any-Latin")
#> [1] "běi hǎi dào"
strj_romanize("北海道")
#> [1] ""
There aren't any R packages that I can see which provide transliteration of Japanese kanji to romaji (at least none currently on CRAN). It's easy enough, however, to use the Python module pykakasi via R to achieve this:
library(reticulate)
py_install("pykakasi") # Only need to install once
# Make module available in R
pykakasi <- import("pykakasi")
# Alias the convert function for convenience
convert <- pykakasi$kakasi()$convert
convert("北海道")
[[1]]
[[1]]$orig
[1] "北海道"
[[1]]$hira
[1] "ほっかいどう"
[[1]]$kana
[1] "ホッカイドウ"
[[1]]$hepburn
[1] "hokkaidou"
[[1]]$kunrei
[1] "hokkaidou"
[[1]]$passport
[1] "hokkaidou"
# Function to extract romaji and collapse
to_romaji <- function(txt) {
  paste(sapply(convert(txt), `[[`, "hepburn"), collapse = " ")
}
# Test on some longer text
lapply(c("北海道", "石の上にも三年", "豚に真珠"), to_romaji)
[[1]]
[1] "hokkaidou"
[[2]]
[1] "ishi no ueni mo sannen"
[[3]]
[1] "buta ni shinju"
This is my first time asking a question on here, so I hope I don't miss any crucial parts. I want to perform sentiment analysis on windows of speeches around certain keywords. My dataset is a large CSV file containing a number of speeches, but I'm only interested in the sentiment of the words immediately surrounding certain key words.
I was told that the quanteda package in R would likely be my best bet for finding such a function, but I've been unsuccessful in locating it so far. If anyone knows how to do such a task, it would be greatly appreciated!
Reprex (I hope?) below:
speech = c("This is the first speech. Many words are in this speech, but only few are relevant for my research question. One relevant word, for example, is the word stackoverflow. However there are so many more words that I am not interested in assessing the sentiment of", "This is a second speech, much shorter than the first one. It still includes the word of interest, but at the very end. stackoverflow.", "this is the third speech, and this speech does not include the word of interest so I'm not interested in assessing this speech.")
data <- data.frame(id = 1:3,
                   speechContent = speech)
I'd suggest using tokens_select() with the window argument set to a range of tokens surrounding your target terms.
To take your example, if "stackoverflow" is the target term, and you want to measure sentiment in the +/- 10 tokens around that, then this would work:
library("quanteda")
## Package version: 3.2.1
## Unicode version: 13.0
## ICU version: 69.1
## Parallel computing: 8 of 8 threads used.
## See https://quanteda.io for tutorials and examples.
## [CODE FROM ABOVE]
corp <- corpus(data, text_field = "speechContent")
toks <- tokens(corp) %>%
  tokens_select("stackoverflow", window = 10)
toks
## Tokens consisting of 3 documents and 1 docvar.
## text1 :
## [1] "One" "relevant" "word" ","
## [5] "for" "example" "," "is"
## [9] "the" "word" "stackoverflow" "."
## [ ... and 9 more ]
##
## text2 :
## [1] "word" "of" "interest" ","
## [5] "but" "at" "the" "very"
## [9] "end" "." "stackoverflow" "."
##
## text3 :
## character(0)
There are many ways to compute sentiment from this point. An easy one is to apply a sentiment dictionary, e.g.
tokens_lookup(toks, data_dictionary_LSD2015) %>%
  dfm()
## Document-feature matrix of: 3 documents, 4 features (91.67% sparse) and 1 docvar.
## features
## docs negative positive neg_positive neg_negative
## text1 0 1 0 0
## text2 0 0 0 0
## text3 0 0 0 0
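If you want one score per document rather than raw counts, one option (my own sketch using only standard quanteda functions, not something from the original answer) is to convert the dfm to a data frame and take positive minus negative:
sent <- convert(dfm(tokens_lookup(toks, data_dictionary_LSD2015)),
                to = "data.frame")
sent$net_sentiment <- sent$positive - sent$negative
sent[, c("doc_id", "negative", "positive", "net_sentiment")]
##   doc_id negative positive net_sentiment
## 1  text1        0        1             1
## 2  text2        0        0             0
## 3  text3        0        0             0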
Using quanteda:
library(quanteda)
corp <- corpus(data, docid_field = "id", text_field = "speechContent")
x <- kwic(tokens(corp, remove_punct = TRUE),
          pattern = "stackoverflow",
          window = 3)
x
Keyword-in-context with 2 matches.
[1, 29] is the word | stackoverflow | However there are
[2, 24] the very end | stackoverflow |
as.data.frame(x)
  docname from to          pre       keyword              post       pattern
1       1   29 29  is the word stackoverflow However there are stackoverflow
2       2   24 24 the very end stackoverflow                   stackoverflow
Now read the help for kwic (use ?kwic in the console) to see what kinds of patterns you can use. With tokens() you can specify which data cleaning you want to apply before using kwic; in my example I removed the punctuation.
The end result is a data frame with the window before and after the keyword(s), in this example a window of length 3. After that you can run some form of sentiment analysis on the pre and post results (or paste them together first), for example as in the sketch below.
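Here is a minimal sketch (my own addition, not part of the answer above) that turns each kwic match into one text snippet per keyword occurrence:
windows <- as.data.frame(x)
# combine the context before and after the keyword into one string
windows$window_text <- paste(windows$pre, windows$post)
windows[, c("docname", "window_text")]
From there, windows$window_text can be scored with whatever sentiment dictionary or model you prefer.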
From other answers on this site to similar questions, and from pages like http://www.r-tutor.com/r-introduction/data-frame/data-frame-column-vector , it seems that when I extract a variable from a data.frame, data[ , "col"] and data$col should yield the same result. But now I have some data in Excel:
LU                                       Urban_LU     LU_Index  Urban_LU_index
Residential                              Residential         2               0
Rural residential                        Residential         3               0
Commercial                               Commercial          4               1
Public institutions including education  Industrial          5               1
Industry                                 Industrial          7               2
and I read it with read_excel from the readxl package:
library(readxl)
data <- read_excel("data.xlsx", "Sheet 1")
Now I extract a single variable from the data frame, using [ or $:
data[ , "LU"]
# Source: local data frame [5 x 1]
#
# LU
# (chr)
# 1 Residential
# 2 Rural residential
# 3 Commercial
# 4 Public institutions including education
# 5 Industry
data$LU
# [1] "Residential" "Rural residential"
# [3] "Commercial" "Public institutions including education"
# [5] "Industry"
length(data[ , "LU"])
# [1] 1
length(data$LU)
# [1] 5
Also, what I find suspicious are the classes of the data obtained from read_excel and the data which results from the two different modes of extraction:
class(data)
# [1] "tbl_df" "tbl" "data.frame"
class(data[ , "LU"])
# [1] "tbl_df" "data.frame"
class(data$LU)
# [1] "character"
So what's the difference between [ , "col"] and $col? Am I missing something from the manuals or is this a special case? Also, what's with the tbl_df and tbl class identifiers? I suspect that they are the cause of my confusion, what do they mean?
More of an extended comment:
The fact that readxl::read_excel returns output of class tbl_df seems poorly documented in ?read_excel. This behaviour was mentioned in the announcement of readxl on the RStudio blog, though:
"[read_excel r]eturns output with class c("tbl_df", "tbl", "data.frame")"
To learn more about tbl_df, we need to consult the dplyr help pages. In the Methods section of ?dplyr::tbl_df, we find that
"tbl_df implements two important base methods: [ Never simplifies (drops), so always returns data.frame".
For more background, read about the drop argument in ?[.data.frame.
Related Q&A: Extract a dplyr tbl column as a vector and Best practice to get a dropped column in dplyr tbl_df.
See also the 'original' issue on github and the discussion therein.
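As a practical takeaway (my own addition, not from the linked answers), here are a few standard ways to get an ordinary character vector out of a tbl_df even though [ never drops:
data[["LU"]]                 # [[ always returns the column itself
dplyr::pull(data, LU)        # dplyr's helper for exactly this
as.data.frame(data)[, "LU"]  # back to a base data.frame, where drop = TRUE applies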
I have Twitter data. Using library(stringr) I have extracted all the weblinks. However, when I try to do the same for hashtags, I am getting an error. The same code had worked some days ago. The following is the code:
library(stringr)
hash <- "#[a-zA-Z0-9]{1, }"
hashtag <- str_extract_all(travel$texts, hash)
The following is the error:
Error in stri_extract_all_regex(string, pattern, simplify = simplify, :
Error in {min,max} interval. (U_REGEX_BAD_INTERVAL)
I have re-installed the stringr package, but it doesn't help.
The code that I used for weblink is:
pat1 <- "http://t.co/[a-zA-Z0-9]{1,}"
twitlink <- str_extract_all(travel$texts, pat1)
A reproducible example is as follows:
rtt <- structure(data.frame(texts = c("Review Anthem of the Seas Anthems maiden voyage httptcoLPihj2sNEP #stevenewman", "#Job #Canada #Marlin Travel Agentagente de voyages Full Time in #St Catharines ON httptconMHNlDqv69", "Experience #Fiji amp #NewZealand like never before on a great 10night voyage 4033 pp departing Vancouver httptcolMvChSpaBT"), source = c("Twitter Web Client", "Catch a Job Canada", "Hootsuite"), tweet_time = c("2015-05-07 19:32:58", "2015-05-07 19:37:03", "2015-05-07 20:45:36")))
Your problem comes from the whitespace in the hash pattern:
# Not working (note the whitespace after the comma)
str_extract_all(rtt$texts, "#[a-zA-Z0-9]{1, }")
# Working
str_extract_all(rtt$texts, "#[a-zA-Z0-9]{1,}")
You may want to consider using the qdapRegex package, which I maintain, for this task. It makes extracting URLs and hashtags easy. qdapRegex contains a bunch of canned regular expressions and uses the amazing stringi package as a backend to do the regex work.
rtt <- structure(data.frame(texts = c("Review Anthem of the Seas Anthems maiden voyage httptcoLPihj2sNEP #stevenewman", "#Job #Canada #Marlin Travel Agentagente de voyages Full Time in #St Catharines ON httptconMHNlDqv69", "Experience #Fiji amp #NewZealand like never before on a great 10night voyage 4033 pp departing Vancouver httptcolMvChSpaBT"), source = c("Twitter Web Client", "Catch a Job Canada", "Hootsuite"), tweet_time = c("2015-05-07 19:32:58", "2015-05-07 19:37:03", "2015-05-07 20:45:36")))
library(qdapRegex)
## first combine the built in url + twitter regexes into a function
rm_twitter_n_url <- rm_(pattern=pastex("@rm_twitter_url", "@rm_url"), extract=TRUE)
rm_twitter_n_url(rtt$texts)
rm_hash(rtt$texts, extract=TRUE)
Giving the following output:
## > rm_twitter_n_url(rtt$texts)
## [[1]]
## [1] "httptcoLPihj2sNEP"
##
## [[2]]
## [1] "httptconMHNlDqv69"
##
## [[3]]
## [1] "httptcolMvChSpaBT"
## > rm_hash(rtt$texts, extract=TRUE)
## [[1]]
## [1] "#stevenewman"
##
## [[2]]
## [1] "#Job" "#Canada" "#Marlin" "#St"
##
## [[3]]
## [1] "#Fiji" "#NewZealand"
> titletool <- read.csv("TotalCSVData.csv", header = FALSE, sep = ",")
> class(titletool)
[1] "data.frame"
> titletool[1, 1]
[1] Experiment name : CONTROL DB AD_1
> t <- titletool[1, 1]
> t
[1] Experiment name : CONTROL DB AD_1
> class(t)
[1] "character"
Now I want to create an object (vector) with the name "Experiment name : CONTROL DB AD_1", or even better, if possible, CONTROL DB AD_1.
Thank you
Use assign:
varname <- "Experiment name : CONTROL DB AD_1"
assign(varname, 3.14158)
get("Experiment name : CONTROL DB AD_1")
[1] 3.14158
And you can use a regular expression and sub or gsub to remove some text from a string:
cleanVarname <- sub("Experiment name : ", "", varname)
assign(cleanVarname, 42)
get("CONTROL DB AD_1")
[1] 42
But let me warn you this is an unusual thing to do.
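A safer pattern (my own suggestion, not part of the answer above) is to keep such values in a named list instead of creating free-standing variables:
results <- list()
results[[cleanVarname]] <- 42  # cleanVarname is "CONTROL DB AD_1" from above
results[["CONTROL DB AD_1"]]
# [1] 42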
Here be dragons.
If I understand correctly, you have a bunch of CSV files, each with multiple experiments in them, named in the pattern "Experiment ...". You now want to read each of these "experiments" into R in an efficient way.
Here's a not-so-pretty (but not-so-ugly either) function that might get you started in the right direction.
What the function basically does is read in the CSV, identify the line numbers where each new experiment starts, grabs the names of the experiments, then does a loop to fill in a list with the separate data frames. It doesn't really bother making "R-friendly" names though, and I've decided to leave the output in a list, because as Andrie pointed out, "R has great tools for working with lists."
read.funkyfile = function(funkyfile, expression, ...) {
  temp = readLines(funkyfile)
  # Line numbers where each new experiment starts, plus one past the end
  temp.loc = grep(expression, temp)
  temp.loc = c(temp.loc, length(temp) + 1)
  # Experiment names, with punctuation stripped
  temp.nam = gsub("[[:punct:]]", "",
                  grep(expression, temp, value = TRUE))
  temp.out = vector("list")
  for (i in 1:length(temp.nam)) {
    # Read the block of lines between this header and the next one
    temp.out[[i]] = read.csv(textConnection(
      temp[seq(from = temp.loc[i] + 1,
               to = temp.loc[i + 1] - 1)]),
      ...)
    names(temp.out)[i] = temp.nam[i]
  }
  temp.out
}
Here is an example CSV file. Copy and paste it into a text editor and save it as "funkyfile1.csv" in the current working directory. (Or, read it in from Dropbox: http://dl.dropbox.com/u/2556524/testing/funkyfile1.csv)
"Experiment Name: Here Be",,
1,2,3
4,5,6
7,8,9
"Experiment Name: The Dragons",,
10,11,12
13,14,15
16,17,18
Here is a second CSV. Again, copy-paste and save it as "funkyfile2.csv" in your current working directory. (Or, read it in from Dropbox: http://dl.dropbox.com/u/2556524/testing/funkyfile2.csv)
"Promises: I vow to",,
"H1","H2","H3"
19,20,21
22,23,24
25,26,27
"Promises: Slay the dragon",,
"H1","H2","H3"
28,29,30
31,32,33
34,35,36
Notice that funkyfile1 has no column names, while funkyfile2 does. That's what the ... argument in the function is for: to specify header=TRUE or header=FALSE. Also the "expression" identifying each new set of data is "Promises" in funkyfile2.
Now, use the function:
read.funkyfile("funkyfile1.csv", "Experiment", header=FALSE)
# read.funkyfile("http://dl.dropbox.com/u/2556524/testing/funkyfile1.csv",
# "Experiment", header=FALSE) # Uncomment to load remotely
# $`Experiment Name Here Be`
# V1 V2 V3
# 1 1 2 3
# 2 4 5 6
# 3 7 8 9
#
# $`Experiment Name The Dragons`
# V1 V2 V3
# 1 10 11 12
# 2 13 14 15
# 3 16 17 18
read.funkyfile("funkyfile2.csv", "Promises", header=TRUE)
# read.funkyfile("http://dl.dropbox.com/u/2556524/testing/funkyfile2.csv",
# "Experiment", header=TRUE) # Uncomment to load remotely
# $`Promises I vow to`
# H1 H2 H3
# 1 19 20 21
# 2 22 23 24
# 3 25 26 27
#
# $`Promises Slay the dragon`
# H1 H2 H3
# 1 28 29 30
# 2 31 32 33
# 3 34 35 36
Go get those dragons.
Update
If your data are all in the same format, you can use the lapply solution mentioned by Andrie along with this function. Just make a list of the CSVs that you want to load, as below. Note that all the files need to use the same "expression" and other arguments, the way the function is currently written.
temp = list("http://dl.dropbox.com/u/2556524/testing/funkyfile1.csv",
"http://dl.dropbox.com/u/2556524/testing/funkyfile3.csv")
lapply(temp, read.funkyfile, "Experiment", header=FALSE)