How do I handle or get rid of emoticons so that I can sort tweets for sentiment analysis?
I'm getting:
Error in sort.list(y) :
invalid input
Thanks.
This is how the emoticons come out looking when they arrive from Twitter into R:
\xed��\xed�\u0083\xed��\xed��
\xed��\xed�\u008d\xed��\xed�\u0089
This should get rid of the emoticons, using iconv as suggested by ndoogan.
Some reproducible data:
require(twitteR)
# note that I had to register my twitter credentials first
# here's the method: http://stackoverflow.com/q/9916283/1036500
s <- searchTwitter('#emoticons', cainfo="cacert.pem")
# convert to data frame
df <- do.call("rbind", lapply(s, as.data.frame))
# inspect, yes there are some odd characters in row five
head(df)
text
1 ROFLOL: echte #emoticons [humor] http://t.co/0d6fA7RJsY via #tweetsmania ;-)
2 “#teeLARGE: when tmobile get the iphone in 2 wks im killin everybody w/ emoticons & \nall the other stuff i cant see on android!" \n#Emoticons
3 E poi ricevi dei messaggi del genere da tua mamma xD #crazymum #iloveyou #emoticons #aiutooo #bestlike http://t.co/Yee1LB9ZQa
4 #emoticons I want to change my name to an #emoticon. Is it too soon? #prince http://t.co/AgmR5Lnhrk
5 I use emoticons too much. #addicted #admittingit #emoticons <ed><U+00A0><U+00BD><ed><U+00B8><U+00AC><ed><U+00A0><U+00BD><ed><U+00B8><U+0081> haha
6 What you text What I see #Emoticons http://t.co/BKowBSLJ0s
Here's the key line that will remove the emoticons:
# Clean text to remove odd characters
df$text <- sapply(df$text,function(row) iconv(row, "latin1", "ASCII", sub=""))
Now inspect again to see that the odd characters are gone (see row 5):
head(df)
text
1 ROFLOL: echte #emoticons [humor] http://t.co/0d6fA7RJsY via #tweetsmania ;-)
2 #teeLARGE: when tmobile get the iphone in 2 wks im killin everybody w/ emoticons & \nall the other stuff i cant see on android!" \n#Emoticons
3 E poi ricevi dei messaggi del genere da tua mamma xD #crazymum #iloveyou #emoticons #aiutooo #bestlike http://t.co/Yee1LB9ZQa
4 #emoticons I want to change my name to an #emoticon. Is it too soon? #prince http://t.co/AgmR5Lnhrk
5 I use emoticons too much. #addicted #admittingit #emoticons haha
6 What you text What I see #Emoticons http://t.co/BKowBSLJ0s
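If you want to try the cleaning step without a Twitter connection, here is a minimal stand-alone sketch with a made-up sample string (not data from the question). Declaring the source encoding as "UTF-8" rather than "latin1" may be more accurate for modern Twitter data; either way, sub = "" drops every character that cannot be represented in ASCII, which includes emoji:

```r
# Made-up sample string containing an emoji
x <- "I use emoticons too much \U0001F600 haha"

# Convert to ASCII, dropping (sub = "") anything that can't be represented
iconv(x, "UTF-8", "ASCII", sub = "")
# -> "I use emoticons too much  haha"
```

Note that this strips all non-ASCII characters, so accented letters in non-English tweets are removed as well.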
I recommend the function:
ji_replace_all(string, replacement)
from the emo package, which can be installed with:
devtools::install_github("hadley/emo")
I needed to remove the emojis from tweets written in Spanish. I tried several options, but some mangled the text. This one, however, works perfectly:
library(emo)
text="#VIDEO 😢💔🙏🏻,Alguien sabe si en Afganistán hay cigarro?"
ji_replace_all(text,"")
Result:
"#VIDEO ,Alguien sabe si en Afganistán hay cigarro?"
You can use a regular expression to detect words with no alphanumeric characters and remove them. Sample code (note that | inside a character class matches a literal pipe, so the class is written [a-z0-9]):
rmNonAlphabet <- function(str) {
  words <- unlist(strsplit(str, " "))
  # keep only the words containing at least one letter or digit
  in.alphabet <- grep("[a-z0-9]", words, ignore.case = TRUE)
  paste(words[in.alphabet], collapse = " ")
}
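A quick usage check on a made-up string (not data from the question); the emoji-only token contains no letters or digits, so it is dropped:

```r
rmNonAlphabet <- function(str) {
  words <- unlist(strsplit(str, " "))
  # keep only the words containing at least one letter or digit
  # (| inside a character class would match a literal pipe, so use [a-z0-9])
  in.alphabet <- grep("[a-z0-9]", words, ignore.case = TRUE)
  paste(words[in.alphabet], collapse = " ")
}

rmNonAlphabet("I use emoticons too much \U0001F600 haha")
# -> "I use emoticons too much haha"
```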
I have been practicing web scraping from Wikipedia with the rvest library, and I would like to solve a problem I ran into when using the str_replace_all() function. Here is the code:
library(tidyverse)
library(rvest)
pagina <- read_html("https://es.wikipedia.org/wiki/Anexo:Premio_Grammy_al_mejor_%C3%A1lbum_de_rap") %>%
# list all tables on the page
html_nodes(css = "table") %>%
# convert to a table
html_table()
rap <- pagina[[2]]
rap <- rap[, -c(5)]
rap$Artista <- str_replace_all(rap$Artista, '\\[[^\\]]*\\]', '')
rap$Trabajo <- str_replace_all(rap$Trabajo, '\\[[^\\]]*\\]', '')
table(rap$Artista)
The problem is that after I remove the bracketed elements (Wikipedia footnote links) from the Artista column, tabulating the counts by artist shows Eminem three times, as if it were three different artists; the same happens with Kanye West, which appears twice. I appreciate any solutions in advance.
There are some hidden characters still attached to the strings, and trimws() does not remove them. You can use nchar(sort(test)) to see the number of characters associated with each entry.
Here is a messy regular expression that extracts the letters, spaces, commas, & and -, and skips everything else at the end.
rap <- pagina[[2]]
rap <- rap[, -c(5)]
rap$Artista<-gsub("([a-zA-Z -,&]+).*", "\\1", rap$Artista)
rap$Trabajo <- stringr::str_replace_all(rap$Trabajo, '\\[[^\\]]*\\]', '')
table(rap$Artista)
Cardi B Chance the Rapper Drake Eminem Jay Kanye West Kendrick Lamar
1 1 1 6 1 4 2
Lil Wayne Ludacris Macklemore & Ryan Lewis Nas Naughty by Nature Outkast Puff Daddy
1 1 1 1 1 2 1
The Fugees Tyler, the Creator
1 2
Here is another regular expression that seems a bit clearer:
gsub("[^[:alpha:]]*$", "", rap$Artista)
From the end of the string, this replaces zero or more characters that are not letters (a to z or A to Z).
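A toy demonstration with made-up strings (not the real scraped table): three values that print identically but tabulate separately until the trailing non-letters are stripped:

```r
# Three "Eminem" strings: clean, with a zero-width space, with a trailing blank
artista <- c("Eminem", "Eminem\u200b", "Eminem ")
table(artista)   # tabulates as three different artists

# Strip the run of trailing non-letter characters
clean <- gsub("[^[:alpha:]]*$", "", artista)
table(clean)     # now a single entry
```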
I want to extract the state abbreviation (2 letters) and ZIP code (either 4 or 5 digits) from the following string:
address <- "19800 Eagle River Road, Eagle River AK 99577
907-481-1670
230 Colonial Promenade Pkwy, Alabaster AL 35007
205-620-0360
360 Connecticut Avenue, Norwalk CT 06854
860-409-0404
2080 S Lincoln, Jerome ID 83338
208-324-4333
20175 Civic Center Dr, Augusta ME 4330
207-623-8223
830 Harvest Ln, Williston VT 5495
802-878-5233
"
For the ZIP code, I tried a few methods that I found on here, but they didn't work, mainly because of the 5-digit street numbers and the ZIP codes that have only 4 digits:
text <- readLines(textConnection(address))
library(stringi)
zip <- stri_extract_last_regex(text, "\\d{5}")
zip
library(qdapRegex)
rm_zip3 <- rm_(pattern="(?<!\\d)\\d{5}(?!\\d)", extract = TRUE)
zip <- rm_zip3(text)
zip
[1] "99577" "1670" "35007" "0360" "06854" "0404" "83338" "4333" "4330" "8223" "5495" "5233" NA
For the state abbreviation, I have no idea how to extract it.
Any help is appreciated! Thanks in advance!
Edit 1: Include phone numbers
Code to extract zip code:
zip <- str_extract(text, "\\d{5}")
Code to extract state code:
states <- str_extract(text, "\\b[A-Z]{2}(?=\\s+\\d{5}$)")
Code to extract phone numbers:
phone <- str_extract(text, "\\b\\d{3}-\\d{3}-\\d{4}\\b")
NOTE: Looks like there's an issue with your data: the last two ZIP codes should be 5 characters long, not 4 (4330 should actually be 04330). If you don't have control over the data source but know for sure these are US codes, you could left-pad with 0 as required. Since you are looking for a solution for 4 or 5 characters, though, you can use this:
Code to extract the ZIP code (it looks for a space in front and a newline or end-of-string behind, so that parts of a phone number or street address aren't picked up):
zip <- str_extract(text, "(?<= )\\d{4,5}(?=\\n|$)")
Code to extract state code:
states <- str_extract(text, "\\b[A-Z]{2}(?=\\s+\\d{4,5}$)")
Demo: https://regex101.com/r/7Im0Mu/2
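The same lookaround patterns also work in base R via regexpr() with perl = TRUE, if you prefer not to load stringr. A sketch on three sample lines from the question:

```r
text <- c("19800 Eagle River Road, Eagle River AK 99577",
          "907-481-1670",
          "20175 Civic Center Dr, Augusta ME 4330")

# State: two capitals followed by a 4-5 digit ZIP at the end of the line
regmatches(text, regexpr("[A-Z]{2}(?=\\s+\\d{4,5}$)", text, perl = TRUE))
# -> "AK" "ME"   (non-matching lines are dropped)

# ZIP: 4-5 digits at the end of the line, preceded by a space
regmatches(text, regexpr("(?<= )\\d{4,5}$", text, perl = TRUE))
# -> "99577" "4330"
```

Note that regmatches() silently drops the elements with no match (here, the phone-number line), unlike str_extract(), which returns NA for them.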
I am using address as the input, not text; see if this works for your case.
Assumptions behind the regex: two capital letters followed by 4 or 5 digits are the state and ZIP code, and the phone number is always on the next line.
Input:
address <- "19800 Eagle River Road, Eagle River AK 99577
907-481-1670
230 Colonial Promenade Pkwy, Alabaster AL 35007
205-620-0360
360 Connecticut Avenue, Norwalk CT 06854
860-409-0404
2080 S Lincoln, Jerome ID 83338
208-324-4333
20175 Civic Center Dr, Augusta ME 4330
207-623-8223
830 Harvest Ln, Williston VT 5495
802-878-5233
"
I am using the stringr library; you may choose any other to extract the information.
library(stringr)
df <- data.frame(do.call("rbind",strsplit(str_extract_all(address,"[A-Z][A-Z]\\s\\d{4,5}\\s\\d{3}-\\d{3}-\\d{4}")[[1]],split="\\s|\\n")))
names(df) <- c("state","Zip","Phone")
EDIT:
In case someone wants to use text as the input:
text <- readLines(textConnection(address))
text <- data.frame(text)
st_zip <- setNames(data.frame(str_extract_all(text$text,"[A-Z][A-Z]\\s\\d{4,5}",simplify = T)),"St_zip")
pin <- setNames(data.frame(str_extract_all(text$text,"\\d{3}-\\d{3}-\\d{4}",simplify = T)),"pin")
st_zip <- st_zip[st_zip$St_zip != "",]
df1 <- setNames(data.frame(do.call("rbind",strsplit(st_zip,split=' '))),c("State","Zip"))
pin <- pin[pin$pin != "",]
df2 <- data.frame(cbind(df1,pin))
OUTPUT:
State Zip pin
1 AK 99577 907-481-1670
2 AL 35007 205-620-0360
3 CT 06854 860-409-0404
4 ID 83338 208-324-4333
5 ME 4330 207-623-8223
6 VT 5495 802-878-5233
Thank you #Rahul. Both would be great. At least can you show me how to do it with Notepad++?
Extraction using Notepad++
First, copy your whole data into a file.
Press Ctrl + F to open the search dialog box. In the Replace tab, search with the regex ([A-Z]{2}\s*\d{4,5})$ and replace with \n-\1-\n. This finds each state abbreviation and ZIP code and places them on their own line with - as prefix and suffix.
Now go to the Mark tab. Check the Bookmark Line checkbox, then search with -(.*?)- and press Mark All. This bookmarks the lines holding a state abbreviation and ZIP, which are wrapped in -.
Now go to Search --> Bookmark --> Remove Unmarked Lines.
Finally, search with ^-|-$ and replace with an empty string.
Update
So now there are phone numbers too? In that case you only need to remove the $ from the regex in step 2; use ([A-Z]{2}\s*\d{4,5}) instead. All other steps stay the same.
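For reference, step 2's capture pattern behaves the same way outside Notepad++; here is a small base-R sketch on one made-up address line:

```r
line <- "230 Colonial Promenade Pkwy, Alabaster AL 35007"

# Greedily skip everything up to the final "two capitals + 4-5 digits" group
sub(".*\\b([A-Z]{2}\\s*\\d{4,5})$", "\\1", line)
# -> "AL 35007"
```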
I am using the VCorpus() function in the R package tm. Here is the problem I have:
example_text = data.frame(num = c(1, 2, 3),
                          Author1 = c("Text mining is a great time.",
                                      "Text analysis provides insights",
                                      "qdap and tm are used in text mining"),
                          Author2 = c("R is a great language",
                                      "R has many uses",
                                      "DataCamp is cool!"))
This looks like
num Author1 Author2
1 1 Text mining is a great time. R is a great language
2 2 Text analysis provides insights R has many uses
3 3 qdap and tm are used in text mining DataCamp is cool!
Then I type df_source = DataframeSource(example_text[,2:3]) to only extract the last 2 columns.
df_source looks correct. After that, I did df_corpus = VCorpus(df_source) and df_corpus[[1]] is
<<PlainTextDocument>>
Metadata: 7
Content: chars: 2
And df_corpus[[1]][1] gives me
$content
[1] "3" "3"
But df_corpus[[1]] should return
<<PlainTextDocument>>
Metadata: 7
Content: chars: 49
And df_corpus[[1]][1] should return
$content
[1] "Text mining is a great time." "R is a great language"
I don't know where this goes wrong. Any suggestions would be appreciated.
The texts inside example_text that are supposed to be character have all become factors, because the 'factory-fresh' default of stringsAsFactors is TRUE (in R versions before 4.0.0), which is weird and annoying from my point of view.
example_text <- data.frame(num = c(1, 2, 3),
                           Author1 = c("Text mining is a great time.",
                                       "Text analysis provides insights",
                                       "qdap and tm are used in text mining"),
                           Author2 = c("R is a great language",
                                       "R has many uses",
                                       "DataCamp is cool!"))
lapply(example_text, class)
# $num
# [1] "numeric"
#
# $Author1
# [1] "factor"
#
# $Author2
# [1] "factor"
To ensure that the columns Author1 and Author2 are character columns, you may try one of the following:
Add options(stringsAsFactors = FALSE) at the beginning of your code.
Add stringsAsFactors = FALSE inside your data.frame(...) statement.
Run example_text[, 2:3] <- lapply(example_text[, 2:3], as.character)
Run example_text[, 2:3] <- lapply(example_text[, 2:3], paste)
Then everything should work fine.
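A minimal sketch of option 2: passing stringsAsFactors = FALSE to data.frame() keeps the text columns as character, which is what DataframeSource()/VCorpus() need in order to see the actual texts:

```r
example_text <- data.frame(
  num = c(1, 2, 3),
  Author1 = c("Text mining is a great time.",
              "Text analysis provides insights",
              "qdap and tm are used in text mining"),
  Author2 = c("R is a great language",
              "R has many uses",
              "DataCamp is cool!"),
  stringsAsFactors = FALSE   # keep text as character, not factor
)

sapply(example_text[, 2:3], class)
# -> Author1 and Author2 are both "character"
```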
I have used adist to calculate the number of characters that differ between two strings:
a <- "#IvoryCoast TENNIS US OPEN Clément «Un beau combat» entre Simon et Cilic"
b <- "Clément «Un beau combat» entre Simon et Cilic"
adist(a,b) # result 27
Now I would like to extract all the occurrences of those characters that differ. In my example, I would like to get the string "#IvoryCoast TENNIS US OPEN ".
I tried and used:
paste(Reduce(setdiff, strsplit(c(a, b), split = "")), collapse = "")
But the result is not what I expected:
#IvysTENOP
For this case, you could use gsub.
> a <- "#IvoryCoast TENNIS US OPEN Clément «Un beau combat» entre Simon et Cilic"
> b <- "Clément «Un beau combat» entre Simon et Cilic"
> gsub(b, "", a)
[1] "#IvoryCoast TENNIS US OPEN "
You can do this, based on the paste/Reduce approach, by splitting on spaces instead of individual characters:
paste(Reduce(setdiff, strsplit(c(a, b), split = " ")), collapse = " ")
#[1] "#IvoryCoast TENNIS US OPEN"
Or, if you want the differing items separately, use setdiff with strsplit:
setdiff(strsplit(a, " ")[[1]], strsplit(b, " ")[[1]])
#[1] "#IvoryCoast" "TENNIS" "US" "OPEN"