I was wondering if there is a function in R like KeepChar("abcde....xyz", some_text) that you feed with all the characters you want to keep, and that returns the string with only those characters left in it. Here the function would keep only the lower-case letters of the alphabet. I would like something that looks like this:
some_text <- "Hel-_l0o W#oRr^ld"
some_text <- KeepChar("abcdefghijklmnopqrstuvwxyz ", some_text)
some_text
> "hello world"
I feel that the removal methods I am currently using, such as gsub("#\\w+", "", some_text), tm_map(some_text, stripWhitespace) or str_replace_all(some_text, "[^[:graph:]]", " "), take a lot of time and lines of code, with a constant risk of forgetting to remove a specific character, especially when you already know exactly what you want to keep.
The reason I ask is that I am coding a platform to run sentiment analysis on texts from various sources like Twitter, and I want to make sure not to forget to remove any unwanted characters.
To handle a pattern without using regex, I will try this:
string <- "Hel-_l0o W#oRr^ld"
pattern <- "abcdefghijklmnopqrstuvwxyz"
KeepChar <- function(pattern, string) {
  splitted_string <- unlist(strsplit(string, ""))      # explode the string into single characters
  splitted_pattern <- unlist(strsplit(pattern, ""))    # explode the keep-list into single characters
  ids_string <- splitted_string %in% splitted_pattern  # flag characters that appear in the keep-list
  return(paste(splitted_string[ids_string], sep = "", collapse = ""))  # reassemble the kept characters
}
some_text <- KeepChar(pattern = pattern, string = string)
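For comparison, the same keep-list idea can be written as a single negated character class in gsub(), avoiding the split/match/paste round trip. A sketch; note that neither version lower-cases anything, so a tolower() call is needed to get close to the "hello world" shape of the desired output:
string <- "Hel-_l0o W#oRr^ld"
# Keep only the characters a-z and space; everything else is deleted
gsub("[^abcdefghijklmnopqrstuvwxyz ]", "", string)
# [1] "ello orld"
# Lower-casing first keeps the capitals too
gsub("[^a-z ]", "", tolower(string))
# [1] "hello worrld"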
You can try this:
some_text <- "Hel-_l0o W#oRr^ld"
gsub("[^[:alpha:] ]", "", some_text)#will return all characters
gsub("[^[:lower:] ]", "", some_text)#will return only lower characters alongwith space
gsub("[^[:upper:] ]", "", some_text)#will return higher case characters alongwith space
You can also look at the page https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html to see the character classes available in R.
Context
I am working with a messy datafile right now. I have a list of comments that I'd like to sort out and grab the most common combination of phrases. An example phrase would be "Did not qualify because of X and Y" and "Did not qualify because of Y and X". I am trying to go through and remove Stop Words so I can match X and Y as a common phrase. I was able to easily do this for common single words, but phrases are a little difficult. Below is my code for context
Create Datafile
dat1 <- dat %>% filter(Action != "Exclude")
Remove problem characters
dat1$Comments <- stri_trans_general(dat1$Comments, "latin-ascii")
dat1$Comments <- gsub(pattern='<[^<>]*>', replacement=" ", x=dat1$Comments)
dat1$Comments <- gsub(pattern='\n', replacement=" ", x=dat1$Comments)
dat1$Comments <- gsub(pattern="[[:punct:]]", replacement=" ", x=dat1$Comments)
Remove stop words (Where my problem is)
sw <- paste0("\\b(", paste0(stop_words$word, collapse="|"), ")\\b")
dat1$Comments <- lapply(dat1$Comments, function(x) (gsub(pattern=sw, replacement=" ", x)))
Remove extra spaces between words
dat1$Comments <- trimws(gsub("\\s+", " ", dat1$Comments))
dat1$Comments <- gsub("(^[[:space:]]*)|([[:space:]]*$)", "", dat1$Comments)
Sweet Data
top_phrases <- data.frame(text = dat1$Comments) %>%
unnest_tokens(bigram, text, 'ngrams', n = Length, to_lower = TRUE) %>%
count(bigram, sort = TRUE)
Issue
This is what pops up, and it traces back to the gsub call:
Error in gsub(pattern = sw, replacement = " ", x) : assertion 'tree->num_tags == num_tags' failed in executing regexp: file 'tre-compile.c', line 634
If anyone is curious, here is what is stored in "sw"
"\\b(a|a's|able|about|above|according|accordingly|across|actually|after|afterwards|again|against|ain't|all|allow|allows|almost|alone|along|already|also|although|always|am|among|amongst|an|and|another|any|anybody|anyhow|anyone|anything|anyway|anyways|anywhere|apart|appear|appreciate|appropriate|are|aren't|around|as|aside|ask|asking|associated|at|available|away|awfully|b|be|became|because|become|becomes|becoming|been|before|beforehand|behind|being|believe|below|beside|besides|best|better|between|beyond|both|brief|but|by|c|c'mon|c's|came|can|can't|cannot|cant|cause|causes|certain|certainly|changes|clearly|co|com|come|comes|concerning|consequently|consider|considering|contain|containing|contains|corresponding|could|couldn't|course|currently|d|definitely|described|despite|did|didn't|different|do|does|doesn't|doing|don't|done|down|downwards|during|e|each|edu|eg|eight|either|else|elsewhere|enough|entirely|especially|et|etc|even|ever|every|everybody|everyone|everything|everywhere|ex|exactly|example|except|f|far|few|fifth|first|five|followed|following|follows|for|former|formerly|forth|four|from|further|furthermore|g|get|gets|getting|given|gives|go|goes|going|gone|got|gotten|greetings|h|had|hadn't|happens|hardly|has|hasn't|have|haven't|having|he|he's|hello|help|hence|her|here|here's|hereafter|hereby|herein|hereupon|hers|herself|hi|him|himself|his|hither|hopefully|how|howbeit|however|i|i'd|i'll|i'm|i've|ie|if|ignored|immediate|in|inasmuch|inc|indeed|indicate|indicated|indicates|inner|insofar|instead|into|inward|is|isn't|it|it'd|it'll|it's|its|itself|j|just|k|keep|keeps|kept|know|knows|known|l|last|lately|later|latter|latterly|least|less|lest|let|let's|like|liked|likely|little|look|looking|looks|ltd|m|mainly|many|may|maybe|me|mean|meanwhile|merely|might|more|moreover|most|mostly|much|must|my|myself|n|name|namely|nd|near|nearly|necessary|need|needs|neither|never|nevertheless|new|next|nine|no|nobody|non|none|noone|nor|normally|not|nothing|novel|now|nowhere|o|obviously|of|off|often|oh|ok|okay|old|on|once|one|ones|only|onto|or|other|others|otherwise|ought|our|ours|ourselves|out|outside|over|overall|own|p|particular|particularly|per|perhaps|placed|please|plus|possible|presumably|probably|provides|q|que|quite|qv|r|rather|rd|re|really|reasonably|regarding|regardless|regards|relatively|respectively|right|s|said|same|saw|say|saying|says|second|secondly|see|seeing|seem|seemed|seeming|seems|seen|self|selves|sensible|sent|serious|seriously|seven|several|shall|she|should|shouldn't|since|six|so|some|somebody|somehow|someone|something|sometime|sometimes|somewhat|somewhere|soon|sorry|specified|specify|specifying|still|sub|such|sup|sure|t|t's|take|taken|tell|tends|th|than|thank|thanks|thanx|that|that's|thats|the|their|theirs|them|themselves|then|thence|there|there's|thereafter|thereby|therefore|therein|theres|thereupon|these|they|they'd|they'll|they're|they've|think|third|this|thorough|thoroughly|those|though|three|through|throughout|thru|thus|to|together|too|took|toward|towards|tried|tries|truly|try|trying|twice|two|u|un|under|unfortunately|unless|unlikely|until|unto|up|upon|us|use|used|useful|uses|using|usually|uucp|v|value|various|very|via|viz|vs|w|want|wants|was|wasn't|way|we|we'd|we'll|we're|we've|welcome|well|went|were|weren't|what|what's|whatever|when|whence|whenever|where|where's|whereafter|whereas|whereby|wherein|whereupon|wherever|whether|which|while|whither|who|who's|whoever|whole|whom|whose|why|will|willing|wish|with|within|without|won't|wonder|would|would|wouldn't|x|y|yes|yet|you|you'd|you'll|you're|you've
|your|yours|yourself|yourselves|z|zero|i|me|my|myself|we|our|ours|ourselves|you|your|yours|yourself|yourselves|he|him|his|himself|she|her|hers|herself|it|its|itself|they|them|their|theirs|themselves|what|which|who|whom|this|that|these|those|am|is|are|was|were|be|been|being|have|has|had|having|do|does|did|doing|would|should|could|ought|i'm|you're|he's|she's|it's|we're|they're|i've|you've|we've|they've|i'd|you'd|he'd|she'd|we'd|they'd|i'll|you'll|he'll|she'll|we'll|they'll|isn't|aren't|wasn't|weren't|hasn't|haven't|hadn't|doesn't|don't|didn't|won't|wouldn't|shan't|shouldn't|can't|cannot|couldn't|mustn't|let's|that's|who's|what's|here's|there's|when's|where's|why's|how's|a|an|the|and|but|if|or|because|as|until|while|of|at|by|for|with|about|against|between|into|through|during|before|after|above|below|to|from|up|down|in|out|on|off|over|under|again|further|then|once|here|there|when|where|why|how|all|any|both|each|few|more|most|other|some|such|no|nor|not|only|own|same|so|than|too|very|a|about|above|across|after|again|against|all|almost|alone|along|already|also|although|always|among|an|and|another|any|anybody|anyone|anything|anywhere|are|area|areas|around|as|ask|asked|asking|asks|at|away|back|backed|backing|backs|be|became|because|become|becomes|been|before|began|behind|being|beings|best|better|between|big|both|but|by|came|can|cannot|case|cases|certain|certainly|clear|clearly|come|could|did|differ|different|differently|do|does|done|down|down|downed|downing|downs|during|each|early|either|end|ended|ending|ends|enough|even|evenly|ever|every|everybody|everyone|everything|everywhere|face|faces|fact|facts|far|felt|few|find|finds|first|for|four|from|full|fully|further|furthered|furthering|furthers|gave|general|generally|get|gets|give|given|gives|go|going|good|goods|got|great|greater|greatest|group|grouped|grouping|groups|had|has|have|having|he|her|here|herself|high|high|high|higher|highest|him|himself|his|how|however|i|if|important|in|interest|interested|interesting|interests|into|is|it|its|itself|just|keep|keeps|kind|knew|know|known|knows|large|largely|last|later|latest|least|less|let|lets|like|likely|long|longer|longest|made|make|making|man|many|may|me|member|members|men|might|more|most|mostly|mr|mrs|much|must|my|myself|necessary|need|needed|needing|needs|never|new|new|newer|newest|next|no|nobody|non|noone|not|nothing|now|nowhere|number|numbers|of|off|often|old|older|oldest|on|once|one|only|open|opened|opening|opens|or|order|ordered|ordering|orders|other|others|our|out|over|part|parted|parting|parts|per|perhaps|place|places|point|pointed|pointing|points|possible|present|presented|presenting|presents|problem|problems|put|puts|quite|rather|really|right|right|room|rooms|said|same|saw|say|says|second|seconds|see|seem|seemed|seeming|seems|sees|several|shall|she|should|show|showed|showing|shows|side|sides|since|small|smaller|smallest|some|somebody|someone|something|somewhere|state|states|still|still|such|sure|take|taken|than|that|the|their|them|then|there|therefore|these|they|thing|things|think|thinks|this|those|though|thought|thoughts|three|through|thus|to|today|together|too|took|toward|turn|turned|turning|turns|two|under|until|up|upon|us|use|used|uses|very|want|wanted|wanting|wants|was|way|ways|we|well|wells|went|were|what|when|where|whether|which|while|who|whole|whose|why|will|with|within|without|work|worked|working|works|would|year|years|yet|you|young|younger|youngest|your|yours)\\b"
Both TRE (the default regex engine used in base R regex functions) and PCRE (the regex engine used in base R regex functions with perl=TRUE) have quite hard limits on pattern length.
In your case, the stringr regex functions will work better, as they use the ICU regex engine, which supports much longer regex patterns.
So, you may replace
gsub(pattern=sw, replacement=" ", x)
with
stringr::str_replace_all(x, sw, " ")
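Applied to the pipeline above, that would look like this (a sketch; str_replace_all is vectorized over its input, so the lapply wrapper is no longer needed):
library(stringr)
dat1$Comments <- str_replace_all(dat1$Comments, sw, " ")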
I have a problem presenting the Swedish characters ä, ö, å in a readable way in R. I got my data directly from an MS SQL database.
Here are some examples:
markets <- c("Caf\xe9 ","Restaurang kv\xe4ll ","Barnomsorg tillagningsk\xf6k ","Folkh\xf6gskola ")
Then I use gsub to remove the spaces:
market <- gsub(" ", "", markets, fixed = TRUE)
I got this error:
Error in gsub(" ", "", market, fixed = TRUE) :
input string 3 is invalid UTF-8
Then I used this command:
markets_new <- gsub(" ", "", markets)
and got strange Chinese characters in the strings:
"Caf攼㸹"
"Restauranglunch+kv攼㸴ll"
"Barnomsorgtillagningsk昼㸶k"
"Folkh昼㸶gskola"
I tried changing the default encoding setting of RStudio by following
https://yihui.name/en/2018/11/biggest-regret-knitr/?fbclid=IwAR2E5Lp0zjS51fcdjgZ1tej0sg5EBxfG8sNitt-cUA2XEshnT3lNCHNQ3Do
but it does not help. I also tried to use gsub() to substitute the characters, but that does not seem to work either.
One more thing: if I use
write.csv(markets,'submarket product view.csv',row.names = F)
then in my csv file what I see as follows
"Caf<e9> "
"Restaurang kv<e4>ll "
"Barnomsorg tillagningsk<f6>k "
"Folkh<f6>gskola "
"Sm<f6>rg<e5>s/salladsrestaurang "
I think <e9> is é, <e4> is ä, <f6> is ö, and <e5> is å.
Any treatment suggestion?
Thanks to @Wiktor Stribiżew,
this solution works best:
df$m <- gsub(" ", "", `Encoding<-`(as.character(df$m), "latin1"), fixed = TRUE)
Try this:
Encoding(markets) <- "latin1"  # the raw bytes are Latin-1, so mark them accordingly
markets <- trimws(markets)
#[1] "Café" "Restaurang kväll" "Barnomsorg tillagningskök" "Folkhögskola"
I'm trying to scrape information from this URL: http://www.sports-reference.com/cbb/boxscores/index.cgi?month=2&day=3&year=2017 and have gotten decently far, to the point where I have strings for each game that look like this:
str <-"Yale\n\t\t\t87\n\t\t\t\n\t\t\t\tFinal\n\t\t\t\t\n\t\t\t\n\t\tColumbia\n\t\t\t78\n\t\t\t \n\t\t\t\n\t\t"
Ideally I'd like to get to a vector or dataframe that looks something like:
str_vec <- c('Yale',87,'Columbia',78)
I've tried a few things that didn't work, like:
without_n <- gsub(x = str, pattern = '\n')
without_Final <- gsub(x = without_n, pattern = 'Final')
str_vec <- strslpit(x = without_Final, split = '\t')
Thanks in advance for any helpful tips/answers!
You can use gsub to first replace all the non-alphanumeric characters in the string with an empty string, then insert a space between each name and score. Thereafter you can split the string on the spaces to get the data structure you need.
require(stringr)
step_1 <- gsub('([^[:alnum:]]|(Final))', "", str)
#"Yale87Columbia78"
step_2 <- gsub("([[:alpha:]]+)([[:digit:]]+)", "\\1 \\2 ", step_1)
strsplit(str_trim(step_2)," ")
#"Yale" "87" "Columbia" "78"
I assume the string pattern is consistent, for this to work reliably.
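If you ultimately want a data frame rather than a flat vector, here is a minimal sketch building on step_2, assuming the alternating team/score layout holds:
tokens <- unlist(strsplit(str_trim(step_2), " "))
games <- data.frame(
  team  = tokens[c(TRUE, FALSE)],             # odd positions: team names
  score = as.integer(tokens[c(FALSE, TRUE)])  # even positions: scores
)
games
#       team score
# 1     Yale    87
# 2 Columbia    78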
I extracted tweets from Twitter using the twitteR package and saved them to a text file.
I have carried out the following on the corpus:
xx<-tm_map(xx,removeNumbers, lazy=TRUE, 'mc.cores=1')
xx<-tm_map(xx,stripWhitespace, lazy=TRUE, 'mc.cores=1')
xx<-tm_map(xx,removePunctuation, lazy=TRUE, 'mc.cores=1')
xx<-tm_map(xx,strip_retweets, lazy=TRUE, 'mc.cores=1')
xx<-tm_map(xx,removeWords,stopwords("english"), lazy=TRUE, 'mc.cores=1')
(using mc.cores=1 and lazy=TRUE as otherwise R on a Mac runs into errors)
tdm<-TermDocumentMatrix(xx)
But this term document matrix has a lot of strange symbols, meaningless words and the like.
If a tweet is
RT @Foxtel: One man stands between us and annihilation: @IanZiering.
Sharknado‚Äã 3: OH HELL NO! - July 23 on Foxtel #SyfyAU
After cleaning the tweet I want only proper, complete English words to be left, i.e. a sentence/phrase void of everything else (user names, shortened words, URLs).
example:
One man stands between us and annihilation oh hell no on
(Note: the transformation commands in the tm package are only able to remove stop words and punctuation, strip whitespace, and convert to lowercase.)
Using gsub and the stringr package, I have figured out part of the solution: removing retweets, references to screen names, hashtags, spaces, numbers, punctuation and URLs.
clean_tweet = gsub("&", "", unclean_tweet)
clean_tweet = gsub("(RT|via)((?:\\b\\W*#\\w+)+)", "", clean_tweet)
clean_tweet = gsub("#\\w+", "", clean_tweet)
clean_tweet = gsub("[[:punct:]]", "", clean_tweet)
clean_tweet = gsub("[[:digit:]]", "", clean_tweet)
clean_tweet = gsub("http\\w+", "", clean_tweet)
clean_tweet = gsub("[ \t]{2,}", "", clean_tweet)
clean_tweet = gsub("^\\s+|\\s+$", "", clean_tweet)
ref: (Hicks, 2014)
After the above, I did the following.
# Get rid of unnecessary spaces
clean_tweet <- str_replace_all(clean_tweet, "  ", " ")
# Get rid of URLs
clean_tweet <- str_replace_all(clean_tweet, "http://t.co/[a-z,A-Z,0-9]*{8}", "")
# Take out retweet header, there is only one
clean_tweet <- str_replace(clean_tweet, "RT @[a-z,A-Z]*: ", "")
# Get rid of hashtags
clean_tweet <- str_replace_all(clean_tweet, "#[a-z,A-Z]*", "")
# Get rid of references to other screen names
clean_tweet <- str_replace_all(clean_tweet, "@[a-z,A-Z]*", "")
ref: (Stanton 2013)
Before doing any of the above, I collapsed the whole thing into a single long string using:
paste(mytweets, collapse=" ")
This cleaning process has worked quite well for me, as opposed to the tm_map transforms.
All that I am left with now is a set of proper words and very few improper words.
Now I only have to figure out how to remove the words that are not proper English words.
Probably I will have to subtract my set of words from a dictionary of words.
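A minimal sketch of that dictionary idea, assuming the word list from the qdapDictionaries package (GradyAugmented is a large vector of English words; any other dictionary vector would work the same way):
library(qdapDictionaries)  # provides GradyAugmented, a vector of English words
words <- unlist(strsplit(clean_tweet, "\\s+"))     # split the cleaned text into tokens
keep <- tolower(words) %in% GradyAugmented         # flag tokens found in the dictionary
clean_tweet <- paste(words[keep], collapse = " ")  # keep only the dictionary words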
library(tidyverse)
clean_tweets <- function(x) {
  x %>%
    # Remove URLs
    str_remove_all(" ?(f|ht)(tp)(s?)(://)(.*)[.|/](.*)") %>%
    # Remove mentions e.g. "@my_account"
    str_remove_all("@[[:alnum:]_]{4,}") %>%
    # Remove hashtags
    str_remove_all("#[[:alnum:]_]+") %>%
    # Replace "&amp;" character reference with "and"
    str_replace_all("&amp;", "and") %>%
    # Remove punctuation, using a standard character class
    str_remove_all("[[:punct:]]") %>%
    # Remove "RT: " from beginning of retweets
    str_remove_all("^RT:? ") %>%
    # Replace any newline characters with a space
    str_replace_all("\\\n", " ") %>%
    # Make everything lowercase
    str_to_lower() %>%
    # Remove any trailing whitespace around the text
    str_trim("both")
}
tweets %>% clean_tweets
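A quick check against an abridged version of the example tweet from earlier in the thread (output obtained by walking through the steps above):
tweet <- "RT @Foxtel: One man stands between us and annihilation: @IanZiering."
clean_tweets(tweet)
# [1] "one man stands between us and annihilation"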
To remove the URLs you could try the following:
removeURL <- function(x) gsub("http[[:alnum:]]*", "", x)
xx <- tm_map(xx, removeURL)
Possibly you could define similar functions to further transform the text.
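For instance, a similar helper for @-mentions might look like this (a sketch; removeMentions is a made-up name, and newer versions of tm expect custom functions to be wrapped in content_transformer()):
removeMentions <- content_transformer(function(x) gsub("@\\w+", "", x))
xx <- tm_map(xx, removeMentions)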
For me, this code did not work, for some reason:
# Get rid of URLs
clean_tweet <- str_replace_all(clean_tweet, "http://t.co/[a-z,A-Z,0-9]*{8}","")
The error was:
Error in stri_replace_all_regex(string, pattern, fix_replacement(replacement), :
Syntax error in regexp pattern. (U_REGEX_RULE_SYNTAX)
So, instead, I used
clean_tweet4 <- str_replace_all(clean_tweet3, "https://t.co/[a-z,A-Z,0-9]*","")
clean_tweet5 <- str_replace_all(clean_tweet4, "http://t.co/[a-z,A-Z,0-9]*","")
to get rid of the URLs.
This code does some basic cleaning.
Converting to lowercase
df <- tm_map(df, tolower)
Removing punctuation
df <- tm_map(df, removePunctuation)
Removing numbers
df <- tm_map(df, removeNumbers)
Removing common stop words
df <- tm_map(df, removeWords, stopwords('english'))
Removing URLs
removeURL <- function(x) gsub('http[[:alnum:]]*', '', x)
df <- tm_map(df, removeURL)
I have a string like this:
x<-c("This is a test (120)")
I need to remove the space between "test" and "(" so that the text will be like this:
x<-c("This is a test(120)")
I tried this:
s<-gsub("\t\v\(", "", x)
but it is not working; any input would be appreciated.
Using a lookahead:
gsub("\\s+(?=\\()", "", x, perl=TRUE)
[1] "This is a test(120)"
The answer depends on what exactly you require, though. Do you want to remove all spaces in front of opening brackets? Or just one? Or only in front of brackets containing numbers?
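For instance, if only brackets containing digits should be affected, the lookahead can be narrowed (a variant of the answer above; it removes any run of whitespace before such a bracket):
gsub("\\s+(?=\\([0-9]+\\))", "", x, perl=TRUE)
# [1] "This is a test(120)"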
One simple approach is to use fixed = TRUE, as in:
gsub(" (", "(", x, fixed = TRUE)
or:
gsub(" \\(", "\\(", x)
You have to "double escape" things in R. One for R and one for the regex:
s <- gsub('\\s\\(', '(', x)
That said, depending on your specific use case, you might want this to be more robust:
s <- gsub('(.+) \\((.+)\\)', '\\1(\\2)', x)