How to replace all values starting with certain characters in R with one value? - r

I hope this isn't a duplicate, I was unable to find a question that refers to the exact same issue.
I have a data frame in R, where within one column (let's call it 'Task') there are 170 items named EC1:EC170, I would like to replace them so that they just say 'EC' and don't have a number following them.
The important thing is that this column also has other types of values, that do not start with EC, so I don't just want to change the names of all values in the column, but only those that start with 'EC'.
In Linux I would use sed to replace 'EC*' with 'EC', but I don't know how to do that in R.

Rich Scriven's startsWith suggestion worked great; I just had to write df$task rather than plain task. Thanks a lot! This is what I used: df$task[startsWith(df$task, "EC")] <- "EC"
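A minimal reproducible sketch of that approach (the data frame df and the task values here are made up for illustration):

```r
# Toy data frame mixing "EC<number>" tasks with other task names
df <- data.frame(task = c("EC1", "EC170", "Rest", "EC23", "Baseline"),
                 stringsAsFactors = FALSE)

# Overwrite every value that starts with "EC" with plain "EC",
# leaving the other values untouched
df$task[startsWith(df$task, "EC")] <- "EC"

df$task
# [1] "EC"       "EC"       "Rest"     "EC"       "Baseline"
```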

I'd recommend regex as well. You're looking for the string "EC" followed by 1 to 3 digits, and replacing those occurrences with "EC":
df$Task = sub("EC\\d{1,3}", "EC", df$Task)
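Depending on the data, you may also want to anchor the pattern so it only matches values that are exactly "EC" plus digits. A self-contained sketch (the tasks vector is made up for illustration):

```r
tasks <- c("EC1", "EC170", "Rest", "EC23")

# Anchoring with "^" and "$" means only whole values of the form
# "EC" + 1-3 digits are replaced, not e.g. "SPEC12" or "EC1234"
cleaned <- sub("^EC\\d{1,3}$", "EC", tasks)
cleaned
# [1] "EC"   "EC"   "Rest" "EC"
```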

Related

I am using R code to count for a specific word occurrence in a string. How can I update it to count if the word's synonyms are used?

I'm using the following code to find if the word "assist" is used in a string variable.
string <- c("assist")
assist <- (1:nrow(df) %in% c(sapply(string, grep, df$textvariable, fixed = TRUE))) + 0
sum(assist)
If I also wanted to check if synonyms such as "help" and "support" are used in the string, how can I update the code? So if either of these synonyms are used, I want to code it as 1. If neither of these words are used, I want to code it as 0. It doesn't matter if all of the words appear in the string or how many times they are used.
I tried changing it to
string<- c("assist", "help", "support")
But it looks like it is searching for strings in which all of these words are used?
I'd appreciate your help!
Thank you
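One way to get "1 if any of the words appears" is a single regex with alternation rather than a vector of fixed strings. A sketch, with a made-up text vector standing in for df$textvariable from the question:

```r
# Example strings standing in for df$textvariable
text <- c("Can you assist me?",
          "We offer support",
          "No relevant words here",
          "help and support available")

# grepl returns TRUE if ANY of the alternatives matches;
# as.integer turns that into the desired 1/0 coding
assist <- as.integer(grepl("assist|help|support", text))
assist
# [1] 1 1 0 1

sum(assist)
# [1] 3
```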

Removing part of strings within a column

I have a column within a data frame with a series of identifiers in, a letter and 8 numbers, i.e. B15006788.
Is there a way to remove all instances of B15.... to make them empty cells (there are thousands of number variations within each category) but keep B16.... etc?
I know if there was just one thing I wanted to remove, like the B15, I could do;
sub("B15", "", df$col)
But I'm not sure how to remove a set number of characters/numbers (or even all subsequent characters after B15).
Thanks in advance :)
Welcome to SO! This is a case for regex. You can use base R as I show here, or look into the stringr package for handy tools that are easier to understand. You can also look up regex rules to help define what you want to match. For what you ask, the following code example should help:
testStrings <- c("KEEPB15", "KEEPB15A", "KEEPB15ABCDE")
gsub("B15.{2}", "", testStrings)
gsub is the base R function to replace a pattern with something else in one or a series of inputs. To test our regex I created the testStrings vector for different examples.
Breaking down the regex: "B15" is the literal pattern you're looking for, "." means any character, and "{2}" says how many of those characters to grab after "B15". You can change it as you need. If you want to remove everything after "B15", replace the pattern with "B15.*"; the ".*" means everything up to the end of the string.
edit: If you want to specify that "B15" must be at the start of the string, you can add "^" to the start of the pattern as so: "^B15.{2}"
https://www.rstudio.com/wp-content/uploads/2016/09/RegExCheatsheet.pdf has info on different regexes you can build to be more specific.
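For the original question (blank out the whole B15 identifiers but keep the B16 ones), a sketch assuming each identifier fills the whole cell:

```r
ids <- c("B15006788", "B16001234", "B15999999", "B16555555")

# "^B15.*" matches the B15 prefix plus everything after it,
# so matching identifiers become empty strings; B16 ones survive
out <- gsub("^B15.*", "", ids)
out
# [1] ""          "B16001234" ""          "B16555555"
```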

Extract numerical value before a string in R

I have been mucking around with regex strings and strsplit but can't figure out how to solve my problem.
I have a collection of html documents that will always contain the phrase "people own these". I want to extract the number immediately preceding this phrase. i.e. '732,234 people own these' - I'm hoping to capture the number 732,234 (including the comma, though I don't care if it's removed).
The number and phrase are always surrounded by a . I tried using Xpath but that seemed even harder than a regex expression. Any help or advice is greatly appreciated!
example string: >742,811 people own these<
-> 742,811
Could you please try the following.
val <- "742,811 people own these"
gsub(' [a-zA-Z]+',"",val)
Output will be as follows.
[1] "742,811"
Explanation: this uses R's gsub (global substitution) function. The pattern replaces every occurrence of a space followed by lower- or upper-case letters in val with an empty string.
Try using str_extract_all from the stringr library:
str_extract_all(data, "\\d{1,3}(?:,\\d{3})*(?:\\.\\d+)?(?= people own these)")
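The same lookahead idea can be sketched in base R without stringr, using regexpr with perl = TRUE (the val string here is made up from the question's example):

```r
val <- "Some page text: 742,811 people own these. More text."

# Match 1-3 digits plus optional thousands groups, but only when
# followed by " people own these" (the lookahead consumes nothing)
m <- regmatches(val,
                regexpr("\\d{1,3}(?:,\\d{3})*(?=\\s+people own these)",
                        val, perl = TRUE))
m
# [1] "742,811"
```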

grep-like function in R

I'm trying to write a program in R which would take in a .pdb file and give out a .xyz-file.
I'm having problems with erasing some rows that contain useless data. There are around 30-40 thousand rows, from which I would only need about 3000. The rows that contain the useful information start with the word "ATOM".
In unix terminal I would just use the command
grep ATOM < filename.pdb > newfile.xyz
but I have no idea how to achieve the same result with R.
Thank you for your help!
You should be able to use grep, and depending on your specific situation, perhaps substr.
For example
#Random string variable
stringVar <- c("abcdefg", "defg", "eff", "abc")
#find the location of variables starting with "abc"
abcLoc <- grep("abc", substr(stringVar, 1, 3))
#Extract "abc" instances
out <- stringVar[abcLoc]
out
Note that the substr part limits the search to only the first three characters of each element of stringVar (e.g., "abc", "def", etc.). This may not be strictly necessary but I've found it to be very useful at times. For example, if you had an element like "defabc" that you didn't want to include, using substr would ensure it wouldn't be "found" by grep.
Hope it's helpful.
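For the file-to-file use case in the question, the closest R analogue of `grep ATOM < filename.pdb > newfile.xyz` is readLines plus grep with value = TRUE, then writeLines. A sketch (the PDB content below is mocked up so the example is self-contained):

```r
# Write a tiny mock PDB file for the example
pdb <- tempfile(fileext = ".pdb")
writeLines(c("HEADER example",
             "ATOM      1  N   MET A   1",
             "REMARK not needed",
             "ATOM      2  CA  MET A   1"), pdb)

# Keep only lines starting with "ATOM" and write them to a .xyz file
lines <- readLines(pdb)
atoms <- grep("^ATOM", lines, value = TRUE)
writeLines(atoms, sub("\\.pdb$", ".xyz", pdb))

length(atoms)
# [1] 2
```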

Finding number of occurrences of a word in a file using R functions

I am using the following code for finding number of occurrences of a word memory in a file and I am getting the wrong result. Can you please help me to know what I am missing?
NOTE1: The question is looking for exact occurrence of word "memory"!
NOTE2: What I have realized is that they are looking for exactly "memory", and even something like "memory," is not accepted! That was the part that caused the confusion, I guess. I tried it for the word "action" and the correct answer is 7! You can try it as well.
# names <- scan("hamlet.txt", what = character())
names <- scan('http://pastebin.com/raw.php?i=kC9aRvfB', what = character())
# Read 28230 items
length(grep("memory", names))
# [1] 9
Here's the file
The problem is really Shakespeare's use of punctuation. There are a lot of apostrophes (') in the text. When the R function scan encounters an apostrophe it assumes it is the start of a quoted string and reads all characters up until the next apostrophe into a single entry of your names array. One of these long entries happens to include two instances of the word "memory" and so reduces the total number of matches by one.
You can fix the problem by telling scan to regard all quotation marks as normal characters and not treat them specially:
names <- scan('http://pastebin.com/raw.php?i=kC9aRvfB', what=character(), quote=NULL )
Be careful when using the R implementation of grep. It does not behave in exactly the same way as the usual GNU/Linux program. In particular, the way you have used it here WILL find the number of matching words and not just the total number of matching lines as some people have suggested.
As pointed out by @andrew, my previous answer would give wrong results if a word repeats on the same line. Based on the other answers/comments, this one seems ok:
names = scan('http://pastebin.com/raw.php?i=kC9aRvfB', what=character(), quote=NULL )
idxs = grep("memory", names, ignore.case = TRUE)
length(idxs)
# [1] 10
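Since NOTE2 asks for the exact token "memory" (so "memory," and "memory." should not count), an equality test on scan()'s whitespace-split tokens is stricter than a substring grep. A sketch on a small stand-in vector:

```r
words <- c("memory", "Memory", "memory,", "remember", "memory.")

# Substring grep also counts "memory," and "memory."
length(grep("memory", words, ignore.case = TRUE))
# [1] 4

# Exact-token match, case-insensitive
sum(tolower(words) == "memory")
# [1] 2
```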
