Suppose I have a very large data table, one column of which is "ManufacturerName". The data was not entered uniformly, so it's pretty messy. For example, there may be observations like:
ABC Inc
ABC, Inc
ABC Incorporated
A.B.C.
...
Joe Shmos Plumbing
Joe Shmo Plumbing
...
I am looking for an automated way in R to treat similar names as one factor level. I have learned the syntax for doing this manually, for example:
levels(df$ManufacturerName) <- list(ABC=c("ABC", "A.B.C", ....), JoeShmoPlumbing=c(...))
But I'm trying to think of an automated solution. Obviously it won't be perfect, since I can't anticipate every permutation in the data table, but maybe something that searches the factor levels, strips out punctuation/special characters, and creates levels based on common first words. Any other ideas are welcome. Thanks!
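For example, the kind of crude normalisation I have in mind might look something like this (just a sketch that strips punctuation and keys on the first word):
# strip punctuation, lower-case, then group the levels by their first word
norm <- tolower(gsub("[[:punct:]]", "", levels(df$ManufacturerName)))
first_word <- sapply(strsplit(norm, " +"), `[`, 1)
# first_word could then be used to build the list passed to levels()<-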
Look into the stringdist package. For starters, you could do something like this:
library(stringdist)
x <- c("ABC Inc", "ABC, Inc", "ABC Incorporated", "A.B.C.", "Joe Shmos Plumbing", "Joe Shmo Plumbing")
d <- stringdistmatrix(x)
#    1  2  3  4  5
# 2  1
# 3  9 10
# 4  6  7 15
# 5 16 16 16 18
# 6 15 15 15 17  1
For more help, see ?stringdistmatrix or do searches on StackOverflow for fuzzy matching, approximate string matching, string distance functions, and agrep.
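If you then want to collapse similar names into one level automatically, one option is to cluster that distance matrix. A minimal sketch, using the toy vector x from above (k = 2 clusters fits this small example; on real data you would tune k, or cut the tree at a height instead):
hc <- hclust(d)                         # cluster the pairwise string distances
groups <- cutree(hc, k = 2)             # k = 2 suits this toy example only
canonical <- tapply(x, groups, `[`, 1)  # first name in each cluster as its label
data.frame(original = x, grouped = canonical[groups])
In practice you would run this on levels(df$ManufacturerName) and map the result back with levels<-.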
I want to keep all rows where the "conm" column contains certain bank names. I have tried using subset to do this, as you can tell from the code, but to no avail.
CMPSTPRFT12 <- subset(CMPSPRFT11, conm = MORGUARD CORP | conm = LEHMAN BROTHERS HOLDINGS INC)
I expect the output in RStudio to show only the rows where the column containing the bank names includes certain banks, not all banks. I want SunTrust, Lehman Brothers, Morgan Stanley, Goldman Sachs, PennyMac, Bank of America, and Fannie Mae.
Please see other posts on how to phrase your questions more helpfully for others, e.g. How to make a great R reproducible example.
You can use dplyr and filter.
library(dplyr)

df <- data.frame(bank=letters[1:10],
                 value=10:19)
df %>% filter(bank=='a' | bank=='b')
  bank value
1    a    10
2    b    11
banks <- c('d','g','j')
df %>% filter(bank %in% banks)
  bank value
1    d    13
2    g    16
3    j    19
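Applied to the data in the question (using the object and column names CMPSPRFT11 and conm from the post; the strings must match exactly what appears in conm), a sketch would be:
library(dplyr)
banks <- c('MORGUARD CORP', 'LEHMAN BROTHERS HOLDINGS INC')  # extend with the other banks
CMPSTPRFT12 <- CMPSPRFT11 %>% filter(conm %in% banks)
The same %in% condition also works with subset(CMPSPRFT11, conm %in% banks).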
CompanyName <- c('Kraft', 'Kraft Foods', 'Kfraft', 'nestle', 'nestle usa', 'GM', 'general motors', 'the dow chemical company', 'Dow')
I want to get either:
CompanyName2
Kraft
Kraft
Kraft
nestle
nestle
general motors
general motors
Dow
Dow
But would be absolutely fine with:
CompanyName2
1
1
1
2
2
3
3
4
4
I have seen algorithms for getting the distance between two words, so if I had just one weird name I could compare it to all the other names and pick the one with the lowest distance. But I have thousands of names and want to group them all.
I do not know anything about Elasticsearch, but would one of the functions in the elastic package, or some other function, help me out here?
I'm sorry there's no programming here. I know. But this is way out of my area of normal expertise.
Solution: use string distance
You're on the right track. Here is some R code to get you started:
install.packages("stringdist") # install this package
library("stringdist")
CompanyName <- c('Kraft', 'Kraft Foods', 'Kfraft', 'nestle', 'nestle usa', 'GM', 'general motors', 'the dow chemical company', 'Dow')
CompanyName = tolower(CompanyName) # otherwise case matters too much
# Calculate a string distance matrix; LCS is just one option
?"stringdist-metrics" # see others
sdm = stringdistmatrix(CompanyName, CompanyName, useNames=T, method="lcs")
Let's take a look. These are the calculated distances between strings, using the Longest Common Subsequence metric (try others, e.g. cosine or Levenshtein). They all measure, in essence, how many characters the strings have in common. Their pros and cons are beyond this Q&A. You might look into something that gives a higher similarity value to two strings that contain the exact same substring (like dow).
sdm[1:5,1:5]
            kraft kraft foods kfraft nestle nestle usa
kraft           0           6      1      9         13
kraft foods     6           0      7     15         15
kfraft          1           7      0     10         14
nestle          9          15     10      0          4
nestle usa     13          15     14      4          0
Some visualization
# Hierarchical clustering
sdm_dist = as.dist(sdm) # convert to a dist object (you essentially already have distances calculated)
plot(hclust(sdm_dist))
If you want to group them explicitly into k groups, use k-medoids.
library("cluster")
clusplot(pam(sdm_dist, 5), color=TRUE, shade=F, labels=2, lines=0)
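To turn this into the CompanyName2 column asked for, cut the tree into k groups and pick a representative name per group. A minimal sketch (k = 4 is chosen by eye for this toy data; note that with this metric short abbreviations like gm may not land in the group you would hope for, which is the weakness mentioned above):
groups <- cutree(hclust(sdm_dist), k = 4)   # k groups from the same distances
# shortest name in each group as the representative label
reps <- tapply(CompanyName, groups, function(s) s[which.min(nchar(s))])
data.frame(CompanyName, CompanyName2 = reps[groups], group = groups)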
I have the following data frame in R (made up stuff to learn the program):
  country population civilised
1    Town         13         5
2    city         69         9
3    Home         24         2
4   Stuff         99         9
and I am trying to access specific rows with the subset function, like
test <- subset(t, country==Town). But all I ever get is "object not found".
We need to quote the string.
test <- subset(t, country=='Town')
test
#   country population civilised
# 1    Town         13         5
NOTE: t is a function name (check ?t). It is better to give your objects names that are not also function names.
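For example, with a name that does not clash with a base function (dat is just a placeholder name I made up):
dat <- data.frame(country=c('Town','city','Home','Stuff'),
                  population=c(13,69,24,99),
                  civilised=c(5,9,2,9))
subset(dat, country=='Town')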
I have the following situation, and I'm pretty desperate.
paste("crossdata","$geno$'",1:4,"'$data",sep="")
generates 4 strings which look like this:
"crossdata$geno$'1'$data" "crossdata$geno$'2'$data" "crossdata$geno$'3'$data" "crossdata$geno$'4'$data"
I want to retrieve the corresponding data.frames for these 4 strings by evaluating the strings, and then combine them via cbind. However, when I do something like this:
cbind(sapply(parse(text=paste("crossdata","$geno$'",i,"'$data",sep="")),eval))
that does not work. Can anybody help me out?
Thanks
datlist <- list(adat=data.frame(u=1:5,v=6:10),bdat=data.frame(x=11:15,y=16:20))
extdat <- c("datlist$adat","datlist$bdat")
do.call('cbind',lapply(extdat,function(i) eval(parse(text=i))))
  u  v  x  y
1 1  6 11 16
2 2  7 12 17
3 3  8 13 18
4 4  9 14 19
5 5 10 15 20
Of course this uses eval + parse, which usually means you are on the wrong track.
Using the combination of parse and eval is like saying that you know how to get from New York City to Boston and therefore making all your travel plans by going from your origin to New York, then to Boston, then to your destination. In some cases this may not be too bad, but it is a bit of a long detour if you are traveling from London to Paris.
You should first learn the relationship and difference between subsetting lists using $ and [[ (see ?'[[' for the documentation), and when it is, and more importantly is not, appropriate to use $. Once you understand that, you should be able to find solutions that do not require parse and eval.
Your problem may be as simple as (untested since your example is not reproducible):
do.call( cbind, lapply( 1:4, function(x) crossdata[['geno']][[x]][['data']] ) )
or possibly
do.call(cbind, lapply(as.character(1:4), function(x) crossdata$geno[[x]]$data ) )
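For illustration, here is a self-contained sketch of the [[ approach, using a made-up list shaped like the crossdata$geno structure described in the question:
# mock object with the same shape as crossdata$geno$'1'$data, etc.
crossdata <- list(geno = list(
  '1' = list(data = data.frame(a = 1:3)),
  '2' = list(data = data.frame(b = 4:6)),
  '3' = list(data = data.frame(c = 7:9)),
  '4' = list(data = data.frame(d = 10:12))
))
# [[ indexes by name (or position) without any parse/eval
do.call(cbind, lapply(as.character(1:4), function(x) crossdata$geno[[x]]$data))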
Some of the data I work with contain sensitive information (names of persons, dates, locations, etc). But I sometimes need to share "the numbers" with other persons to get help with statistical analysis, or process it on more powerful machines where I can't control who looks at the data.
Ideally I would like to work like this:
Read the data into R (look at it, clean it, etc.)
Select a data frame that I want to de-classify, run it through a package and receive two "files": the de-classified data and a translation-file. The latter I will keep myself.
The de-classified data can be shared, manipulated and processed without worries.
I re-classify the processed data together with the translation-file.
I suppose that this can also be useful when uploading data for processing "in the cloud" (Amazon, etc.).
Have you been in this situation? I first thought about writing a "randomize" function myself, but then I realized there is no end to how sophisticated this could get (for example, offsetting time-stamps without losing their order). Maybe there is already a defined method or tool?
Thanks to everyone who contributes to [r]-tag here at Stack Overflow!
One way to do this is with match. First I make a small dataframe:
foo <- data.frame( person=c("Mickey","Donald","Daisy","Scrooge"), score=rnorm(4))
foo
   person       score
1  Mickey -0.07891709
2  Donald  0.88678481
3   Daisy  0.11697127
4 Scrooge  0.31863009
Then I make a key:
set.seed(100)
key <- as.character(foo$person[sample(1:nrow(foo))])
Obviously you must save this key somewhere. Now I can encode the persons:
foo$person <- match(foo$person, key)
foo
  person      score
1      2  0.3186301
2      1 -0.5817907
3      4  0.7145327
4      3 -0.8252594
If I want the person names again I can index the key:
key[foo$person]
[1] "Mickey" "Donald" "Daisy" "Scrooge"
Or use transform; this also works if the data has changed, as long as the person ID remains the same:
foo <-rbind(foo,foo[sample(1:4),],foo[sample(1:4,2),],foo)
foo
   person      score
1       2  0.3186301
2       1 -0.5817907
3       4  0.7145327
4       3 -0.8252594
21      1 -0.5817907
41      3 -0.8252594
31      4  0.7145327
15      2  0.3186301
32      4  0.7145327
16      2  0.3186301
11      2  0.3186301
12      1 -0.5817907
13      4  0.7145327
14      3 -0.8252594
transform(foo, person=key[person])
    person      score
1   Mickey  0.3186301
2   Donald -0.5817907
3    Daisy  0.7145327
4  Scrooge -0.8252594
21  Donald -0.5817907
41 Scrooge -0.8252594
31   Daisy  0.7145327
15  Mickey  0.3186301
32   Daisy  0.7145327
16  Mickey  0.3186301
11  Mickey  0.3186301
12  Donald -0.5817907
13   Daisy  0.7145327
14 Scrooge -0.8252594
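To match the workflow sketched in the question (a shareable de-classified data set plus a translation file you keep), the same match/key idea could be wrapped in two small helpers. This is only a sketch; the function and file names are my own invention:
# de-classify: replace a column with integer codes, save the key privately
declassify <- function(df, column, keyfile) {
  key <- sample(unique(as.character(df[[column]])))  # shuffled values = the translation file
  saveRDS(key, keyfile)
  df[[column]] <- match(df[[column]], key)
  df
}
# re-classify: restore the original values from the saved key
reclassify <- function(df, column, keyfile) {
  key <- readRDS(keyfile)
  df[[column]] <- key[df[[column]]]
  df
}
# usage, with a hypothetical file name:
# shared   <- declassify(foo, "person", "person_key.rds")
# restored <- reclassify(shared, "person", "person_key.rds")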
Can you simply assign a GUID to each row from which you have removed all of the sensitive information? As long as your colleagues lacking the security clearance don't mess with the GUID, you'd be able to incorporate any changes and additions they make simply by joining on the GUID. Then it becomes simply a matter of generating bogus ersatz values for the columns whose data you have purged: LastName1, LastName2, City1, City2, etc.

EDIT: You'd have a table for each purged column, e.g. City, State, Zip, FirstName, LastName, each of which contains the distinct set of the real classified values in that column and an integer value. So "Jones" could be represented in the sanitized dataset as, say, LastName22, "Schenectady" as City343, "90210" as Zipcode716. This would give your colleagues valid values to work with (e.g. they'd have the same number of distinct cities as your real data, just with anonymized names), and the interrelationships of the anonymized data are preserved.

EDIT2: If the goal is to give your colleagues sanitized data that is still statistically meaningful, then date columns would require special processing. E.g. if your colleagues need to do statistical computations on the age of a person, you have to give them something close to the original date: not so close that it could be revealing, yet not so far that it could skew the analysis.
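For what it's worth, a rough R sketch of that per-column labelling idea (the column names, the label scheme, and the anonymize_column helper are made up for illustration; for the row identifier you would use a real GUID generator, e.g. the uuid package, rather than a plain counter):
# replace a sensitive column with anonymous labels (City1, City2, ...)
# and keep the label-to-value lookup table private
anonymize_column <- function(x, prefix) {
  f <- factor(x)
  lookup <- data.frame(label = paste0(prefix, seq_along(levels(f))),
                       value = levels(f), stringsAsFactors = FALSE)
  list(anon = lookup$label[as.integer(f)], lookup = lookup)
}
dat <- data.frame(city = c("Schenectady", "Boston", "Schenectady"),
                  amount = c(10, 20, 30))
dat$row_id <- seq_len(nrow(dat))     # stand-in for a real GUID
city_anon <- anonymize_column(dat$city, "City")
dat$city <- city_anon$anon           # share this version
# keep city_anon$lookup yourself to restore the real names later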
Sounds like a Statistical Disclosure Control problem. Take a look at the sdcMicro package.
EDIT: I just realized that you have a slightly different problem. The point of Statistical Disclosure Control is to "damage" data so that the risk of disclosure is reduced. By "damaging" the data you are losing some information; this is the price you pay for the reduced risk of disclosure. Your data will contain less information, so your analysis may give different or weaker results than the same analysis done on the original data.
Depends on what you are going to do with your data.