TL;DR: I've gotten myself into some trouble. I have a dataframe that I exported into different output files based on the values of an ID column. Now I want to merge data from those files back into the original dataframe without a common index, knowing only that row order was preserved within each ID group.
I'm starting with a large dataframe (n > 30,000). One of my columns contains text documents in many different languages (text). Another column (en) indicates which language the document is in.
MWE:
df <- data.frame(text = c('this is a test', 'questo è un test', 'cest un test', 'another test', 'un autre essai'),
en = c("en", "it", "fr", "en", "fr"))
I wanted to translate them all into English, but needed to do it manually without a Google Translate API. First I did the following to obtain a list of all unique non-English entries:
noneng <- subset(df, en != "en")                      # drop rows already in English
noneng <- subset(noneng, !is.na(text))                # drop missing text
noneng <- noneng[!duplicated(noneng[, c("text")]), ]  # keep the first occurrence of each text
I then grouped elements of this list by language group and exported them.
# export to sep files
write_mult <- function(x) {
  write.table(x, paste0("translate/untranslated_", unique(x$en), ".txt"))
  return(x)
}
noneng %>%
  group_by(en) %>%
  do(write_mult(.))
I end up with a bunch of delimited .txt files, one for each language group ID. Within each .txt is the complete list of unique rows for that language group.
untranslated_en.txt (2 entries)
untranslated_it.txt (1 entry)
untranslated_fr.txt (2 entries)
...
I painstakingly ran these through Google Translate and now have English-language versions for each row of each .txt. I was planning on replacing the untranslated versions in the original dataframe with their translations.
What I (foolishly) didn't think about in advance was how I was going to match the translated versions back to the untranslated ones in the original dataframe without having given them an index/key of some kind.
There must be a way out of this. For example,
I know the original order of noneng before I exported it
group_by and unique must have predictable output based on the order they're given.
So it seems like it should be possible to recover the order of the .txt files even without an index column.
If I knew that, I could merge them based on order, even without an index. But having split the output into multiple .txts has thrown me off, and I'm stumped as to how one would work backwards through this in R.
I hope I'm explaining this clearly. I'd be grateful for any help.
Perhaps something like this:
library(dplyr)
# helper function to add row numbers per language group
add_row_in_lang <- function(df) {
df %>% group_by(en) %>% mutate(row = row_number()) %>% ungroup()
}
# add row numbers, join to table with combined translations
df %>%
add_row_in_lang() %>%
left_join(
list.files(path = "translated_csvs", full.names = TRUE) %>%
purrr::map_dfr( readr::read_csv ) %>%
add_row_in_lang(),
by = c("en", "row")
)
Note how I've specified to join just on en and row, since it sounds like in your case those will be the keys.
I'm not sure about the format of your CSVs, so I can't verify or know what tweaks might be required, but if you had the data frames you could do the same thing:
df_fr <- data.frame(text = c('cest un test', 'un autre essai'),
en = c("fr", "fr"), translation = c("This is a test", "Another test"))
df_it <- data.frame(text = c('questo è un test'),
en = c("it"), translation = c("This is a test"))
df_en <- data.frame(text = c('this is a test', 'another test'),
en = c("en"), translation = c("This is a test", "another test"))
df %>%
add_row_in_lang() %>%
left_join(
bind_rows(df_fr, df_it, df_en) %>%
add_row_in_lang(),
by = c("en", "row")
)
# A tibble: 5 × 5
text.x en row text.y translation
<chr> <chr> <int> <chr> <chr>
1 this is a test en 1 this is a test This is a test
2 questo è un test it 1 questo è un test This is a test
3 cest un test fr 1 cest un test This is a test
4 another test en 2 another test another test
5 un autre essai fr 2 un autre essai Another test
Note how in my example here, both tables have the original text -- but I specified that the join only use en and the row column that we added. Since in my example both files have a text column, it outputs as text.x (from the first table) and text.y (from the 2nd one).
Not sure what caused me to overthink this. I came up with a rough but straightforward fix. In case it's useful for anyone else, this is what I did (not as clean as Jon's answer, but maybe more comprehensible for newbies).
This step is optional, but first I gave the dataframe an index post hoc and re-ran the export procedure. I sampled the untranslated entries to note their indices, then ordered the dataframe by group ID to confirm that the indices do match up (as expected).
noneng <- noneng[order(noneng$en),]
I noted the group order in the sorted dataframe (alphabetical), imported each of the .txts, and used bind_rows to create a new dataframe noneng_translated of equal length, sorted in the same group order. Finally, cbind(noneng, noneng_translated). Simple enough.
# coerce factor columns to character before binding
en <- data.frame(lapply(en, as.character))
fr <- data.frame(lapply(fr, as.character))
it <- data.frame(lapply(it, as.character))
...
# bind in the same alphabetical group order as the sorted dataframe
newdf <- bind_rows(en, fr, it, ...)
cbind(noneng, newdf)
Uglier and riskier than merging on an index would have been, but it did the trick. Curious how others would do this. I'll leave it up in case others in the future make similar mistakes.
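For future readers, here is the whole round trip as one sketch (assuming the files were written by write_mult above, and that each translated file gained a translation column during the manual step; that column name is hypothetical):
library(dplyr)
# sort by group so row order matches the per-group export order (as above)
noneng <- noneng[order(noneng$en), ]
# read the per-language files back in the same alphabetical group order
langs <- sort(unique(noneng$en))
noneng_translated <- bind_rows(lapply(langs, function(l)
  read.table(paste0("translate/untranslated_", l, ".txt"),
             header = TRUE, stringsAsFactors = FALSE)))
# equal length and identical order, so a positional bind lines the rows up
stopifnot(nrow(noneng) == nrow(noneng_translated))
result <- cbind(noneng, translation = noneng_translated$translation)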
I have been given an oddly structured dataset that I need to prepare for visualisation in GIS. The data is from historic newspapers from different locations in China, published between 1921 and 1937. The Excel table is structured as follows:
There is a sheet for each location, and each sheet has a column for every year; the variables for each newspaper are organised in blocks of 7 rows, separated by a blank row. Here's a sample from one of the sheets:
,1921年,1922年,1923年
,,,
Title of Newspaper 1,遠東報(Yuan Dong Bao),,
(Language),漢文,,
(Ideology),東支鉄道機関紙,,
(Owner),(総経理)史秉臣,,
(Editior),張福臣,,
(Publication Frequency),日刊,,
(Circulation),"1,000",,
(Others),1908年創刊、東支鉄道に支那か勢力を扶殖せし以来著しく排日記事を記載す,,
,,,
Title of Newspaper 2,ノーウォスチ・ジーズニ(Nōuosuchi Jīzuni),ノォウォスチ・ジーズニ,ノウウスチジーズニ
(Language),露文,露文,露文
(Ideology),政治、社会、文学、商工新聞,社会民主,社会民主
(Owner),タワリシエスウオ・ペチャヤチャ,ぺチヤチ合名会社,ぺチヤチ合名会社
(Editior),イ・エフ・ブロクミユレル,イ・エフ・ブロクミユレル(本名クリオリン)、記者(社員)チエルニヤエフスキー,イ・エフ・ブロクミユレル(本名クリオリン)、記者(社員)チエルニヤエフスキー
(Publication Frequency),日刊,日刊,日刊
(Circulation),"約3,000","約3,000","3,000"
(Others),1909年創刊、猶太人会より補助を受く、「エス・エル」党の過激派に接近せる主張をなす、哈爾賓諸新聞中最も紙面整頓せるものにして記事多く比較的正確なり、日本お対露干渉排斥排日記事を掲載す,1909年創刊、エス・エル党の過激派に接近せる主張をなす、哈爾賓諸新聞中最も紙面整頓し記事多く比較的正確にして最も有力なるものなり、日本お対露干渉排斥及一般排日記事を掲載し「チタ」政府を擁護す,1909年創刊、過激派に益々接近し長春会議以後は「ダリタ」通信と相待って過激派系の両翼たりし感あり、紙面整頓し記事比較的正確且つ金力に於て猶太人系の後援を有し最も有力なる新聞たり、一般排日記事を掲載し支那側に媚を呈す、「チタ」政権の擁護をなし当地に於ける機関紙たりと自任す
,,,
Title of Newspaper 3,北満洲(Kita Manshū),哈爾賓日々新聞(Harbin Nichi-Nichi Shimbun),哈爾賓日々新聞(Harbin Nichi-Nichi Shimbun)
(Language),邦文,邦文,邦文
(Ideology),,,
(Owner),合資組織,社長 児玉右二,株式組織 (社長)児玉右二
(Editior),木下猛、遠藤規矩郎,編集長代理 阿武信一,(副社長)磯部検三 (主筆)阿武信一
(Publication Frequency),日刊,日刊,日刊
(Circulation),"1,000内外",,
(Others),大正3年7月創刊,大正11年1月創刊の予定、西比利亜新聞、北満洲及哈爾賓新聞の合同せるものなり,大正11年1月創刊
Yes, it's also in numerous non-Latin languages, which makes it a little bit more challenging.
I want to create a new matrix for every year, then rotate the table to turn the 7 rows for each newspaper into columns so that I end up with each row corresponding to one newspaper. Finally, I need to generate a new column that gives me the location of the newspaper. I would also like to add a unique identifier for each newspaper and another column that states the year, just in case I decide to merge the entire dataset into a single matrix. I did the transformation manually in Excel, but the entire dataset contains data from several thousand newspapers, so I need to automate the process. Here is what I want to achieve (sans unique identifier and year column):
Title of Newspaper,Language,Ideology,Owner,Editor,Publication Frequency,Circulation,Others,Location
直隷公報(Zhi Li Gong Bao),漢文,直隷省公署の公布機関,直隷省,,日刊,2500,光緒22年創刊、官報の改称,Tientsin
大公報(Da Gong Bao),漢文,稍親日的,合資組織,樊敏鋆,日刊,,光緒28年創刊、倪嗣仲の機関にて現に王祝山其の全権を握り居れり、9年夏該派の没落と共に打撃を受け少しく幹部を変更して再発行せり、但し資金は依然王より供給し居れり,Tientsin
天津日々新聞(Tianjin Ri Ri Xin Wen),漢文,日支親善,方若,郭心培,日刊,2000,光緒27年創刊、親日主義を以て一貫す、國聞報の後身なり民国9年安直戦争中直隷派の圧迫を受けたるも遂に屈せさりし,Tientsin
時聞報(Shi Wen Bao),漢文,中立,李大義,王石甫,,1000,光緒30年創刊、紙面相当価値あり,Tientsin
Is there a way of doing this in R? How would I go about it?
I've outlined a plan in a comment above. This is untested code that makes it more concrete; I'll keep testing till it works.
inps <- readLines("~/Documents/R_code/Tientsin unformatted.txt")
inp2 <- inps[ -(1:2) ]
# 'identify groupings with cumsum(grepl(",,,", inp2) as the second arg to split'
inp.df <- lapply( split(inp2, cumsum(grepl(",,,", inp2))),
                  function(chunk) read.csv(text = chunk, header = FALSE) )
library(data.table) # only needed if you use rbindlist; not needed for do.call(rbind, ...)
# make a list of one-line dataframes as below
# finally run rbindlist or do.call(rbind, ...)
in.dt <- do.call( rbind, inp.df ) # rbind matches data.frame columns by name
This is the step that makes a one line dataframe from a set of text lines:
txt <- 'Title of Newspaper 1,遠東報(Yuan Dong Bao),,
(Language),漢文,,
(Ideology),東支鉄道機関紙,,
(Owner),(総経理)史秉臣,,
(Editior),張福臣,,
(Publication Frequency),日刊,,
(Circulation),"1,000",,
(Others),1908年創刊、東支鉄道に支那か勢力を扶殖せし以来著しく排日記事を記載す,,'
temp <- read.table(text = txt, sep = ",", colClasses = c(rep("character", 2), NA, NA))
in1 <- setNames( data.frame(as.list(temp$V2)), temp$V1)
in1
#-------------------------
Title of Newspaper 1 (Language) (Ideology) (Owner) (Editior) (Publication Frequency) (Circulation)
1 遠東報(Yuan Dong Bao) 漢文 東支鉄道機関紙 (総経理)史秉臣 張福臣 日刊 1,000
(Others)
1 1908年創刊、東支鉄道に支那か勢力を扶殖せし以来著しく排日記事を記載す
So it looks like the column names of the individually constructed items would need further processing to make them capable of being successfully "bindable" by dplyr::bind_rows or data.table::rbindlist.
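A rough sketch of that further processing (untested, in the same spirit; one_line_dfs stands in for the list of one-line data frames built like in1 above):
# normalise names: drop the block number from the title header and the
# parentheses from the field labels, so every element shares one schema
normalise_names <- function(df) {
  nm <- names(df)
  nm <- sub("^Title of Newspaper \\d+$", "Title of Newspaper", nm)
  nm <- gsub("[()]", "", nm)   # "(Language)" -> "Language"
  setNames(df, nm)
}
# fill = TRUE turns fields missing from a block into NA
in.dt <- data.table::rbindlist(lapply(one_line_dfs, normalise_names), fill = TRUE)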
The title doesn't really do my question justice, because there are probably a few ways to skin this cat. But I picked one approach and went with it. This is what I'm working with:
I've pulled all the metadata for a particular study in the NCBI database using the "Send to:" option on their interface and downloading a .txt file.
In total, I have ~23k samples, each with up to 609 unique questions and answers from a questionnaire totaling 8M+ obs of 1 variable when read as a .csv. To my dismay, the metadata are irregular. Some samples have 140 associated key/value pairs. Others have 492. I've included a header of a sample below.
1: qiita_sid_10317:10317.BLANK1.6H.GUELPH
Identifiers: BioSample: SAMEA4790059; SRA: ERS2609990
Organism: metagenome
Attributes:
/Alias="qiita_sid_10317:10317.BLANK1.6H.GUELPH"
/description="American Gut control"
/ENA checklist="ERC000011"
/INSDC center alias="UCSDMI"
/INSDC center name="University of California San Diego Microbiome Initiative"
/INSDC first public="2018-07-13T17:03:10Z"
/INSDC last update="2018-07-13T14:50:03Z"
/INSDC status="public"
/SRA accession="ERS2609990"
I've tried (including but not limited to):
Read .txt file (adding a delimiter hasn't made a difference, am I missing something here?)
I've tried reading the data using various delimiters
I've even removed the header data in Sublime Text, leaving only "Attributes:" and the "/"-delimited key/value pairs in order to mess with the column that way
I've split the column and found all unique values in col1 to maybe create a df from scratch, etc., etc.
Can't seem to get past the cleaning steps:
library(splitstackshape)  # cSplit comes from here
samples <- read.csv("~/biosample_result_full.txt")
samples_split <- cSplit(samples, splitCols = samples$Colname, sep = "=")
samples_split$Attributes_1 <- gsub(" ", "_", samples_split$Attributes_1)
questions <- unique(samples_split$Attributes_1)
Ideally, each sample and associated metadata would be transformed into rows, with each "Attribute"/question as the column name.
Any help is greatly appreciated.
I see that the website you've linked to allows for the option to export data to XML. I strongly suggest you do so; R can handle/parse XML files very efficiently.
When I download the first three results from that site to a file biosample_result.xml, it's easy to process using the xml2 package:
library( xml2 )
library( magrittr )
doc <- read_xml( "./biosample_result.xml")
# get all BioSample nodes
BioSample.Nodes <- xml_find_all( doc, "//BioSample")
#build a data.frame
data.frame(
sample_name = xml_find_first( BioSample.Nodes , ".//Id[@db='SRA']") %>% xml_text(),
stringsAsFactors = FALSE )
# sample_name
# 1 ERS2609990
# 2 ERS2609989
# 3 ERS2609988
So if you can use the XML, you will just have to use the right xpath-syntax to get the data/nodes you need, into the columns you want...
In the example above, I extracted (from each BioSample node) the first Id node with attribute db equal to SRA, and stored the result in the column sample_name.
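Extending that idea, here is a hedged example pulling a second column. The Attribute/attribute_name structure is confirmed by the code further below, but whether the Alias field's attribute_name is literally 'Alias' is an assumption to verify against your XML:
data.frame(
  sample_name = xml_find_first( BioSample.Nodes , ".//Id[@db='SRA']") %>% xml_text(),
  alias       = xml_find_first( BioSample.Nodes , ".//Attribute[@attribute_name='Alias']") %>% xml_text(),
  stringsAsFactors = FALSE )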
Still assuming you can use the XML data: if you are looking to get all attributes into one df, you need the functions from purrr, so just load the entire tidyverse
library( tidyverse )
df <- xml_find_all( doc, "//BioSample") %>%
map_df(~{
set_names(
xml_find_all(.x, ".//Attribute") %>% xml_text(),
xml_find_all(.x, ".//Attribute") %>% xml_attr( "attribute_name" )
) %>%
as.list() %>%
flatten_df()
})
will result in a df with one row per BioSample node and one column per attribute_name.
I am trying to build a data frame with book id, title, author, rating, collection, start and finish date from the LibraryThing api with my personal data. I am able to get a nested list fairly easily, and I have figured out how to build a data frame with everything but the dates (perhaps in not the best way but it works). My issue is with the dates.
The list I'm working with normally has 20 elements, but it adds the startfinishdates element only if I added dates to the book in my account. This is causing two issues:
If it was always there, I could extract it like everything else and it would have NA most of the time, and I could use cbind to get it lined up correctly with the other information
When I extract it using the name, and get an object with less elements, I don't have a way to join it back to everything else (it doesn't have the book id)
Ultimately, I want to build this data frame and an answer that tells me how to pull out the book id and associate it with each startfinishdate so I can join on book id is acceptable. I would just add that to the code I have.
I'm also open to learning a better approach from the jump and re-designing the entire thing as I have not worked with lists much in R and what I put together was after much trial and error. I do want to use R though, as ultimately I am going to use this to create an R Markdown page for my web site (for instance, a plot that shows finish dates of books).
You can run the code below and get the data (no api key required).
library(jsonlite)
library(tidyverse)
library(assertr)
data<-fromJSON("http://www.librarything.com/api_getdata.php?userid=cau83&key=392812157&max=450&showCollections=1&responseType=json&showDates=1")
books.lst<-data$books
#create df from json
create.df<-function(item){
df<-map_df(.x=books.lst,~.x[[item]])
df2 <- t(df)
return(df2)
}
ids<-create.df(1)
titles<-create.df(2)
ratings<-create.df(12)
authors<-create.df(4)
#need to get the book id when i build the date df's
startdates.df<-map_df(.x=books.lst,~.x$startfinishdates) %>% select(started_stamp,started_date)
finishdates.df<-map_df(.x=books.lst,~.x$startfinishdates) %>% select(finished_stamp,finished_date)
collections.df<-map_df(.x=books.lst,~.x$collections)
#from assertr: will create a vector of same length as df with all values concatenated
collections.v<-col_concat(collections.df, sep = ", ")
#assemble df
books.df<-as.data.frame(cbind(ids,titles,authors,ratings,collections.v))
names(books.df)<-c("ID","Title","Author","Rating","Collections")
books.df<-books.df %>% mutate(ID=as.character(ID),Title=as.character(Title),Author=as.character(Author),
Rating=as.character(Rating),Collections=as.character(Collections))
This approach is outside the tidyverse meta-package; using base R you can make it work with the following code.
Map applies the user-defined function to each element of data$books and extracts the required fields for your data.frame. Reduce then takes all the individual data frames and merges (reduces) them into a single data.frame, booksdf.
library(jsonlite)
data<-fromJSON("http://www.librarything.com/api_getdata.php?userid=cau83&key=392812157&max=450&showCollections=1&responseType=json&showDates=1")
booksdf=Reduce(function(x,y){rbind(x,y)},
Map(function(x){
lenofelements = length(x)
if(lenofelements>20){
if(!is.null(x$startfinishdates$started_date)){
started_date = x$startfinishdates$started_date
}else{
started_date=NA
}
if(!is.null(x$startfinishdates$started_stamp)){
started_stamp = x$startfinishdates$started_stamp
}else{
started_stamp=NA
}
if(!is.null(x$startfinishdates$finished_date)){
finished_date = x$startfinishdates$finished_date
}else{
finished_date=NA
}
if(!is.null(x$startfinishdates$finished_stamp)){
finished_stamp = x$startfinishdates$finished_stamp
}else{
finished_stamp=NA
}
}else{
started_stamp = NA
started_date = NA
finished_stamp = NA
finished_date = NA
}
book_id = x$book_id
title = x$title
author = x$author_fl
rating = x$rating
collections = paste(unlist(x$collections),collapse = ",")
return(data.frame(ID=book_id,Title=title,Author=author,Rating=rating,
Collections=collections,Started_date=started_date,Started_stamp=started_stamp,
Finished_date=finished_date,Finished_stamp=finished_stamp))
},data$books))
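As a side note on the Reduce wrapper: pairwise rbind copies the accumulating data frame at every step, whereas do.call hands the whole list to rbind in a single pass, which is the more common base-R idiom. A toy illustration:
dfs <- list(data.frame(a = 1), data.frame(a = 2), data.frame(a = 3))
do.call(rbind, dfs)  # one pass
Reduce(rbind, dfs)   # pairwise; same result, more copying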
I'm attempting to loop through multiple CSV files and complete the same task for each file to save myself time. First, I ran list.files to list all files in the folder (e.g., GPS_Collar33800_13.csv, GPS_Collar33801_13.csv, etc.). I then started a loop, but I'm struggling with how to structure the other parts of the code to work through each individual file. My end goal is to have 24 files that all look the same structurally, which I then need to merge into a master file.
Another issue is that I need a unique ID for each file (a collar ID column, e.g., 33800, 33801, 33802, etc.), and I don't know how to add it without doing so by hand (if I knew it was bringing in GPS_Collar33800_13.csv first, I could set the AnimalID column to 33800, then do the same for GPS_Collar33801_13.csv with 33801, and so on). The unique IDs are based on the file names. Any suggestions would be much appreciated!
## List CSV files in folder
files<-list.files()
## Run a for loop to complete the same tasks for each
for (i in 1:length(files)){
## Read table
tmp<-read.table(files[i],header=FALSE,sep=" ")
## Keep certain columns
tmp1 <- tmp[c(2:5,9,10,12,13)]
#Name the remaining columns
names(tmp1) <-
c("GMT_Date","GMT_Time","LMT_Date","LMT_Time","Latitude","Longitude","PDOP","2D_3D")
#Add column for collar ID
tmp1$AnimalID<-33800
#Cleanup dataframe by removing records with NAs
tmp1[tmp1 == "N/A"] <- NA
tmp2<-na.omit(tmp1)
}
You can give this a try:
library(stringr)
## List CSV files in folder
files<-list.files()
big.df <- vector('list',length(files))
## Run a for loop to complete the same tasks for each
for (i in 1:length(files)){
## Read table
tmp<-read.table(files[i],header=FALSE,sep=" ")
## Keep certain columns
tmp1 <- tmp[c(2:5,9,10,12,13)]
#Name the remaining columns
names(tmp1) <-
c("GMT_Date","GMT_Time","LMT_Date","LMT_Time","Latitude","Longitude","PDOP","2D_3D")
#Add column for collar ID
tmp1$AnimalID<-str_match(files[i], 'Collar(\\d+)_')[,2]
#Cleanup dataframe by removing records with NAs
tmp1[tmp1 == "N/A"] <- NA
tmp2<-na.omit(tmp1)
big.df[[i]] <- tmp2
}
final.df <- do.call('rbind', big.df)
It will require the stringr package and assumes your filenames all look like 'GPS_Collar33801_13.csv', etc. It then reads in each file, stores it in a large list, moves to the next file... and when it's done, it mashes them all together in a data.frame called final.df.
Edit: Just fixed the str_match argument.
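If you want to sanity-check that pattern before running the loop (using the filename format from the question):
library(stringr)
str_match("GPS_Collar33800_13.csv", "Collar(\\d+)_")[, 2]
# [1] "33800"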
So let me make sure before I begin that I understand the ask:
For each file in the folder:
1. Import the file as a data frame
2. Drop some columns
3. Rename the remaining columns
4. Set a column in the data frame to a value obtained from the file name
5. Remove cases containing the string "N/A" anywhere
Then, combine each of the resulting data frames into one data frame by UNION-ing them (that is, adding the rows together, because the columns should be the same).
It's critically important that you provide your data with any such question. If you can't provide your specific data, create some fake data that still demonstrates the problem at hand. Then, provide an example of what it should look like once the operations are complete. This reduces guesswork by the people answering your question.
So with all that said, let's get cracking.
Let's abstract away the per-file steps by pretending that we have a function called process_a_file that will do steps 1-5 for an individual file and return a data frame. I can explain how that function works later.
For the "for each file" part, you need lapply. lapply runs a given function on each element of a list you provide, and returns a list of what the function returns:
results_list <- lapply(files, process_a_file)
This will return a list, where each element of the list is a data frame returned by process_a_file. Then you need a function to combine them - I recommend bind_rows from the package dplyr:
results_df <- dplyr::bind_rows(results_list)
And that's all you need to do!
So, now, what do we put in process_a_file? This is pretty easy - your code is mostly complete for doing this, but there are some different ways to do it that I prefer :)
process_a_file <- function(filename) {
#???????
}
Step 1 is to import the file as a data frame. For this I recommend read_delim from the readr package - it's much faster than the default R methods, has nice defaults, and lets us tackle Step 5 at the same time by specifying that "N/A" means NA:
df <- readr::read_delim(filename, delim = " ", col_names = FALSE, na = "N/A")
For step 2, your way works, but I also recommend the select function from dplyr:
dplyr::select(df, 2:5, 9, 10, 12, 13)
You can also index columns with unquoted names, and drop columns with -5 or -column_name too - and you can do step 3 at the same time!
df <- dplyr::select(
df,
GMT_Date = 2,
GMT_Time = 3,
LMT_Date = 4,
LMT_Time = 5,
Latitude = 9,
Longitude = 10,
PDOP = 12,
`2D_3D` = 13
)
Your way of renaming the columns is fine, too. By the way, if you start a column name with a number, you have to use this `backtick` syntax everywhere, so it's quite inconvenient and you should probably avoid it if you can.
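For instance, picking a letter-first replacement name up front (the name here is purely illustrative) avoids the backticks downstream:
df <- dplyr::rename(df, Fix_2D_3D = `2D_3D`)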
Then finally, I recommend getting the ID from the file name using regular expressions. I'll assume you can write that regular expression, since that's really out of scope - so you can use basename(tools::file_path_sans_ext(filename)) to return the filename without the path or extension, and use stringr::str_extract to pop out the ID, which you then add to a column using dplyr::mutate
dplyr::mutate(df, animal_id = stringr::str_extract(basename(tools::file_path_sans_ext(filename)), "THE REGEX GOES HERE"))
So now, putting this all together - using dplyr's piping syntax %>% to make it look nice:
process_a_file <- function(filename) {
readr::read_delim(filename,
delim = " ",
col_names = FALSE,
na = "N/A") %>%
dplyr::select(
GMT_Date = 2,
GMT_Time = 3,
LMT_Date = 4,
LMT_Time = 5,
Latitude = 9,
Longitude = 10,
PDOP = 12,
`2D_3D` = 13
) %>%
dplyr::mutate(animal_id = stringr::str_extract(basename(tools::file_path_sans_ext(filename)), "THE REGEX GOES HERE"))
}
results_list <- lapply(files, process_a_file)
results_df <- dplyr::bind_rows(results_list)
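One assumption worth making explicit: files should hold paths that are readable from where you run this, e.g. built with full.names = TRUE (the folder path below is a placeholder):
files <- list.files("path/to/gps/files", pattern = "\\.csv$", full.names = TRUE)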
I have data in Excel sheets and I need a way to clean it. I would like to remove inconsistent values; for example, a branch name may be specified as Computer Science and Engineering, C.S.E, C.S, or Computer Science. How can I bring all of them into a single notation?
The car package has a recode function. See its help page for worked examples.
In fact an argument could be made that this should be a closed question:
Why is recode in R not changing the original values?
How to recode a variable to numeric in R?
Recode/relevel data.frame factors with different levels
And a few more questions easily identifiable with a search: [r] recode
EDIT:
I liked Marek's comment so much I decided to make a function that implemented it. (Factors have always been one of those R traps for me, and his approach seemed very intuitive.) The function is designed to take character or factor input and return a grouped result that also collects any unmatched levels into an "all_others" level.
my_recode <- function(fac, levslist) {
  # levslist is of the form:
  #   list(animal = c("cow", "pig"),
  #        bird   = c("eagle", "pigeon"))
  nfac <- factor(fac)
  inlevs <- levels(nfac)
  othrlevs <- inlevs[ !inlevs %in% unlist(levslist) ]
  levels(nfac) <- c(levslist, all_others = othrlevs)
  nfac
}
df <- data.frame(name = c('cow','pig','eagle','pigeon', "zebra"),
stringsAsFactors = FALSE)
df$type <- my_recode(df$name, list(
animal = c("cow", "pig"),
bird = c("eagle", "pigeon") ) )
df
#-----------
name type
1 cow animal
2 pig animal
3 eagle bird
4 pigeon bird
5 zebra all_others
You want a way to clean your data and you specify R. Is there a reason for it? (automation, remote control [console], ...)
If not, I would suggest Open Refine. It is a great tool for exactly this job. It is not hosted, so you can safely download it and run it against your dataset (xls/xlsx work fine); you then create a text facet and group away.
It uses advanced algorithms (and even gives you a choice) and is really helpful. I have cleaned a lot of data in no time.
The videos at the official web site are useful.
There is no one-size-fits-all solution for these types of problems. From what I understand, you have branch names that are inconsistently labelled.
You would like to see C.S.E, but what you actually have is CS, Computer Science, CSE, etc., and perhaps a number of other branch names that are inconsistent.
The first thing I would do is get a unique list of the branch names in the file. I'll provide an example using the built-in letters vector so you can see what I mean:
your_df <- data.frame(ID=1:2000)
your_df$BranchNames <- sample(letters,2000, replace=T)
your_df$BranchNames <- as.character(your_df$BranchNames) # only if it's a factor
unique.names <- sort(unique(your_df$BranchNames))
Now that we have a sorted list of unique values, we can create a listing of recodes:
Let's say we wanted to recode the first seven unique names ("a" through "g") as just "A":
your_df$BranchNames[your_df$BranchNames %in% unique.names[1:7]] <- "A"
And you'd repeat the process above, eliminating or grouping the unique names as appropriate.
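Applied to the branch names from the question, the same idea works with a named lookup vector; the variant spellings below are assumptions about what your data contains:
# map known variants to one canonical notation
branch_df <- data.frame(BranchNames = c("Computer Science and Engineering",
                                        "C.S", "C.S.E", "Computer Science"),
                        stringsAsFactors = FALSE)
lookup <- c("Computer Science and Engineering" = "C.S.E",
            "C.S"                              = "C.S.E",
            "Computer Science"                 = "C.S.E")
hits <- branch_df$BranchNames %in% names(lookup)
branch_df$BranchNames[hits] <- lookup[branch_df$BranchNames[hits]]
branch_df$BranchNames
# [1] "C.S.E" "C.S.E" "C.S.E" "C.S.E"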