Rename multiple colnames using a dictionary - r

I have multiple csv files containing the same information (such as "age") under different spellings of the column names. To standardize them, I plan to read each csv into a dataframe, standardize its column names, and then write the csv back out.
Therefore, I created a dictionary dataframe with an old_name column and a new_name column (the dict object in the answer below).
I am struggling to find a way to do the following in R:
Look through each of the colnames of the dataframe and compare each one to every "old_name" in the dictionary dataframe.
If it finds a match, replace the "old_name" with the "new_name".
Any help would be really useful!
Edit: the issue is not only with upper and lower case. For example, in some cases it could be "years" instead of "age".

Here is a quick and dirty approach. I wrote a function so you could just change the arguments and quickly cycle through all your files. Using the stringi package is optional -- I'm using it to check the provided .csv file name, but you could remove that if you decide it's unnecessary.
library(stringi)

dict <- data.frame(path = c('../csv1','../csv1','../csv2','../csv3','../csv3'),
                   old_name = c('Age','agE','Name','years','NamE'),
                   new_name = c('age','age','name','age','name'))

example_csv <- data.frame(Age = c(43,34,42,24),
                          NamE = c('Michael','Jim','Dwight','Kevin'))

standardizeColumnNames <- function(df, csvFileName, dictionary){
  colHeaders <- character(ncol(df))
  for(i in 1:ncol(df)){
    # look up the current column name in the dictionary
    index <- which(dictionary$old_name == names(df)[i])
    if(length(index) > 0){
      colHeaders[i] <- as.character(dictionary$new_name[index[1]])
    } else {
      colHeaders[i] <- names(df)[i]
    }
  }
  names(df) <- colHeaders
  # append '.csv' to the file name if it isn't already there
  if(stri_sub(csvFileName, -4) != '.csv'){
    csvFileName <- paste0(csvFileName, '.csv')
  }
  write.csv(df, csvFileName)
}

standardizeColumnNames(example_csv, 'test_file_name', dict)
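If you prefer to avoid the explicit loop, the lookup can also be vectorized with match(). This is just a sketch of the renaming step, assuming the same dict and example_csv objects as above (renameFromDict is a made-up name):

renameFromDict <- function(df, dictionary) {
  # position of each column name in the dictionary, NA where there is no match
  hits <- match(names(df), dictionary$old_name)
  names(df)[!is.na(hits)] <- as.character(dictionary$new_name[hits[!is.na(hits)]])
  df
}

names(renameFromDict(example_csv, dict))  # "age" "name"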

Related

Nested Json with Different Attribute Names in R

I am playing with the Kaggle Star Trek Scripts dataset, but I am struggling with converting the json to a dataframe in R. Ideally I would convert it into a long-form dataset with index columns for episode and character, with their lines on individual rows. I did find this answer, however it is not in R.
Sorry it is not a full example, but I put a small mocked version below. If you want, you can download the data yourself from here.
Mock Example
"ENT": {
"episode_0": {
"KLAANG": {
"\"Pungghap! Pung ghap!\"": {},
"\"DujDajHegh!\"": {}
}
},
"eipsode_1": {
"ARCHER": {
"\"Warpme!\"": {},
"\"Toboldly go!\"": {}
}
}
}
}
The issue I have is that the second level, episodes, is individually numbered, so my regular bag of tricks for flattening by attribute name does not work. I am unsure how to loop through a level rather than an attribute name.
What I would ideally want is a long form data set that looks like this:
Series Episode Character Lines
ENT episode_0 KLAANG Pung ghap! Pung ghap!
ENT episode_0 KLAANG DujDaj Hegh!
ENT episode_1 ARCHER Warp me!
ENT episode_1 ARCHER To boldly go!
My current code looks like the below, which is what I would normally start with, but it is obviously not working or not going far enough.
your_df <- result[["ENT"]] %>%
  purrr::flatten() %>%
  map_if(is_list, as_tibble) %>%
  map_if(is_tibble, list) %>%
  bind_cols()
I have also tried using stack() and map_dfr(), but with no success. So I yet again come humbly to you, dear reader, for expertise. Json is the bane of my existence. I struggle with applying other answers to my circumstances, so any advice or examples I can reverse engineer and learn from are most appreciated.
Also happy to clarify or expand on anything if possible.
-Jake
So I was able to brute force it thanks to an answer from Michael on the thread How to flatten a list of lists?, so shout out to them.
The function allowed me to convert the JSON (read in as nested lists) into a flat list of lists.
flattenlist <- function(x){
  morelists <- sapply(x, function(xprime) class(xprime)[1] == "list")
  out <- c(x[!morelists], unlist(x[morelists], recursive = FALSE))
  if(sum(morelists)){
    Recall(out)
  } else {
    return(out)
  }
}
So, putting it all together, I ended up with the following solution. Annotations for your entertainment.
library(jsonlite)
library(tidyverse)
library(dplyr)
library(data.table)
library(rjson)  # loaded last, so the fromJSON(file = ...) call below is rjson's

result <- fromJSON(file = "C:/Users/jacob/Downloads/all_series_lines.json")

# Mike's function to get to a list of lists
flattenlist <- function(x){
  morelists <- sapply(x, function(xprime) class(xprime)[1] == "list")
  out <- c(x[!morelists], unlist(x[morelists], recursive = FALSE))
  if(sum(morelists)){
    Recall(out)
  } else {
    return(out)
  }
}

# Mike's function applied
final <- as.data.frame(do.call("rbind", flattenlist(result)))

# Turn all the lists into a master data frame and ensure the index becomes
# a column I can separate later for context.
final <- cbind(Index_Name = rownames(final), final)
rownames(final) <- 1:nrow(final)

# The output takes the final elements at the end of the JSON and makes those
# the variables in a dataframe, so I need to force it back to a long-form dataset.
final2 <- gather(final, "key", "value", -Index_Name)

# Separate each element of Index_Name into my three mapping variables:
# Series, Episode and Character. I can also keep the original column names
# from above as a script line id.
final2$Episode   <- gsub(".*\\.(.*)\\..*", "\\1", final2$Index_Name)
final2$Series    <- substr(final2$Index_Name, start = 1, stop = 3)
final2$Character <- sub(".*\\.", "", final2$Index_Name)  # fixed: was final2$newColName, which doesn't exist
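For anyone who prefers to skip the flatten-then-reshape step, here is a purrr sketch that walks the nesting directly and builds the long table in one pass. It is untested against the full Kaggle file and assumes every level is a named list nested exactly as in the mock above (series > episode > character > line):

library(tidyverse)
library(rjson)

result <- fromJSON(file = "all_series_lines.json")

lines_df <- map_dfr(result, function(episodes) {
  map_dfr(episodes, function(characters) {
    map_dfr(characters, function(lines) {
      # the lines themselves are the names of the innermost lists
      tibble(Lines = names(lines))
    }, .id = "Character")
  }, .id = "Episode")
}, .id = "Series") %>%
  mutate(Lines = gsub('"', "", Lines))  # strip the embedded quote marks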

Binding rows of multiple data frames into one data frame in R

I have a vector of file paths called dfs, and I want to read those files into dataframes and bind them together into one huge dataframe, so I did something like this:
for (df in dfs){
  clean_df <- bind_rows(as.data.table(read.delim(df, header = TRUE, sep = "|")))
  return(clean_df)
}
but only the last item in the dataframe is being returned. How do I fix this?
I'm not sure about your file format, so I'll take the common .csv as an example. Replace the a * i part with actually reading your different files, instead of just generating mockup data.
files = list()
for (i in 1:10) {
  a = read.csv('test.csv', header = FALSE)
  a = a * i  # mockup data; read a different file here instead
  files[[i]] = a
}
full_frame = data.frame(data.table::rbindlist(files))
The problem is that you can only pass one file at a time to the function read.delim(). So the solution is to use a function like lapply() to read in each file listed in your dfs vector.
Here's an example, and you can find other answers to your question here.
library(tidyverse)

df <- c("file1.txt", "file2.txt")
all.files <- lapply(df, function(i){ read.delim(i, header = TRUE, sep = "|") })
clean_df <- bind_rows(all.files)
(clean_df)
Note that you don't need the function return(); wrapping clean_df in parentheses prompts R to print the variable.
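The lapply() plus bind_rows() pair can also be collapsed into a single purrr call. A minimal sketch, using the same hypothetical file names as above:

library(tidyverse)

# read each delimited file and row-bind the results in one step
clean_df <- map_dfr(c("file1.txt", "file2.txt"),
                    ~ read.delim(.x, header = TRUE, sep = "|"))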

Apply function to all dataframes

I work with SAS data files (sas7bdat = dataframes) and SAS formats (sas7bcat).
My sas7bdat files are in a "data" folder, so I can get a list of them in the object files_names.
Here is the first part of my code, which works perfectly:
files_names <- list.files(here("data"))
nb_files <- length(files_names)
data_names <- vector("list", length = nb_files)
for (i in 1:nb_files) {
  data_names[i] <- strsplit(files_names[i], split = ".sas7bdat")
}
for (i in 1:nb_files) {
  assign(data_names[[i]],
         read_sas(paste(here("data", files_names[i])), "formats/formats.sas7bcat"))
}
but I run into issues when trying to apply the function as_factor from package haven (in order to apply the labels to my new dataframes and get, for example, SEX = "Male" instead of SEX = 1).
I can make it work dataframe by dataframe, like the code below:
df_labelled <- haven::as_factor(df, only_labelled = TRUE)
I would like to do this in a loop, but it didn't work because my data_names[i] isn't a dataframe, and as_factor requires a dataframe as its first argument.
I'm quite new to R; thank you very much if someone could help me.
You might want to think about using different data structures: for example, you can use a named list to save your dataframes, and then you can easily loop through them.
In fact you could do everything in one loop. I'm sure there's a more efficient way to do this, but here's an example of one way without changing your code too much:
files_names <- list.files(here("data"))
raw_dfs <- list()
labelled_dfs <- list()
for (file_name in files_names) {
  # strsplit returns a list, so either extract the first element like this:
  # df_name <- (strsplit(file_name, split = ".sas7bdat"))[[1]]
  # or use something else like gsub:
  df_name <- gsub(".sas7bdat", "", file_name)
  # note the double brackets: [[ ]] stores the whole dataframe in one list slot
  raw_dfs[[df_name]] <- read_sas(paste(here("data", file_name)), "formats/formats.sas7bcat")
  labelled_dfs[[df_name]] <- haven::as_factor(raw_dfs[[df_name]], only_labelled = TRUE)
}
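The same idea also fits in a couple of lapply() calls instead of a loop. A sketch, assuming the same "data" folder layout and formats catalog as above:

library(haven)
library(here)

files_names <- list.files(here("data"), pattern = "\\.sas7bdat$")

# read every file into a named list of dataframes
raw_dfs <- lapply(files_names, function(f) read_sas(here("data", f), "formats/formats.sas7bcat"))
names(raw_dfs) <- gsub("\\.sas7bdat$", "", files_names)

# apply the value labels to every dataframe in the list
labelled_dfs <- lapply(raw_dfs, haven::as_factor, only_labelled = TRUE)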

I want to create a data.frame with the values printed by this loop in R

When I run this loop I can print the results, and I want to collect them in a data frame, but I can't. Until now I have this:
filenames <- list.files(path = getwd())
numfiles <- length(filenames)
for (i in 1:numfiles) {
  file <- read.table(filenames[i], header = TRUE)
  ts <- subset(file, file$name == "plantNutrientUptake")
  tss <- subset(ts, ts$path == "//plants/nitrate")
  tssc <- tss[, 2:3]
  d40 <- tssc[41, 2]
  print(d40)
  print(filenames[i])
}
This is not the most efficient way to do this, but it takes advantage of the code you've already written. First, you'll create an empty data frame with the columns you want, filled with NA. Then, in each iteration of the loop, you'll fill one row of the data frame.
filenames <- list.files(path = getwd())
numfiles <- length(filenames)

# Create an empty data.frame
df <- data.frame(filename = rep(NA, numfiles), d40 = rep(NA, numfiles))

for (i in 1:numfiles){
  file <- read.table(filenames[i], header = TRUE)
  ts <- subset(file, file$name == "plantNutrientUptake")
  tss <- subset(ts, ts$path == "//plants/nitrate")
  tssc <- tss[, 2:3]
  d40 <- tssc[41, 2]
  # Fill row i of the data frame
  df[i, "filename"] <- filenames[i]
  df[i, "d40"] <- d40
}
Hope that does it! Good luck :)
There are a lot of ways to do what you are asking. Also, without a reproducible example it is difficult to validate that code will run. I couldn't tell what type of data each of your variables held, so I guessed that the extracted d40 value is numeric and stored the filename as character. You'll need to change the code if that's not true.
The following method uses base R (no other packages) and builds off of what you have done. There are other ways to do this using map, do.call, or apply, but it's important to be able to run through a loop.
As someone commented, your code is just re-writing itself every loop. Luckily you have the variable i that you can use to specify where things go.
filenames <- list.files(path = getwd())
numfiles <- length(filenames)

# Declare an empty dataframe up front for efficiency purposes
df <- data.frame(
  filename = rep(NA_character_, numfiles),
  d40 = rep(NA_real_, numfiles),
  stringsAsFactors = FALSE
)

# Loop through the files and fill in the data
for (i in 1:numfiles){
  file <- read.table(filenames[i], header = TRUE)
  # keep the intermediate subsets as ordinary variables...
  ts <- subset(file, file$name == "plantNutrientUptake")
  tss <- subset(ts, ts$path == "//plants/nitrate")
  tssc <- tss[, 2:3]
  d40 <- tssc[41, 2]
  # ...and store only the extracted value (plus the filename) in row i
  df$filename[i] <- filenames[i]
  df$d40[i] <- d40
}
You'll notice a few extra things about this code.
First, I'm declaring the variable type for each column explicitly. You can use rep(NA, numfiles), but that leaves R to guess what the column should be. This may not be a problem for you if all of your variables are obviously of the same type. But imagine you have a variable a = c("1","A","B") of all characters: R will go through the first iteration of the loop and guess that the column is numeric, and then the second run of the loop will crash when it runs into a character.
Next, I'm declaring the entire dataframe before entering the loop. When people tell you that loops in [modern] R are slow, it is often because you are re-allocating memory every loop. By declaring the entire dataframe up front you speed up the loop significantly. This also allows you to reference any cell in the dataframe...which is exactly what you want to do in the loop.
Finally, I'm using the $ syntax to make things clear. Writing df[i, "d40"] <- d40 is the same as writing df$d40[i] <- d40; I just think it is clearer to use the second method. This is a matter of personal preference.
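For completeness, here is a loop-free sketch of the same extraction. It is hypothetical (d40_from_file is a made-up helper name) and assumes every file has the same name and path columns as in the question:

# pull the 41st nitrate-uptake value from one file
d40_from_file <- function(f) {
  file <- read.table(f, header = TRUE)
  tss <- subset(file, name == "plantNutrientUptake" & path == "//plants/nitrate")
  tss[41, 3]  # same cell as tssc[41, 2] in the loop above
}

df <- data.frame(filename = filenames,
                 d40 = vapply(filenames, d40_from_file, numeric(1)),
                 row.names = NULL)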

Reading nodes from multiple html and storing result as a vector

I have a list of locally saved html files. I want to extract multiple nodes from each html file and save the results in a vector; afterwards, I would like to combine them in a dataframe. I have a piece of code for one node which works (see below), but it seems quite long and inefficient if I apply it to ~20 variables. Also, something really strange happens when saving to the vector (XXX_name): it starts with the last observation and then continues with the first, second, .... Do you have any suggestions for simplifying the code / making it more efficient?
# Extracts name variable and stores in a vector
XXX_name <- c()
for (i in 1:216) {
  XXX_name <- c(XXX_name, name)
  mydata <- read_html(files[i], encoding = "latin-1")
  reads_name <- html_nodes(mydata, 'h1')
  name <- html_text(reads_name)
  #print(i)
  #print(name)
}
Many thanks!
You can put the workings inside a function, then apply that function to each of your variables with map. (Incidentally, the strange ordering you noticed comes from appending name to XXX_name at the top of the loop, before name has been recomputed for the current file.)
First, create the function:
library(rvest)

read_names <- function(var, node) {
  mydata <- read_html(files[var], encoding = "latin-1")
  reads_name <- html_nodes(mydata, node)
  html_text(reads_name)
}
Then we create a dataframe with all possible combinations of the inputs and apply the function to it:
library(tidyverse)

inputs <- crossing(var = 1:216, node = vector_of_nodes)  # vector_of_nodes holds your CSS selectors
output <- map2(inputs$var, inputs$node, read_names)
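If you would rather end up with a long dataframe than a list, one hedged follow-up sketch (it allows read_names to return more than one text value per page, hence the unnest()):

output_df <- inputs %>%
  mutate(value = map2(var, node, read_names)) %>%
  unnest(value)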
