I am trying to rename columns in a data frame in R, but the renaming involves circular references (some new names are also existing old names), and the circularity cannot be avoided. I would like a solution to this problem. One idea was to rename each column and move it into a new data frame, thereby avoiding the circular references, but I have not been able to make that work.
The renaming reference is a mapping table of current and standard column names (attached as an image in the original post).
The current function I am using is as follows:
library(data.table)  # setnames() comes from data.table

standard_mapping <- function(mapping.col, current_name, standard_name, data){
  for (i in 1:nrow(mapping.col)) {
    print(i)
    # look up the standard (new) and current (old) names for this row of the mapping
    std.name  <- mapping.col[i, standard_name]
    data.name <- mapping.col[i, current_name]
    if (data.name %in% colnames(data)) {
      setnames(data, old = data.name, new = std.name)
    }
  }
  return(data)
}
mapping.col is the mapping table shown in the image referred to above.
You can rename multiple columns at the same time, and there's no need to move the data that's stored in your data.frame. If you know the order matches, you can just use
names(data) <- mapping.col$new_name
If the order is different, you can use match to first match them to the right positions:
names(data) <- mapping.col$new_name[match(names(data), mapping.col$old_name)]
By the way, assigning names and other attributes in base R is always done through some form of assignment: a function like setNames returns a value that still needs to be assigned to something.
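For illustration, here is a minimal sketch with a hypothetical mapping.col (the real one is in the question's image) containing a circular, i.e. swapped, pair of names; because all names are assigned in a single step, the circularity causes no conflict:
# hypothetical data and mapping table; the real mapping.col is in the question's image
data <- data.frame(a = 1:3, b = 4:6, c = 7:9)
mapping.col <- data.frame(old_name = c("a", "b", "c"),
                          new_name = c("b", "a", "z"),  # a and b swap names: a circular reference
                          stringsAsFactors = FALSE)

# every column name is replaced in one assignment, so the a/b swap is harmless
# (this assumes every column of data appears in mapping.col$old_name)
names(data) <- mapping.col$new_name[match(names(data), mapping.col$old_name)]
names(data)
# [1] "b" "a" "z"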
This topic has been covered numerous times, I see, but I can't really find the answer I'm looking for. Thus, here I go.
I am trying to write a loop that creates variables in 5 data sets with similar names, as such:
Ech_repondants_nom_1
Ech_repondants_nom_2
Ech_repondants_nom_3
Ech_repondants_nom_4
Ech_repondants_nom_5
Below is the code that I have tried:
list <- c(1:5)
for (i in list) {
Ech_repondants_nom_[[i]]$sec = as.numeric(Ech_repondants_nom_[[i]]$interviewtime)
Ech_repondants_nom_[[i]]$min = round(Ech_repondants_nom_[[i]]$sec/60,1)
Ech_repondants_nom_[[i]]$heure = round(Ech_repondants_nom_[[i]]$min/60,1)
}
Any clues why this does not work?
cheers!
These are names of separate objects, not elements of a list, so they cannot be subset as Ech_repondants_nom_[[i]]. We would have to get each object by pasting its name, i.e.
get(paste0("Ech_repondants_nom_", i))$sec
but then, if we need to update the original object, we would also have to call assign. Instead of all this, it can be done more easily if we load the datasets into a list and loop over the list with lapply:
lst1 <- lapply(mget(paste0("Ech_repondants_nom_", 1:5)), function(dat)
  within(dat, {
    sec   <- as.numeric(interviewtime)
    min   <- round(sec / 60, 1)
    heure <- round(min / 60, 1)
  }))
It may be better to keep the data as a list, but if we need to update the original objects, use list2env:
list2env(lst1, .GlobalEnv)
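A minimal, self-contained sketch of this pattern, using two small made-up data frames in place of the real survey data:
# made-up stand-ins for the real data frames
Ech_repondants_nom_1 <- data.frame(interviewtime = c("120", "300"))
Ech_repondants_nom_2 <- data.frame(interviewtime = c("45", "600"))

lst1 <- lapply(mget(paste0("Ech_repondants_nom_", 1:2)), function(dat)
  within(dat, {
    sec   <- as.numeric(interviewtime)
    min   <- round(sec / 60, 1)
    heure <- round(min / 60, 1)
  }))

list2env(lst1, .GlobalEnv)  # only needed if the originals must be overwritten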
Ech_repondants_nom_[[i]] isn't actually selecting that data frame, because you can't refer to separate objects like that. Try creating a function that takes a data frame as an argument and then iterating over the data frames:
changing_time_stamp <- function(df){
  df$sec   <- as.numeric(df$interviewtime)
  df$min   <- round(df$sec / 60, 1)
  df$heure <- round(df$min / 60, 1)
  return(df)
}

# fetch each data frame by name, transform it, and write it back
for (i in 1:5) {
  df_name <- paste0("Ech_repondants_nom_", i)
  assign(df_name, changing_time_stamp(get(df_name)))
}
EDIT: I fixed some of the variable names in the function
Using assign with paste("date_", c, ...), I have created 35 data frames date_1 through date_35 (code below).
for (c in 1:nrow(datetable2)) {
  assign(paste("date_", c, sep = ""), dt2[which(dt2$Date == datetable2$Date[c]), ])
}
Now I want to reset each date_c's row names to 1:nrow(date_c). I used the code below, but it doesn't work: R says it cannot find "date_[d]". How should I fix the "date_[d]" issue in the loop below?
for (d in 1:nrow(datetable2)){
rownames(date_[d]) <- seq(length=nrow(date_[d]))
}
Get all the date_c data frames in a list, use lapply to iterate over it, and remove the row names. When we remove the row names, they are automatically recreated as 1:nrow(data).
result <- lapply(mget(ls(pattern = 'date_')), function(x) {
  rownames(x) <- NULL
  x
})
result is a list of data frames with the row names reset as we want. It is better to keep data in a list, as lists are easier to manage. If you still want the changes to be reflected in the original data frames, you can use list2env:
list2env(result, .GlobalEnv)
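To see the point about row names being recreated automatically, a quick sketch with a throwaway data frame:
x <- data.frame(v = 5:7)
rownames(x) <- c("r5", "r6", "r7")  # give it non-default row names
rownames(x) <- NULL                 # removing them regenerates 1:nrow(x)
rownames(x)
# [1] "1" "2" "3"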
setwd("C:\\Users\\DATA")
temp = list.files(pattern="*.dta")
for (i in 1:length(temp)) assign(temp[i], read.dta13(temp[i], nonint.factors = TRUE))
grep(pattern="_m", temp, value=TRUE)
Here I create a list of my dataset filenames and read them into R. I then attempt to use grep to find all variable names matching the pattern _m; obviously this doesn't work, because it simply returns all filenames containing _m. So essentially what I want is for my code to loop through the list of databases, find variables ending with _m, and return a list of the databases that contain these variables.
Now, I'm quite unsure how to do this; I'm quite new to coding and R.
Apart from needing to know in which databases these variables are, I also need to be able to make changes (reshape them) to these variables.
First, assign will not work as you think, because it expects a string (or character, as they are called in R). It will use the first element as the variable (see here for more info).
What you can do depends on the structure of your data. read.dta13 will load each file as a data.frame.
If you look for column names, you can do something like that:
myList <- character()
for (i in 1:length(temp)) {
# save the content of your file in a data frame
df <- read.dta13(temp[i], nonint.factors = TRUE)
# identify the names of the columns matching your pattern
varMatch <- grep(pattern="_m", colnames(df), value=TRUE)
# check if at least one of the columns match the pattern
if (length(varMatch)) {
myList <- c(myList, temp[i]) # save the name if match
}
}
If you look for the content of a column, you can have a look at the dplyr package, which is very useful when it comes to data frames manipulation.
A good introduction to dplyr is available in the package vignette here.
Note that in R, appending to a vector can become very slow (see this SO question for more details).
Here is one way to figure out which files have variables with names ending in "_m":
# setup
setwd("C:\\Users\\DATA")
temp = list.files(pattern="*.dta")
# logical vector to be filled in
inFileVec <- logical(length(temp))
# loop through each file
for (i in 1:length(temp)) {
# read file
fileTemp <- read.dta13(temp[i], nonint.factors = TRUE)
# fill in vector with TRUE if any variable ends in "_m"
inFileVec[i] <- any(grepl("_m$", names(fileTemp)))
}
In the last line of the loop body, names returns the variable names, grepl returns a logical vector indicating whether each variable name ends in "_m", and any collapses that to a single TRUE or FALSE indicating whether at least one name matched.
# print out these file names
temp[inFileVec]
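For example, on a hypothetical set of variable names, the $ anchor makes sure that only names ending in _m are matched, not names that merely contain an m somewhere:
vars <- c("height_m", "income", "m_score", "weight_m")
grepl("_m$", vars)
# [1]  TRUE FALSE FALSE  TRUE
any(grepl("_m$", vars))
# [1] TRUE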
Using a for loop, I was able to break my 1.1-million-row dataset in R into 110 tables of approximately 10,000 rows each, in the hope of getting R to handle the data better. I now want to run another for loop that assigns the values in each of these tables to a different data frame name.
My table names are:
Pom_1
Pom_2
Pom_3
...
Pom_110
What I want to do is create a for loop like the following:
for (i in 1:110)
{
Pom <- read.table(paste("Pom",i,sep = "_"))
for (j in 1:nrows(Pom))
{do something}
}
So I want to loop through the tables and assign the values of each Pom table to "Pom" so that I can then run a for loop on each subsection of Pom. The problem is that the read.table function does not seem to be the right one. Any ideas?
Can you give a more specific example of what you want to do within each data frame? You should avoid using the inner loop when possible; if you really need it, have a look at ?apply.
nrow instead of nrows
This is a generic solution using an example data.frame. The function you're looking for is assign; check its help page:
Pom <- data.frame(x = rnorm(30)) # original data.frame
n.tables <- 3 # number of new data.frames you want to create
Pom.names <- paste("Pom", 1:n.tables, sep = "") # names of all new data.frames
breaks <- nrow(Pom) / n.tables * 0:n.tables # row breaks of the original data.frame
for (i in 1:n.tables) {
  rows <- (breaks[i] + 1):breaks[i + 1] # which rows from Pom go into the new data.frame?
  assign(Pom.names[i], Pom[rows, , drop = FALSE]) # create the new data.frame; drop = FALSE keeps it a data.frame
}
ls()
[1] "breaks" "i" "n.tables" "Pom" "Pom.names" "Pom1"
[7] "Pom2" "Pom3" "rows"
I'm willing to bet the problem with your table call is that you aren't specifying the file extension (assuming Pom_1 - Pom_110 are files in your working directory, which I think they are since you're using read.table).
You can fix it by the following
fileExtension<-".xls" #specify your extension, I assume xls
for (i in 1:110)
{
tablename<-paste("Pom",i,sep = "_")
Pom <- read.table(paste(tablename, fileExtension, sep=""))
for (j in 1:nrow(Pom))
{do something}
}
Of course, that's assuming a couple of things about how everything in your problem is set up, but it's my best guess based on your description and code.
I'm having some trouble understanding how R handles subsetting internally and this is causing me some issues while trying to build some functions. Take the following code:
f <- function(directory, variable, number_seq) {
  ## Create an empty data frame
  new_frame <- data.frame()
  ## Append every data frame in the directory whose number is in number_seq to new_frame;
  ## the file variable specifies the path to the file
  for (i in number_seq) {
    file <- paste("~/", directory, "/", sprintf("%03d", i), ".csv", sep = "")
    x <- read.csv(file)
    new_frame <- rbind.data.frame(new_frame, x)
  }
  ## calculate and return the mean
  mean(new_frame[, variable], na.rm = TRUE)
}
While calculating the mean, I first tried to subset using the $ sign (new_frame$variable) and the subset function (subset(new_frame, select = variable)), but those would only return a None value. It only worked when I used new_frame[, variable].
Can anyone explain why the other subsetting approaches didn't work? It took me a really long time to figure it out, and even though I managed to make it work, I still don't know why it didn't work the other ways, and I really wanna look inside the black box so I won't have the same issues in the future.
Thanks for the help.
This behavior has to do with the fact that you are subsetting inside a function.
Both new_frame$variable and subset(new_frame, select = variable) look for a column in the data frame with the literal name variable.
On the other hand, new_frame[, variable] uses the value of the variable argument of f(directory, variable, number_seq) to select the column.
The dollar sign ($) can only be used with literal column names. That avoids confusion with
dd<-data.frame(
id=1:4,
var=rnorm(4),
value=runif(4)
)
var <- "value"
dd$var
In this case, if $ accepted either variables or literal column names, which one would you expect: the dd$var column, or the dd$value column (because var == "value")? That's why dd[, var] behaves differently: it evaluates var and uses its value, a character vector, rather than treating it as a column name, so dd[, var] gives you dd$value.
I'm not quite sure why you got None with subset(); I was unable to replicate that problem.
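If the goal is to pass a column name into a function as a string, as in f() above, here is a small sketch of the pattern that does work, using the dd and var objects from the example (col_mean is just an illustrative helper name):
col_mean <- function(df, variable) {
  # variable holds a string such as "value"; [[ ]] looks the column up by that string
  mean(df[[variable]], na.rm = TRUE)
}

col_mean(dd, "value")              # same result as mean(dd$value, na.rm = TRUE)
mean(dd[, var], na.rm = TRUE)      # equivalent using [ , ] with the var variable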