I'm importing a large dataset in R and curious if there's a way to quickly go through the columns and identify whether the column has categorical values, numeric, date, etc. When I use str(df) or class(df), the columns mostly come back mislabeled.
For example, some columns are labeled as numeric, but there are only 10 unique values in the column (ranging from 1-10), indicating that it should really be a factor. There are other columns that only have 11 unique values representing a rating, from 0-5 in 0.5 increments. Another column has country codes (172 values), which range from 1-230.
Is there a way to quickly identify whether a column should be a factor without going through each of the columns to understand the nature of the variable? (There are many columns in the dataset.)
Thanks!
At the moment, I've been using variations of the following code to catch the first two cases:
x <- as.numeric(df[, 51])   # work on a numeric copy of the column
len <- length(unique(x))    # number of unique values
rng <- max(x) - min(x)      # difference between min and max
ord <- rng / (len - 1)      # implied increment if the values are equally spaced
# subtract the second-largest unique value from the largest to find the
# actual increment (only uses the last two values)
u <- sort(unique(x))
step <- u[len] - u[len - 1]
ord == step                 # check if the actual increment equals the implied one
However, this approach assumes that the values are equally spaced (for example, in 0.5 increments) and only tests the spacing between the last two values. It wouldn't catch a column containing c(1, 2, 3.5, 4.5, 5, 6), which has 6 unique values but uneven spacing in the middle (not that this is common in my dataset).
It is not obvious how many distinct values would indicate a factor vs a numeric variable, but you can examine all variables to see what is in your data with
table(sapply(df, function(x) length(unique(x))))
and if you decide that the boundary between factor and numeric is k you can identify the factors with
which(sapply(df, function(x) length(unique(x)) < k))
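For example, once you settle on a cutoff, you can convert all flagged columns to factors in one step. This is a sketch on toy data; the cutoff k = 12 and the column names are assumptions you would tune for your own dataset:

```r
k <- 12  # assumed cutoff; tune for your data
df <- data.frame(rating = rep(seq(0, 5, by = 0.5), 10),  # 11 unique values
                 weight = rnorm(110))                    # genuinely numeric
is_fac <- sapply(df, function(x) length(unique(x)) < k)  # flag low-cardinality columns
df[is_fac] <- lapply(df[is_fac], factor)                 # convert them in place
str(df)  # rating is now a factor with 11 levels; weight stays numeric
```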
I am attempting to create a loop to calculate the length and average value of a column where the identifier equals a value in a list. I basically have a dataframe with an identifier, a number of occurrences, and additional data. I also have a list containing the unique identifiers (50 string values). I want to summarize the number of rows and the average value for each of those 50 identifiers.
So far I've tried creating two functions to calculate those values, and then integrating it into the loop but have been unsuccessful.
infoAvg <- function(x) {
  average <- mean(x)
  return(average)
}

infoLen <- function(x) {
  len <- length(x)  # avoid reusing the name "length"
  return(len)
}
Here x is the DF and y is the column I want to calculate on.
Does it make sense to take this approach, and if so how do I integrate it into a loop?
Thanks.
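In case it helps, a loop isn't strictly necessary for this kind of per-identifier summary: tapply applies a function to a column within each group. A sketch on toy data, assuming columns named Identifier and Value (the names are placeholders for your real ones):

```r
# toy data standing in for the real data frame (column names are assumptions)
df <- data.frame(Identifier = rep(c("A", "B"), c(3, 2)),
                 Value      = c(1, 2, 3, 10, 20))

# group-wise row counts and means, combined into one summary table
n   <- tapply(df$Value, df$Identifier, length)
avg <- tapply(df$Value, df$Identifier, mean)
summary_df <- data.frame(Identifier = names(avg),
                         n   = as.vector(n),
                         avg = as.vector(avg))
summary_df  # A: n = 3, avg = 2; B: n = 2, avg = 15
```

aggregate(Value ~ Identifier, df, mean) is an equivalent one-liner if you only need a single statistic.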
I have a panel (time series) data frame with a specific ID in the first column and a weekly employment status: unemployed (1) or employed (0).
I have 261 variables (the weekly columns) and 1,000,000 observations.
I would like to count the maximum number of times '1' occurs consecutively for every row in R.
I have looked a bit at rowSums and rle(), but as far as I can tell the sum of the row isn't what I need, since it is very important that the values are consecutive.
You can see an example of the structure of my data set here - just imagine more rows and columns
We can write a little helper function to return the maximum number of times a given value is consecutively repeated in a vector, with a convenient default value of 1 for this use case:
most_consecutive_val <- function(x, val = 1) {
  with(rle(x), max(lengths[values == val]))
}
Then we can apply this function to the rows of your data frame, dropping the first column (and any other columns that shouldn't be included):
apply(your_data_frame[-1], MARGIN = 1, most_consecutive_val)
If you share some easily imported sample data, I'll be happy to help debug in case there are issues. dput is an easy way to share a copy/pasteable subset of data, for example dput(your_data[1:5, 1:10]) would be a great way to share the first 5 rows and 10 columns of your data.
If you want to avoid warnings and -Inf results in the case where there are no 1s, use Ryan's suggestion from the comments:
most_consecutive_val <- function(x, val = 1) {
  with(rle(x), if (all(values != val)) 0 else max(lengths[values == val]))
}
I want to calculate ratios for each row in a data frame using values from two columns. The data are anatomical measurements from paired muscles, and I need the ratio of one muscle's measurement to the other's. Each row is an individual specimen, and each of the two columns holds the measurement for one of the two muscles. Which of the two muscles is larger varies among individuals (rows), so rather than simply dividing one column by the other, I need a script that always picks the smaller value (which may be in either column) for the numerator and the larger value (which can also be in either column) for the denominator. This might be simple, but I'm not so good with coding yet.
This doesn't work:
ratio <- DF$1/DF$2
I assume that what I need would loop through each row doing something like this:
ratio <- which.min(c(DF$1, DF$2))/which.max(c(DF$1, DF$2))
Any help would be greatly appreciated!
Assuming that you are only dealing with positive values, you could consider something like this:
# example data:
df <- data.frame(x = abs(rnorm(100)), y = abs(rnorm(100)))
# sort the two columns so that the smaller value always appears in the
# first column:
df_sorted <- t(apply(df, 1, sort))
# divide the first column by the second:
ratio <- df_sorted[, 1] / df_sorted[, 2]
Or, alternatively:
ifelse(df[, 1] > df[, 2], df[, 2] / df[, 1], df[, 1] / df[, 2])
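A third option worth knowing is pmin()/pmax(), which take elementwise minima and maxima across the two columns and avoid apply() entirely:

```r
# toy data; the real columns may have any names
df <- data.frame(x = c(2, 10, 5), y = c(4, 5, 5))
# smaller value over larger value, row by row
ratio <- pmin(df$x, df$y) / pmax(df$x, df$y)
ratio  # 0.5 0.5 1.0
```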
I am a new R user.
I have a dataframe consisting of 50 columns and 300 rows. The first column indicates the ID, while the 2nd through the last column are standard deviations (sd) of traits. The pooled sd for each column is given in the last row. For each column, I want to remove all values more than ten times the pooled sd, and I want to do this in one run. So far, the script below is what I have come up with to check whether a value is greater than the pooled sd. However, even the ID column (character) is being processed (resulting in all FALSE), and if I use raw_sd_summary[-1], I have no way of knowing which ID on which trait meets the criterion I'm looking for.
logic_sd <- lapply(raw_sd_summary, function(x) x>tail(x,1) )
logic_sd_df <- as.data.frame(logic_sd)
What shall I do? And how can I extract all those values labeled as TRUE (greater than pooled sd) that are ten times greater than the pooled SD (along with their corresponding ID's)?
I think your code won't work as intended: lapply does run over the data frame's columns, but it also processes the character ID column, and the comparison omits the factor of ten. Exclude the first column and change it to
logic_sd <- apply(raw_sd_summary[, -1], 2, function(x) x > 10 * tail(x, 1))
This will give you a logical matrix flagging every value more than ten times the pooled sd in the last row. You can recover the IDs by attaching them as row names:
rownames(logic_sd) <- raw_sd_summary[[1]]
You can remove/replace the unwanted values in the original table directly by
raw_sd_summary[-300, -1][logic_sd[-300, ]] <- NA  # or a new value
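To answer the second part of the question, which() with arr.ind = TRUE pulls out the flagged values together with their IDs and trait names. A sketch on toy data; it assumes the logical matrix covers only the trait columns, with the ID column excluded:

```r
# toy stand-in: 2 specimens plus a pooled-sd row, 2 trait columns
raw_sd_summary <- data.frame(ID = c("a", "b", "pooled"),
                             t1 = c(0.5, 60, 5),
                             t2 = c(1, 2, 3))
logic_sd <- apply(raw_sd_summary[, -1], 2, function(x) x > 10 * tail(x, 1))
hits <- which(logic_sd, arr.ind = TRUE)  # row/column positions of TRUEs
flagged <- data.frame(ID    = raw_sd_summary$ID[hits[, "row"]],
                      trait = colnames(logic_sd)[hits[, "col"]],
                      value = raw_sd_summary[, -1][logic_sd])
flagged  # only b's t1 (60 > 10 * 5) is flagged
```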
I have 34 subsets, each with a bunch of variables, and I am making a new dataframe with summary information about each variable for the subsets.
- Example: A10, T2 and V2 are all subsets with ~10 variables and 14 observations each, where one variable is population.
I want my new dataframe to have a column that says how many times per subset variable 2 (population) hit zero.
I've looked at a bunch of different count functions, but they all seem to make separate tables and count the occurrences of every value. I'm not interested in how many times each unique value shows up, because most of the values are unique; I just want to know how many times population hit zero within each subset of 14 observations.
I realize this is probably a simple thing to do but I'm not very good at creating my own solutions from other R code yet. Thanks for the help.
I've done something similar with a different dataset where I counted how many times 'NA' occurred in a vector where all the other values were numerical. For that I used:
na.tmin<- c(sum(is.na(s1997$TMIN)), sum(is.na(s1998$TMIN)), sum(is.na(s1999$TMIN))...
This created a column (na.tmin) holding the number of times each subset recorded NA instead of a number. I'd like to count the number of times the value 0 occurs, but is.0 is of course not a function, since 0 is a numeric value. Is there a function that will count the number of times a specific value shows up? If not, should I use the count-occurrences-of-unique-values approach?
Perhaps:
sum(abs(s1997$TMIN) < 0.00000001, na.rm = TRUE)
It's safer to use a tolerance value unless you are sure that your values are integers, and na.rm = TRUE guards against the NAs you mentioned. See R FAQ 7.31.
sum( abs( pi - (355/113+seq(-0.001, 0.001, length=1000 ) ) )< 0.00001 )
[1] 10
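Applied to the original problem, you can count near-zero population values across all subsets at once by keeping the subsets in a named list. A sketch; the subset and column names are assumptions:

```r
# toy subsets standing in for A10, T2, etc. (names are assumptions)
subsets <- list(A10 = data.frame(population = c(0, 5, 0, NA)),
                T2  = data.frame(population = c(1, 2, 3, 0)))
# one near-zero count per subset, skipping NAs
zero_counts <- sapply(subsets, function(s) sum(abs(s$population) < 1e-8, na.rm = TRUE))
zero_counts  # A10: 2, T2: 1
```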