Replace column names with extracted strings - r

I'm trying to replace some of the columns in my data frame with extracted strings from each column name. This is my current data frame:
Date      Time   Temp  ActivityLevelActivity  ExplainActivityvalues4  AppetiteLevelAppetite
10/22/21  10:26  76    4                      Activity was low        8
10/23/21  8:42   79    3                      Activity was low again  7
I would like to replace the "ActivityLevelActivity" and "AppetiteLevelAppetite" column names with just "Activity" and "Appetite". I would like to change the "ExplainActivityvalues4" to "Activity_Comments".
I have tried:
gsub("Level", "[^L]+", names(df))
gsub("Explain", "(?<=\\n)[[:alpha:]]+(?<=\\v)", names(df))
I used "Level" and "Explain" as the patterns because the word "Level" is included in every column name where I would just like to take the first word. "Explain" is included for every column name where I would like to take the middle word and add "_Comments".
Essentially, I would like the new data frame to look like this:
Date      Time   Temp  Activity  Activity_Comments       Appetite
10/22/21  10:26  76    4         Activity was low        8
10/23/21  8:42   79    3         Activity was low again  7
EDIT:
To explain further, here are all of my column names:
names(df) <- c("Date", "Time", "Temp", "ActivityLevelActivity", "ExplainActivityvalues4", "AppetiteLevelAppetite", "ExplainAppetitevalues4", "ComfortLevelComfort", "ExplainComfortvalues4", "DemeanorLevelDemeanor", "ExplainDemeanorvalues4", "CooperationLevelCooperation", "ExplainCooperationvalues4", "HygieneLevelHygiene", "ExplainHygienevalues4", "MobilityLevelMobility", "ExplainMobilityvalues4")

Since you only have three columns to rename and there's not really much of a shared pattern here, it would just be easier to use rename() directly.
# library(dplyr)
df %>%
  rename(
    Activity = ActivityLevelActivity,
    Appetite = AppetiteLevelAppetite,
    Activity_Comments = ExplainActivityvalues4
  )
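Given the full list of column names in the edit, a pattern-based rename may scale better than spelling each one out. A minimal sketch with sub(), assuming every "...Level..." column should keep only its leading word and every "Explain...values4" column should become "..._Comments":
nms <- names(df)
# "ActivityLevelActivity" -> "Activity": keep the word before "Level"
nms <- sub("^([[:alpha:]]+)Level.*$", "\\1", nms)
# "ExplainActivityvalues4" -> "Activity_Comments"
nms <- sub("^Explain([[:alpha:]]+)values4$", "\\1_Comments", nms)
names(df) <- nms
Columns like Date, Time, and Temp match neither pattern and pass through unchanged.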

Related

R: Two Identically Structured Excel Files Return Different Data Types in Data Frames

I have two different Excel files, excel1 and excel2.
I am reading them in using separate but identical calls:
df1<- readxl::read_xlsx("excel1.xlsx", sheet= "Ad Awareness", skip= 7)
df2<- readxl::read_xlsx("excel2.xlsx", sheet= "Ad Awareness", skip= 7)
However, when I run head() on each, here is what df1 returns:
  calDate             Score
  <dttm>              <dbl>
1 2016-10-17 00:00:00  17.8
2 2016-10-18 00:00:00  17.2
3 2016-10-19 00:00:00  20.3
And here is what df2 returns:
  calDate Score
    <dbl> <lgl>
1   43025 NA
2   43026 NA
3   43027 NA
Any reason why the data types are being read in differently? There is nothing different about the files.
read_xlsx() will guess the variable types based on your data (see the readxl documentation for more information).
So what you are describing could be due to:
- a different amount of data in the two files (not enough data in one of them to arrive at a correct guess)
- changes you might have made in Excel to the cell format (those changes are not always visually obvious in Excel)
Without seeing your data, it is hard to say more than this.
But you can control this with the col_types argument:
col_types: Either ‘NULL’ to guess all from the spreadsheet or a
character vector containing one entry per column from these
options: "skip", "guess", "logical", "numeric", "date",
"text" or "list". If exactly one ‘col_type’ is specified, it
will be recycled. The content of a cell in a skipped column
is never read and that column will not appear in the data
frame output. A list cell loads a column as a list of length
1 vectors, which are typed using the type guessing logic from
‘col_types = NULL’, but on a cell-by-cell basis.
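If both files are known to hold a date column and a numeric score, you can pin the types instead of guessing; a minimal sketch, where the two col_types entries are an assumption based on the head() output above:
df1 <- readxl::read_xlsx("excel1.xlsx", sheet = "Ad Awareness", skip = 7,
                         col_types = c("date", "numeric"))
df2 <- readxl::read_xlsx("excel2.xlsx", sheet = "Ad Awareness", skip = 7,
                         col_types = c("date", "numeric"))
With the types pinned, both data frames should come back as <dttm> and <dbl> regardless of how much data the guesser sees.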

How do I replace values in an R dataframe column with a corresponding value?

Ok, so I have a dataframe that I downloaded from Pew Research Center. One of the columns (called 'cregion') contains a series of numbers from 1-56, with each number corresponding to a geographic location in the U.S. Most of these locations are states, and the additional 6 are at the sub-state level. So, for example, the number '1' corresponds to 'Alabama', and '11' corresponds to the 'District Of Columbia'.
What I'd like to do is replace each of those numbers in the 'cregion' column with the ACTUAL name of the region it corresponds to. Unfortunately, there is no column in this data frame that I can use to swap the values, as the key for which number corresponds to which region exists completely separately (word document). I'm new to R and while I've been searching for a few hours for the best way to go about this, I can't seem to find a method that would work (or I just don't understand the explanation). Can anybody suggest a method to me?
If you have a vector of the state names as strings called statevec whose ith element corresponds to cregion i, and your data frame is named dat, just do
dat <- data.frame(cregion = sample(1:50), stuff = runif(50))
head(dat)
#   cregion       stuff
# 1      25 0.665843896
# 2      11 0.144631131
# 3      13 0.691616240
# 4      28 0.507454243
# 5       9 0.416535139
# 6      30 0.004196311
statevec <- state.name
dat$cregion <- statevec[dat$cregion]
head(dat)
#      cregion       stuff
# 1   Missouri 0.665843896
# 2     Hawaii 0.144631131
# 3   Illinois 0.691616240
# 4     Nevada 0.507454243
# 5    Florida 0.416535139
# 6 New Jersey 0.004196311
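One caveat: state.name has only 50 entries while cregion runs from 1 to 56, so the six sub-state codes would index past the end and come back as NA. A safer variant is a named lookup vector transcribed from the word-document key; the two entries below are just the examples given in the question:
# Hypothetical key transcribed from the word document; fill in all 56 codes
region_key <- c("1" = "Alabama", "11" = "District Of Columbia")
dat$cregion <- region_key[as.character(dat$cregion)]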

How to deal with non-consecutive (non-daily) dates in R, while looping?

I am trying to write a script that loops through month-end dates and compares associated fields, but I am unable to find a way to do this.
I have my data in a flatfile and subset based on 'TheDate'
For instance I have:
date.range <- subset(raw.data, observation_date == theDate)
Say theDate = 2007-01-31.
I want to find the next month included in my data flatfile which is 2007-02-28. How can I reference this in my loop?
I currently have:
date.range.t1 <- subset(raw.data, observation_date == theDate+1)
This doesn't work, obviously, as my data is not daily.
EDIT:
To make it more clear, my data is like below
ticker  observation_date  Price
ADB     31/01/2007        1
ALS     31/01/2007        2
ALZ     31/01/2007        3
ADB     28/02/2007        2
ALS     28/02/2007        5
ALZ     28/02/2007        1
I am using a loop, so I want to skip from 31/01/2007 to 28/02/2007 by recognising it is the next date, and use that value to subset my data.
First get the unique values of the date like so:
unique_dates <- unique(raw.data$observation_date)
Then sort these unique dates (note the format string matches the dd/mm/yyyy dates in your data):
unique_dates_ordered <- unique_dates[order(as.Date(unique_dates, format = "%d/%m/%Y"))]
Now you can subset based on the index of unique_dates_ordered, i.e.
subset(raw.data, raw.data$observation_date == unique_dates_ordered[i])
where i = 1 for the first date, i = 2 for the second date, etc.
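Putting this into the loop from the question, a rough sketch (assuming raw.data and the observation_date column shown above; each iteration compares one month-end to the next one present in the file):
unique_dates <- unique(raw.data$observation_date)
unique_dates_ordered <- unique_dates[order(as.Date(unique_dates, format = "%d/%m/%Y"))]
for (i in seq_len(length(unique_dates_ordered) - 1)) {
  date.range    <- subset(raw.data, observation_date == unique_dates_ordered[i])
  date.range.t1 <- subset(raw.data, observation_date == unique_dates_ordered[i + 1])
  # compare the associated fields of date.range and date.range.t1 here
}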

Reshape specific rows into columns in R

My sample data frame would look like the following:
1  Number    Type    Code    Reason
2  0123      06      09      010
3  Date      Amount  Damage  Act
4  08/31/16  10,000  Y       N
5  State     City    Zip     Phone
6  WI        GB      1234    Y
I want to make rows 1, 3, and 5 column names and have the data below each fall into each column, respectively. I was looking into the reshape function, but I only saw examples where an entire column of values needed to be individual columns. So I wasn't sure what to do in this scenario--apologies if it's obvious.
Here is the desired output:
1  Number  Type  Code  Reason  Date      Amount  Damage  Act  State  City  Zip   Phone
2  0123    06    09    010     08/31/16  10,000  Y       N    WI     GB    1234  Y
Thanks
As some people have commented, you could build a data frame out of the rows of your starting data frame, but I think it's a little easier to work on the lines of text.
If your starting file looks something like this
Number , Type , Code ,Reason
0123 , 06 , 09 , 010
Date , Amount , Damage , Act
08/31/16 , 10000 , Y , N
State , City , Zip , Phone
WI , GB , 1234, Y
we can read it in with each line as an element of a character vector:
lines <- readLines("start.csv")
make all the odd lines into a single line:
oddind <- seq(from=1, to= length(lines), by=2)
namelines <- paste(lines[oddind], collapse=",")
make all the even lines into a single line:
datlines <- paste(lines[oddind+1], collapse=",")
make those lines into a new CSV to read:
writeLines(text= c(namelines, datlines), con= "nice.csv")
print(read.csv("nice.csv"))
This gives
Number Type Code Reason Date Amount Damage Act State
1 123 6 9 10 08/31/16 10000 Y N WI
City Zip Phone
1 GB 1234 Y
So, it's all in one row of the data frame and all the variable names show up correctly in the data frame.
The benefits of this strategy are:
- It will work for starting CSV files where the number of variables isn't a multiple of 4.
- It will work for starting CSV files with any number of rows.
- There is no chance of weird dynamic casting happening with unlist() or as.character().
You can create a data frame roughly appearing like that (although it will necessarily have column names). The columns are probably factors if you used one of the standard read.* functions without stringsAsFactors=FALSE, hence the need to convert with as.character:
dat=read.table(text="1 Number Type Code Reason
2 0123 06 09 010
3 Date Amount Damage Act
4 08/31/16 10,000 Y N
5 State City Zip Phone
6 WI GB 1234 Y")
Then you can set the odd-numbered rows as the names of a vector of values taken from the even-numbered rows:
setNames(unlist(lapply(dat[!c(TRUE, FALSE), ], as.character)),
         unlist(lapply(dat[ c(TRUE, FALSE), ], as.character)))
  1    3    5  Number        Date  State  Type
"2"  "4"  "6"  "0123"  "08/31/16"   "WI"  "06"
  Amount  City  Code  Damage     Zip  Reason  Act
"10,000"  "GB"  "09"     "Y"  "1234"   "010"  "N"
Phone
  "Y"
The !c(TRUE,FALSE) and its logical complement in the next extract operation get magically recycled along all the rows. Obviously there would be better ways of doing this if you had posted a version of the text file rather than saying that the starting point was a dataframe. You would also need to remove what were probably rownames. If you want a "clean" solution, post either dput(.) of your dataframe or the raw text file.
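If the goal is the one-row data frame shown in the question rather than a named vector, one possible follow-up under the same assumptions (column 1 of dat holds what were probably row names, so it is dropped here):
vals <- setNames(unlist(lapply(dat[!c(TRUE, FALSE), -1], as.character)),
                 unlist(lapply(dat[ c(TRUE, FALSE), -1], as.character)))
as.data.frame(as.list(vals), stringsAsFactors = FALSE)
The columns come out in by-column order (Number, Date, State, Type, ...) rather than the left-to-right order of the desired output, but every name keeps its matching value.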

Collecting data in one row from different csv files by the name

It's hard to explain what exactly I want to achieve with my script but let me try.
I have 20 different csv files, so I loaded them into R:
tbl = list.files(pattern="*.csv")
list_of_data = lapply(tbl, read.csv)
then with your help I combined them into one and removed all of the duplicates:
data_rd <- subset(transform(all_data, X = sub("\\..*", "", X)),
                  !duplicated(X))
I have now 1 master table which includes all of the names (Accession):
Accession
AT1G19570
AT5G38480
AT1G07370
AT4G23670
AT5G10450
AT4G09000
AT1G22300
AT1G16080
AT1G78300
AT2G29570
Now I would like to find each accession in the other csv files and put the data for that accession in the same row. There are about 20 csv files and each csv has about 20 columns, so in some cases it might give me 400 columns. It doesn't matter how long it takes. It has to be done. Is it even possible to do with R?
Example:
           First csv              Second csv             Third csv
Accession  Size  Lenght  Weight   Size  Lenght  Weight   Size  Lenght  Weight
AT1G19570  12    23      43       22    77      666      656   565     33
AT5G38480
AT1G07370  33    22      33       34    22
AT4G23670
AT5G10450
AT4G09000  12    45      32
AT1G22300
AT1G16080
AT1G78300  44    22      222
AT2G29570
It looks like a hard task to do. Probably it has to be done with a loop. Any ideas?
This is a merge loop. Here is rough R code that will inefficiently grow with each merge.
Begin as before:
tbl = list.files(pattern="*.csv")
list_of_data = lapply(tbl, read.csv)
tbl = list_of_data[[1]]
for (i in 2:length(list_of_data)) {
  tbl = merge(tbl, list_of_data[[i]], by="Accession", all=T)
}
The matching column names (not used as a key) will be renamed with a suffix (.x, .y, and so on), and the all=T argument ensures that whenever a new Accession key is merged, a new row is made and the missing cells are filled with NA.
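The same fold can be written more compactly with Reduce(), under the same assumption that every file has an Accession column:
tbl <- Reduce(function(x, y) merge(x, y, by = "Accession", all = TRUE),
              list_of_data)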
