I am creating a data frame from a csv file. However, when I run it, it's not recognizing the objects in the file. It will recognize some of them, but not all.
smallsample <- data.frame(read.csv("SmallSample.csv", header = TRUE),
                          smallsample$age, smallsample$income,
                          smallsample$gender, smallsample$marital,
                          smallsample$numkids, smallsample$risk)
smallsample
It won't recognize marital or numkids, despite the fact that those are the column names in the table in the .csv file.
When you use read.csv, the output is already a data frame.
You can simply use smallsample <- read.csv("SmallSample.csv")
Result using a dummy csv file
  age income gender marital numkids      risk
1  32  34932 Female  Single       1 0.9611315
2  22  50535   Male  Single       0 0.7257541
3  40  42358   Male  Single       1 0.6879534
4  40  54648   Male  Single       3 0.568068
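The reason the original call fails: the arguments smallsample$age, smallsample$marital, etc. are evaluated before smallsample exists (or they pick up an older object in the workspace that lacks some of the columns), so R cannot find them all. Once the data frame is created, the columns are available directly. A minimal sketch, assuming the same file and column names:

# read.csv already returns a data frame; no data.frame() wrapper is needed
smallsample <- read.csv("SmallSample.csv", header = TRUE)

# the columns can be accessed once the object exists
smallsample$marital
smallsample$numkids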
I am given very big (around 10 GB each) datasets in both SAS and Stata format. I am going to read them into R for analysis.
Is there a way to show what variables (columns) they contain inside without reading the whole data file? I often only need some of the variables. I can view them of course from File Explorer, but it's not reproducible and takes a lot of time.
Both SAS and Stata are available on the system, but just opening a file might take a minute or so.
If you have SAS, run a PROC CONTENTS or PROC DATASETS to see the details of the dataset without opening it. You may want to do that anyway, so that you can verify variable types, lengths, and formats.
libname myFiles 'path to your sas7bdat files';
proc contents data=myfiles.datasetName;
run;
See below for the dta solution, which you can update to SAS using read_sas.
library(haven)
# read in first row of dta
dta_head <- read_dta("my_data.dta",
n_max = 1)
# get variable names of dta
dta_names <- names(dta_head)
After examining the names and labels of your dta file, you can remove the n_max = 1 option and read the file in full, optionally adding the col_select option to specify the subset of variables you wish to read in.
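For example, a sketch of the full read with a column subset (the variable names here are placeholders, not names from your file):

library(haven)

# col_select takes the same tidyselect syntax as dplyr::select()
dta_subset <- read_dta("my_data.dta",
                       col_select = c(age, income))

read_sas() accepts the same n_max and col_select arguments, so the identical trick works for the sas7bdat files.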
I am importing multiple Excel workbooks, processing them, and appending them subsequently. I want to create a temporary data frame (tempfile?) that holds nothing in the beginning and is appended to after each successive workbook is processed. How do I create such a temporary data frame?
I am coming from Stata, where I use tempfile a lot. Is there an R counterpart to Stata's tempfile?
As @James said, you do not need an empty data frame or tempfile; simply add newly processed data frames to the first data frame. Here is an example (based on csv files, but the logic is the same):
list_of_files <- c('1.csv','2.csv',...)
pre_processor <- function(dataframe){
# do stuff
}
library(dplyr)
dataframe <- pre_processor(read.csv('1.csv')) %>%
  rbind(pre_processor(read.csv('2.csv'))) %>%
  ...
Now if you have a lot of files or very complicated pre-processing, then you might have other questions (e.g. how to loop over the list of files or how to write the right pre-processing function), but those should be separate questions, and we would need more specifics (example data, code so far, etc.).
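If you do end up looping, here is a minimal sketch, assuming pre_processor is defined as above and the csv files sit in the working directory:

library(dplyr)

# process every csv and stack the results; bind_rows() replaces repeated rbind calls
list_of_files <- list.files(pattern = "\\.csv$")
dataframe <- list_of_files %>%
  lapply(read.csv) %>%
  lapply(pre_processor) %>%
  bind_rows()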
I am trying to remove bias from a microscopy analysis, so I want to make it so the experimenter doesn't know what the conditions are for the image they are looking at.
To do this I need to rename every file in a directory so they can't be identified, but I also need to be able to know what the original filename was subsequently.
I made a folder with three files in it to try this out. I got the file list, made a vector of the new names, and combined them into a data frame.
setwd("~/Desktop/folder1")
filename_list <- list.files("~/Desktop/folder1")
new_filenames <- c("anon1", "anon2", "anon3")

require(reshape2)
df1 <- melt(data.frame(filename_list, new_filenames))
View(df1)
I've also been able to change names using scripts from a previous question and R-bloggers, using sapply and file.rename. I got a little stuck with using wildcards to select the whole filename (minus extension), but I'm sure it's possible:
sapply(filename_list, FUN = function(eachPath){
  file.rename(from = eachPath,
              to = sub(pattern = "image_", replacement = "anon", eachPath))
})
How can I take the new_filenames vector and apply it to file.rename so that it corresponds to the original filenames in the df1 data frame? Or is there a better way to do this? Thanks.
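One possible approach, sketched below: file.rename() is vectorized, so the old and new name vectors can be passed in directly, and the data frame you already built doubles as the blinding key. The generated names and the key path are assumptions for illustration:

setwd("~/Desktop/folder1")
filename_list <- list.files("~/Desktop/folder1")

# keep the original extensions so the renamed files still open correctly
ext <- tools::file_ext(filename_list)
new_filenames <- paste0("anon", seq_along(filename_list), ".", ext)

# the key maps anonymous names back to originals; store it away from the experimenter
key <- data.frame(original = filename_list, anonymous = new_filenames)
write.csv(key, "~/Desktop/blinding_key.csv", row.names = FALSE)

file.rename(from = filename_list, to = new_filenames)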
I have a dataset created in an R session, that I want to 1) export as csv 2) save the readr-type column specifications separately. This will allow me to read this data later on, using read_csv() and specifying col_types from the file saved in 2).
Problem: one gets column specifications (the spec attribute) only for data read with a read_* function. It does not seem possible to obtain column specifications directly from a dataset created within R.
My workflow so far is:
1. Export the data: write_csv()
2. Read the specification from the exported file: spec_csv()
3. Save the column specification: write_rds()
4. Finally, read_csv(step_1, col_types = step_3)
But this is error-prone, as spec_csv() can get it wrong: it is only guessing, so when all values in a column are NA it has to assign an arbitrary (character) class. Ideally I would like to be able to extract column specifications directly from the original dataset, instead of writing and re-loading. How can I do that? I.e., how can I convert the classes of a data frame to a spec object?
Thanks!
Actual (inefficient) workflow:
library(tidyverse)
write_csv(iris, "iris.csv")
spec_csv("iris.csv") %>%
write_rds("col_specs_path.rda")
read_csv("iris.csv", col_types = read_rds("col_specs_path.rda"))
I'm writing a script to plot data from multiple files. Each file is named using the same format, where strings between “.” give some info on what is in the file. For example, SITE.TT.AF.000.52.000.001.002.003.WDSD_30.csv.
These data will be from multiple sites, so SITE, WDSD_30, or any other string may differ depending on where the data is from, though its position in the file name will always indicate a specific feature such as location or measurement.
So far I have each file read into R and saved as a data frame named the same as the file. I'd like to get something like the following to work: if there is a data frame in the global environment that contains WDSD_30, then plot a specific column from that data frame. The column will always have the same name, so I could write plot(WDSD_30$meas), and no matter what site's files were loaded in the global environment, the script would find the WDSD_30 file and plot the meas variable. My goal for this script is to be able to point it to any folder containing files from a particular site, and no matter what the site, the script will be able to read in the data and find files containing the variables I'm interested in plotting.
A colleague suggested I try using strsplit() to break up the file name and extract the element I want to use, then use that to rename the data frame containing that element. I'm stuck on how exactly to do this or whether this is the best approach.
Here's what I have so far:
site.files <- basename(list.files(pattern = ".csv", recursive = TRUE,
                                  full.names = FALSE))
sfsplit <- lapply(site.files, function(x) strsplit(x, ".", fixed = TRUE)[[1]])

for (i in 1:length(site.files)) assign(site.files[i], read.csv(site.files[i]))

for (i in 1:length(site.files))
  if (sfsplit[[i]][10] == grep("PARQL", sfsplit[[i]][10]))
    {assign(data.frame.getting.named.PARQL, sfsplit[[i]][10])}
  else if (sfsplit[[i]][10] == grep("IRBT", sfsplit[[i]][10]))
    {assign(data.frame.getting.named.IRBT, sfsplit[[i]][10])}
...and so on for each data frame I'd like to eventually plot from. Is this a good approach, or is there some better way? I'm also unclear on how to refer to the objects I made up for this example, data.frame.getting.named.xxxx, without using the entire filename as it was read into R. Is there something like data.frame[1] to generically refer to the first data frame in the global environment?
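A sketch of an alternative that sidesteps assign() and the global environment entirely: keep the data frames in a named list, keyed by the measurement token (the 10th dot-separated field in your example name), and retrieve entries by pattern. The meas column name is taken from your description:

# read everything into one named list instead of loose global objects
site.paths <- list.files(pattern = "\\.csv$", recursive = TRUE, full.names = TRUE)
data_list <- lapply(site.paths, read.csv)

# name each element by its measurement token, e.g. "WDSD_30"
names(data_list) <- sapply(strsplit(basename(site.paths), ".", fixed = TRUE),
                           `[`, 10)

# look up the data frame for a given measurement and plot its column
wdsd <- data_list[[grep("WDSD_30", names(data_list))[1]]]
plot(wdsd$meas)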