Making data out of one point in multiple CSV files in R

I'm trying to create a variable out of a single point in 5 different data tables.
I.e., I have data for mental disorders for each year in separate CSV files. How do I track just one variable (e.g. Autism) in each file and put it into one variable?
Here is what I have so far:
d2000 <- read.table("C:/AL00.csv")
d2001 <- read.table("C:/AL01.csv")
d2002 <- read.table("C:/AL02.csv")
d2003 <- read.table("C:/AL03.csv")
rownames(d2000) <- d2000[,3]
rownames(d2001) <- d2001[,3]
rownames(d2002) <- d2002[,3]
rownames(d2003) <- d2003[,3]
ASD = c(d2000["Autism","Total"],d2001["Autism","Total"],d2002["Autism","Total"])
This isn't working. I tried typing in just one of the data points:
>d2000["Autism","Total"]
[1] 2,763
Levels: 1,075 1,480 2,763
It outputs the correct number, but what are these "Levels"? Are they my problem, and if so, how do I fix them?

I would do something like this:
ll <- lapply(list.files(pattern="AL[0-9]+.*csv", full.names=TRUE),
             function(x) read.table(x, stringsAsFactors=FALSE))
res <- do.call(rbind, ll)[, "Autism"]
This will give you the Autism column in one vector. Then, to convert it to numeric, you can remove the thousands separators with a regular expression:
as.numeric(gsub(',','',res))
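As for the "Levels" in your output: read.table turned the comma-formatted numbers into a factor (a categorical type), which is exactly what stringsAsFactors=FALSE above prevents. A minimal sketch with made-up values showing the pitfall and the fix:
f <- factor(c("2,763", "1,480"))
as.numeric(f)                               # wrong: returns the level codes 2 1, not the values
as.numeric(gsub(",", "", as.character(f)))  # right: 2763 1480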

Related

read multiple ENVI files and combine them in one csv

I'm fairly new to working with R but am trying to get this done. I have dozens of ENVI spectral datasets stored in a directory. Each dataset is separated into two files. They all follow the same naming convention, i.e.:
ID_YYYYMMDD_350-200nm.asr
ID_YYYYMMDD_350-200nm.hdr
The task is to read the dataset, add two columns (ID and date taken from the filename), and store the results in a *.csv file. I got this to work for a single file (hardcoded):
library(caTools)
setwd("D:/some/path/software_scripts")
### filename without extension
name <- "011a_20100509_350-2500nm"
### split filename in area-id and date
flaeche<-substr(name, 0, 4)
date <- as.Date((substr(name,6,13)),"%Y%m%d")
### get values from ENVI-file in a matrix
spectrum <- read.ENVI(paste(name,".esl", sep = ""), headerfile=paste(name,".hdr", sep=""))
### add columns
spectrum <- cbind(Flaeche=flaeche,Datum=as.character(date),spectrum)
### CSV-Dataset with all values
write.csv(spectrum, file = paste(name, ".csv", sep=""))
I want to combine all available files into one *.csv file. I know that I have to use list.files, but I have no idea how to apply the read.ENVI function to each file and append the resulting matrices to one CSV.
Update:
library(caTools)
setwd("D:/some/path/mean")
files <- list.files() # change or leave totally empty if setwd() put you in the right spot
all_names <- sub("^([^.]*).*", "\\1", files) # strip off extensions
name <- unique(all_names) # get rid of duplicates from .esl and .hdr
# wrap your existing code in a function
mungeENVI <- function(name) {
  # split filename into area-id and date
  flaeche <- substr(name, 1, 4)
  date <- as.Date(substr(name, 6, 13), "%Y%m%d")
  # get values from the ENVI file as a matrix
  spectrum <- read.ENVI(paste(name, ".esl", sep=""), headerfile=paste(name, ".hdr", sep=""))
  # add columns
  spectrum <- cbind(Flaeche=flaeche, Datum=as.character(date), spectrum)
  return(spectrum)
}
# use lapply to 'loop' over each name
list_of_ENVIs <- lapply(name, mungeENVI) # returns a list
# use do.call(rbind, x) to turn it into a big data.frame
final_df <- do.call(rbind, list_of_ENVIs)
# now write output
write.csv(final_df, "all_results.csv")
You can find a sample dataset here: Sample dataset
I work with a lot of lab data where I can rely on the output files being in a reliable format (same column order, column names, header format, etc.). So this assumes that the ENVI files you have are similar to that. If your files are not like that, I'm happy to help with that too; I'd just need to see a dummy file or two.
Anyways here's the idea:
library(caTools)
library(lubridate)
library(magrittr)
setwd("~/Binfo/TST/Stack/") # adjust as needed
files <- list.files("data/", full.name = T) # adjust as needed
all_names <- gsub("\\.\\D{3}", "", files) # strip off extensions
names1 <- unique(all_names) # get rid of duplicates
# wrap your existing code in a function
mungeENVI <- function(name) {
  # parse area-id and date out of the filename
  f <- gsub(".*\\/(\\d{3}\\D)_.*", "\\1", name)
  d <- gsub(".*_(\\d+)_.*", "\\1", name) %>% ymd()
  # get values from the ENVI file as a matrix
  spectrum <- read.ENVI(paste(name, ".esl", sep=""), headerfile=paste(name, ".hdr", sep=""))
  # add columns
  spectrum <- cbind(Flaeche=f, Datum=as.character(d), spectrum)
  return(spectrum)
}
# use lapply to 'loop' over each name
list_of_ENVIs <- lapply(names1, mungeENVI) # returns a list
# use do.call(rbind, x) to turn it into a big data.frame
final_df <- do.call(rbind, list_of_ENVIs)
# now write output
write.csv(final_df, "data/all_results.csv")
Let me know if you have any problems and we can go from there. Cheers.
I edited my answer a bit; I think the problem you were hitting is that list.files() should have had the argument full.names = TRUE. I also adjusted your parsing method to be a little more defensive, using grep capture expressions. I tested the code with your two example files (four, really) and it builds out a large matrix (66743 elements). Also, I used lubridate; I think it's a better way to work with dates and times.
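To illustrate the capture expressions, here is how they behave on a hypothetical path following your naming convention:
library(lubridate)
name <- "data/011a_20100509_350-2500nm"   # hypothetical example path
gsub(".*\\/(\\d{3}\\D)_.*", "\\1", name)  # "011a" -- the area id
ymd(gsub(".*_(\\d+)_.*", "\\1", name))    # "2010-05-09" -- the date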

R text mining documents from multiple txt files

I have multiple txt files, each referring to a different month of the year (for many years). How could I analyze these files (text mining) separately from a single corpus (or something similar), while keeping track of the month-year reference? Thank you.
Here is an example I programmed for Game of Thrones subtitles. The subtitles are in the form of 60 text files, one file per episode, named in the form S01E01, where we wanted to keep the episode information.
The following code will read the files into a list, and will turn it into a dataframe with episode information and text. You will have to adapt it to your own problem.
library(plyr)
####### Read data ######
filenames <- list.files("Set7/Game of Thrones Subtitles", pattern="*", full.names=TRUE)
filenames_short <- list.files("Set7/Game of Thrones Subtitles", pattern="*", full.names=FALSE)
ldf <- alply(.data=filenames,.margins=1,.fun=scan,what = "character", quiet = T, quote = "")
names(ldf) <- filenames_short
# Loop over all filenames
# Turns list into two columns of a dataframe, episode and word
# create empty dataframe
df_got_subs <- as.data.frame(NULL)
for (i in 1:60) {
  # extract the list name for this file
  listenname <- filenames_short[i]
  vec_listenname <- rep.int(listenname, length(ldf[[i]]))
  # double-check
  cat("listenname: ", listenname, "\n")
  # turn the list element into a vector
  vec_subs <- as.vector(ldf[[i]])
  # create a dataframe from the vectors
  df_subs <- cbind.data.frame(vec_listenname, vec_subs, stringsAsFactors=FALSE)
  # attach to the "big" dataframe
  df_got_subs <- rbind.data.frame(df_got_subs, df_subs)
}
# test datastructure
str(df_got_subs)
# change column names
colnames(df_got_subs) <- c("episode","subs")
We did the whole text mining with the tidytext package from Julia Silge. I didn't post that code because she gives much better examples in this post:
http://juliasilge.com/blog/Life-Changing-Magic/
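For instance, a minimal sketch of that approach applied to the data frame built above, assuming the tidytext and dplyr packages are installed:
library(tidytext)
library(dplyr)
# one row per word, keeping the episode column
tidy_subs <- df_got_subs %>% unnest_tokens(word, subs)
# word counts per episode
tidy_subs %>% count(episode, word, sort = TRUE)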
I hope this helps with your problem.

How to read a horizontal file from R line by line to a table

I want to read a file as described at
http://snap.stanford.edu/data/wiki-RfA.html
into a data frame in R.
I know the function read.table, but I think it only works with vertical tables.
How should I read a file like the one above?
The file format is:
SRC:Guettarda
TGT:Lord Roem
VOT:1
RES:1
YEA:2013
DAT:19:53, 25 January 2013
TXT:'''Support''' per [[WP:DEAL]]: clueful, and unlikely to break Wikipedia.
So I want to read the file into a dataframe with 7 columns SRC, TGT, ... TXT.
Here is a method using readLines:
dataStartPosn <- 5
nfields <- 7
TXTmaxLen <- 1e3
eachColnameLen <- 3
#download and read lines
temp <- tempfile()
download.file("http://snap.stanford.edu/data/wiki-RfA.txt.gz",temp)
dataLines <- readLines(gzfile(temp, "r"))
library(plyr)
library(stringi)
#extract data
data <- stri_sub(dataLines, dataStartPosn, length=TXTmaxLen)
#extract colnames
colnames <- unname(sapply(dataLines[1:(nfields+1)], function(x) substring(x, 1, eachColnameLen)))
#form table
df <- data.frame(do.call(rbind, split(data, ceiling(seq_along(data)/(nfields+1)))))
#formatting
df <- setNames(df, colnames)
df[-(nfields+1)]
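To see what the split() step does, here is a toy illustration: with 7 fields plus a blank separator line, each record spans 8 lines, and ceiling(seq_along(data)/8) assigns the same group number to each block of 8:
x <- paste0("line", 1:16)            # 16 made-up lines = two 8-line records
split(x, ceiling(seq_along(x) / 8))  # $`1`: line1..line8, $`2`: line9..line16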
An alternative method, mentioned in the comments, was too slow:
SRC <- read.csv(pipe("sed -n '1~8p' wiki-RfA.txt"))
TGT <- read.csv(pipe("sed -n '2~8p' wiki-RfA.txt"))
Here is an elegant solution.
I saved your example to an ASCII file, "testdat". One thing you might want to consider first is that your delimiter also crops up in your data. This makes handling the data more difficult, and it should be fairly trivial for you to change this prior to writing the data in. I changed it to this...
SRC;Guettarda
TGT;Lord Roem
VOT;1
RES;1
YEA;2013
DAT;19:53, 25 January 2013
TXT;'''Support''' per [[WP:DEAL]]: clueful, and unlikely to break Wikipedia.
i.e. replaced the delimiting colons with semi-colons.
Then it's easy,
dat <- read.table("testdat", stringsAsFactors=FALSE, sep=";", quote="")  # quote="" so the apostrophes in TXT are not treated as quote characters
p <- as.data.frame(t(dat$V2), stringsAsFactors=FALSE)  # renamed from `t`, which would mask the transpose function
colnames(p) <- dat$V1
Then p is what you want.
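If your real file contains many such 7-line records back to back, here is a sketch generalizing the same idea, reusing dat from above (and assuming the semicolon substitution was applied to the whole file, with no blank lines between records):
wide <- as.data.frame(matrix(dat$V2, ncol=7, byrow=TRUE), stringsAsFactors=FALSE)
colnames(wide) <- dat$V1[1:7]  # SRC, TGT, VOT, RES, YEA, DAT, TXT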

Multiple text file processing using scan

I have this code that works for me (it's from Jockers' Text Analysis with R for Students of Literature). However, what I need to be able to do is automate this: I need to perform the "ProcessingSection" for up to thirty individual text files. How can I do this? Can I have a table or data frame that contains thirty occurrences of "text.v", one for each scan("*.txt")?
Any help is much appreciated!
# Chapter 5 Start up code
setwd("D:/work/cpd/R/Projects/5/")
text.v <- scan("pupil-14.txt", what="character", sep="\n")
length(text.v)
#ProcessingSection
text.lower.v <- tolower(text.v)
mars.words.l <- strsplit(text.lower.v, "\\W")
mars.word.v <- unlist(mars.words.l)
#remove blanks
not.blanks.v <- which(mars.word.v!="")
not.blanks.v
#create a new vector to store the individual words
mars.word.v <- mars.word.v[not.blanks.v]
mars.word.v
It's hard to help, as your example is not reproducible.
Assuming you're happy with the result of mars.word.v, you can turn this portion of the code into a function that accepts a single argument, the result of scan:
processing_section <- function(x){
  unlist(strsplit(tolower(x), "\\W"))
}
Then, if all .txt files are in the current working directory, you should be able to list them and apply the function to each with:
lf <- list.files(pattern="\\.txt$")  # escape the dot and anchor at the end so only .txt files match
lapply(lf, function(path) processing_section(scan(path, what="character", sep="\n")))
Is this what you want?
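If you also want a single data frame that keeps track of which file each word came from, here is a sketch building on the function above (the column names are illustrative):
word_lists <- lapply(lf, function(path)
  processing_section(scan(path, what="character", sep="\n", quiet=TRUE)))
df_words <- data.frame(
  file = rep(lf, lengths(word_lists)),  # repeat each filename once per word
  word = unlist(word_lists),
  stringsAsFactors = FALSE
)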

Create several data.frames via a for loop and name them accordingly

I want to apply a for-loop to every element of a list (station code of air quality stations) and create a single data.frame for each station with specific data.
My current code looks like this:
for (i in Stations) {
  i_PM <- data.frame(PM2.5$DateTime, PM2.5$i)
  colnames(i_PM)[1] <- "DateTime"
  i_AOT <- subset(MOD2011, MOD2011$Station_ID==i)
  i <- merge(i_PM, i_AOT, by="DateTime")
}
Stations consists of 28 elements. The result should be a data.frame for every station, with the columns DateTime, PM2.5, and several elements from MOD2011.
I just don't get it running as it's supposed to. I'm sure it's my fault; I couldn't find the specific answer on the internet.
Can you show me my mistake?
Try assign:
for (i in Stations) {
  dat <- data.frame(PM2.5$DateTime, PM2.5[[i]])  # [[i]] selects the column named in i; $i would look for a column literally named "i"
  colnames(dat)[1] <- "DateTime"
  dat2 <- subset(MOD2011, MOD2011$Station_ID==i)
  assign(paste(i, "_PM", sep=""), dat)
  assign(paste(i, "_AOT", sep=""), dat2)
  assign(i, merge(dat, dat2, by="DateTime"))
}
Note, however, that this is bad coding practice. You should reconsider your algorithm. For instance, use a list instead.
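For instance, a sketch of the list-based version, assuming Stations, PM2.5, and MOD2011 as in your question:
merged <- lapply(Stations, function(i) {
  i_PM <- data.frame(DateTime = PM2.5$DateTime, PM2.5 = PM2.5[[i]])
  i_AOT <- subset(MOD2011, Station_ID == i)
  merge(i_PM, i_AOT, by = "DateTime")
})
names(merged) <- Stations  # access one station's table as merged[["<station code>"]]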
