For the weekly reports I generate, I create a .csv file for my outputs, which has 9 columns and 7 rows.
I use this command to create my file:
write.csv(table, paste('home/weekly_',start.date,'.csv',sep=''), row.names=F)
Note: 'table' is a matrix (I believe that's the right R terminology)
I would like to add a footnote/note under the table in this file. Would this be possible?
For example, if I were to create a text file instead of a .csv file, I would use the following commands:
cat("Number of participants not eligible:")
cat(length(which(tbl[,'Reg_age_dob'] <= 18 & as.Date(tbl[,'DateWithdrew']) >= '2013-01-01' & as.Date(tbl[,'DateWithdrew']) < '2013-04-01' & as.Date(tbl[,'QuestionnaireEndDate']) < '2013-01-01')))
cat("\n")
How would I do this to appear under the table in a .csv output file?
After writing the CSV part, just append the rest as new lines using
write("Footer",file="myfile",append=TRUE)
Solution from here: Add lines to a file
But be aware that a CSV parser will be upset if you do not mark the extra lines as comments correctly.
It might be better to use a second file for this purpose.
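Putting the two steps together, a minimal sketch (n.ineligible stands in for the count computed with the cat() expression above):
out <- paste('home/weekly_', start.date, '.csv', sep = '')
write.csv(table, out, row.names = FALSE)
# Append the footnote as plain lines below the table.
write("", file = out, append = TRUE)    # blank spacer row
write(paste("Number of participants not eligible:", n.ineligible), file = out, append = TRUE)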
I wish to open and read the following text file in Scilab (version 6.0.2).
The original file is an .xlsx that I have converted to both .txt and .csv through Excel to facilitate opening & working with it in Scilab.
Using both fscanfMat and csvRead, Scilab reads only the first column, as NaN. I understand why the first column is considered NaN, but I do not see why the rest of the document isn't read. Columns 2 and 3 are of particular interest to me.
For csvRead, I used :
M=csvRead(chemin+filename," ",",",[],[],[],[],7);
to skip the 7-row header.
Could it be something to do with the way in which the file has been formatted?
For anyone able to help, I have uploaded an example .txt file and the original .xlsx file.
Files available for download here: Excel and Text files
If you convert your .xlsx file into an .xls one with Excel, you can read it with the readxls function.
Your separator is a tabulation character (ASCII code 9). Use the following command:
M=csvRead("Probe1_350N_2S.txt",ascii(9),",",[],[],[],[],7);
I imported my inputs from a "Table1.txt" file using read.table, then worked on my table. Now I would like to save my outputs to a new text file, "Table1Modified.txt", using write.table, keeping everything in the same format.
I would like to check that the files "Table1.txt" and "Table1Modified.txt" are in exactly the same format (number of digits, uppercase/lowercase, ...).
If you would like to compare the contents of two files in R, you can use diffr() from the diffr package. It will point out the content that differs. Is this what you are looking for?
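A minimal sketch, assuming both files are in the working directory (install the package first if needed):
# install.packages("diffr")   # once
library(diffr)
# Open a side-by-side diff of the two files; identical files show no changes.
diffr("Table1.txt", "Table1Modified.txt")
# For a plain TRUE/FALSE check of exact, line-for-line equality:
identical(readLines("Table1.txt"), readLines("Table1Modified.txt"))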
I have a csv like this:
"Data,""Ultimo"",""Apertura"",""Massimo"",""Minimo"",""Var. %"""
"28.12.2018,""86,66"",""86,66"",""86,93"",""86,32"",""0,07%"""
What is the solution for importing correctly please?
I tried with read.csv("IT000509408=MI Panoramica.csv", header=T,sep=",", quote="\"") but it doesn't work.
Each row in your file is encoded as a single csv field.
So instead of:
123,"value"
you have:
"123,""value"""
To fix this you can read the file as csv (which will give you one field per row without the extra quotes), and then write the full value of that field to a new file as plain text (without using a csv writer).
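A minimal sketch of that two-step repair (the file name is taken from the question; "unwrapped.csv" is a placeholder):
# Step 1: each physical line parses as one CSV field, which strips the
# outer quotes and collapses the doubled inner quotes.
wrapped <- read.csv("IT000509408=MI Panoramica.csv", header = FALSE, stringsAsFactors = FALSE)
# Step 2: write those unwrapped lines back out as plain text.
writeLines(wrapped[[1]], "unwrapped.csv")
# Step 3: the file is now ordinary CSV; note the numbers still use
# decimal commas, so they come in as character columns.
df <- read.csv("unwrapped.csv", header = TRUE, stringsAsFactors = FALSE)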
I have a series of R scripts which all do very different things to the same .txt file. For various reasons I don't want to combine them into a single file. The name of the input text file changes from time to time, which means I have to change the file path in all the scripts by hand. Is there a way of telling R to look for the path name in a text file, so that I only have to change the text file rather than all the scripts? In other words, going from:
df <- read.delim("~/Desktop/Sequ/Blabla.txt", header=TRUE)
to
df <- get the path to read the text file from here
OK. Sorted this one in about 5 seconds. Oops
Just keep the path in a one-line text file and pull it out with readLines (source() is for running R scripts, so it won't work here):
df <- read.delim(readLines("~/Desktop/Sequ/Plots/Path.txt", n = 1))
Easy
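Spelled out, a minimal sketch (paths as in the question; the contents of Path.txt are a placeholder):
# ~/Desktop/Sequ/Plots/Path.txt contains a single line, e.g.:
# ~/Desktop/Sequ/Blabla.txt
path <- readLines("~/Desktop/Sequ/Plots/Path.txt", n = 1)
df <- read.delim(path, header = TRUE)
Now only Path.txt needs editing when the input file name changes.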
I've got a tab-delimited text file which I generated by pasting a table from an Excel sheet into a text file, and I'm trying to read the data into R on a Mac. I get the following output:
system.file("path/to/file.txt")
[1]""
no lines available in input
If I try loading the text file using the 'Source script or load data in R' button, I get:
1: col1 col2
^
/path/to/file: unexpected symbol
I thought this might be the tabs but then I added
sep='\t'
to my read.table line and that still doesn't work. Any suggestions?
The data is in the format of a matrix, with no entry in the first row/first column position; the row names are in the first column.
The easiest way I find to figure out this path stuff is to mess about with getwd() and setwd(). First, type
getwd()
in your R console. This will give your working directory. It also gives you an idea of how to specify the path to your file! The function setwd() sets the working directory.
Now that you have the correct path in the correct format, you just need to use:
## For csv files
read.csv(....)
## For tab-delimited files
read.delim(....)
## For other files, read.table is the general function - you can set `sep` to "\t" yourself if you wish.
read.table(....)
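For this particular file, a minimal sketch (the folder and file names are placeholders):
getwd()                        # see where R is currently looking
setwd("~/Desktop/data")        # point it at the folder holding your file
dat <- read.delim("file.txt", header = TRUE)
# Because the header row has one entry fewer than the data rows,
# read.delim automatically uses the first column as row names.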