Importing manual gating from FlowJo to R (flow cytometry analysis) - r

I am having trouble reading an .xls and a .wspt file into R. Each contains a table of a flow cytometry manual gating schema. My code is as follows:
flowData = system.file("extdata", package = "flowWorkspace")
file = list.files(flowData, pattern = "manual.xls", full = TRUE)  # or pattern = "manual.wspt"
ws = openWorkspace(file)
When I try to read the .xls file with openWorkspace, I get this error:
Start tag expected, '<' not found
I have seen this error in another post, but it doesn't seem to explain my case.
When opening the .wspt file, I receive this error:
Error in data.frame(...) : arguments imply differing number of rows: 1, 0
Both files (.xls and .wspt) contain the same information; I just wanted to try reading each of them.
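Note that list.files() takes a single regular-expression pattern, so both files can be matched with one alternation pattern rather than two calls. A minimal base-R sketch (the file names here are created in a temp directory purely to demonstrate the pattern):

```r
# Match either extension with one regex alternation.
d <- tempdir()
file.create(file.path(d, c("manual.xls", "manual.wspt", "other.txt")))
files <- list.files(d, pattern = "^manual\\.(xls|wspt)$", full.names = TRUE)
basename(files)  # both manual.* files, other.txt excluded
```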


Unable to read Landsat 5 metadata using readMeta() of R

I am following a tutorial on remote sensing using R; the tutorial is available here (p. 44).
I would like to read the metadata for a Landsat 5 image, specifically for 1984-06-22 path/row 170/072. The Landsat Product ID is LT05_L2SP_170072_19840622_20200918_02_T1. Here is the L5 metadata.
I am using the readMeta function from the RSToolbox package. The work should be pretty straightforward in that I put in the path to my metadata file and specify raw = F so that the metadata can be put in a format for further analyses.
mtl <- readMeta(file = ".", raw = F)
After doing this (reading the L5 metadata with readMeta), I get this error:
Error in `.rowNamesDF<-`(x, value = value) : invalid 'row.names' length
Now, of course, there are many ways of killing a rat, so I used the method here, whereby the read.delim function reads the metadata file. This brings in a data frame with all the metadata values. However, when this data frame is passed to the radCor function to convert the L5 DNs to top-of-atmosphere radiance, the following error appears:
Error in radCor(june_landsat2, metaData = metadata_june, method = "rad") :
metaData must be a path to the MTL file or an ImageMetaData object (see readMeta)
It seems radCor accepts nothing but what readMeta returns or the path to the MTL file itself; not even the result from read.delim will do. Because the first readMeta error mentioned a row.names length issue, I thought deleting the last row of the metadata file (which has no value) would solve it, but that brings more complicated errors.
In short, I would like to find a way to make readMeta read my L5 metadata file, since readMeta's result is used elsewhere in the tutorial. Thanks.
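One automatable thing worth trying before readMeta: strip blank or whitespace-only lines from the MTL file, since the row.names length error can arise when not every line parses as a key/value pair. This is a hedged sketch, not a confirmed fix; fix_mtl is an invented helper name, and it assumes the MTL is plain "KEY = VALUE" text:

```r
# Hypothetical helper: drop blank/whitespace-only lines from an MTL file
# before handing it to readMeta(). writeLines() adds a final newline,
# which also avoids an incomplete final line.
fix_mtl <- function(path) {
  lines <- readLines(path, warn = FALSE)
  writeLines(lines[nzchar(trimws(lines))], path)
  path
}
```

If readMeta still fails after cleaning the file, comparing it line by line against a known-good MTL file would be the next step.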

readtext returns error when reading too many .rtf files

I am trying to use readtext in R to import over 13,000 .rtf files, but I receive the error message below.
uk <- readtext("/Users/path/*.rtf",
docvarsfrom = "filenames",
docvarnames = c("country", "year", "id"),
dvsep = "_")
Error in chartr(.cptable[[cpname]]$before, .cptable[[cpname]]$after, out[parsed$toconv]) :
invalid input '￾' in 'utf8towcs'
When I applied the same code to a test folder containing only 1,000 files, it worked fine. However, when I increased the number of files in the folder to 5,000, the same error returned. The filenames I'm importing are formatted as uk_1992_1.rtf or uk_2010_3568.rtf, as shown in the link below.
filename (1,000)
My questions are:
Is this just a matter of trying to import too many files at once?
Is there a way to fix this code to allow more files to be imported at once?
Is there a workaround if there is no way to fix the code?
Apologies if the question has been asked elsewhere; I looked for a similar question but did not find one. I can split the files into several smaller folders (I have tried this, and it works), but there are more countries with the same number of files that will need to be processed and analysed the same way. TIA!

R save() not producing any output but no error

I am brand new to R and I am trying to run some existing code that should clean up an input .csv and then save the cleaned data to a different location as an .RData file. This code ran fine for the previous owner.
The code seems to be pulling the .csv and cleaning it just fine. It also looks like the save runs (there are no errors), but there is no output in the specified location. I thought maybe R was having a hard time finding the location, but it pulls the input data fine and the destination is just a subfolder.
After a full day of extensive Googling, I can't find anything related to a save just not working.
Example code below:
save(data, file = "C:\\Users\\my_name\\Documents\\Project\\Data.RData", sep="")
Hard to believe you don't see any errors - unless something has switched errors off:
> data = 1:10
> save(data, file="output.RData", sep="")
Error in FUN(X[[i]], ...) : invalid first argument
It's a misleading error: the problem is actually the third argument, sep, which save() does not accept. Remove it and it works:
> save(data, file="output.RData")
>
sep is an argument for writing CSV files, where it separates columns. save() writes binary data, which has no rows and columns.
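A minimal round trip confirming the corrected call works end to end (a temp file stands in for the Documents path here):

```r
data <- 1:10
f <- tempfile(fileext = ".RData")
save(data, file = f)  # no sep argument; save() doesn't take one
rm(data)
load(f)               # restores the object under its saved name
data                  # back to 1:10
```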

reading gctx file in R

I am trying to read a gctx file extracted from the LINCS source for gene expression analysis. The code for reading the file is provided at the link below.
https://github.com/cmap/l1ktools.
I am using the script provided and I have sourced it. However, when I try the function parse.gctx, it gives me the following error:
ds <- parse.gctx("../L1000 Data/zspc_n40172x22268.gctx")
reading ../L1000 Data/zspc_n40172x22268.gctx
Error in h5checktypeOrOpenLoc(file, readonly = TRUE) :
Error in h5checktypeOrOpenLoc(). Cannot open file. File 'C:\L1000 Data\zspc_n40172x22268.gctx' does not exist.
How can I resolve this issue and read my gctx file?
Since you're getting a 'file does not exist' error, I think the problem is because you have a space in the path to the file you're trying to read (specifically, in "L1000 Data"); if you remove the space in the path it should parse properly.
In other words, try renaming your "L1000 Data" folder so that instead of:
ds <- parse.gctx("../L1000 Data/zspc_n40172x22268.gctx")
you have something along the lines of:
ds <- parse.gctx("../L1000_Data/zspc_n40172x22268.gctx")
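As a general precaution, checking the path before parsing turns the opaque HDF5 error into an obvious one. A small sketch using the path from the question:

```r
# file.exists() returning FALSE means parse.gctx() will fail before any
# HDF5 work happens; the path (or working directory) needs fixing first.
path <- "../L1000 Data/zspc_n40172x22268.gctx"
file.exists(path)
```

Note that the error message shows an absolute path (C:\L1000 Data\...), so it is also worth confirming that the working directory is what the relative "../" prefix assumes.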

More problems with "incomplete final line"

This problem is similar to that seen here.
I have a large number of large CSVs which I am loading and parsing serially through a function. Many of these CSVs present no problem, but there are several which are causing problems when I try to load them with read.csv().
I have uploaded one of these files to a public Dropbox folder here (note that the file is around 10.4MB).
When I try to read.csv() that file, I get this warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on ...
And I cannot isolate the problem, despite scouring StackOverflow and R-help for solutions. Maddeningly, when I run
Import <- read.csv("http://dl.dropbox.com/u/83576/Candidate%20Mentions.csv")
using the Dropbox URL instead of my local path, it loads, but when I then save that very data frame and try to reload it thus:
write.csv(Import, "Test_File.csv", row.names = F)
TestImport <- read.csv("Test_File.csv")
I get the "incomplete final line" warning again.
So, I am wondering why the Dropbox-loaded version works, while the local version does not, and how I can make my local versions work -- since I have somewhere around 400 of these files (and more every day), I can't use a solution that can't be automated in some way.
In a related problem, perhaps deserving of its own question, it appears that some "special characters" break the read.csv() process, and prevent the loading of the entire file. For example, one CSV which has 14,760 rows only loads 3,264 rows. The 3,264th row includes this eloquent Tweet:
"RT #akiron3: ácÎå23BkªÐÞ'q(#BarackObama )nĤÿükTPP ÍþnĤüÈ’áY‹ªÐÞĤÿüŽ
\&’ŸõWˆFSnĤ©’FhÎåšBkêÕ„kĤüÈLáUŒ~YÒhttp://t.co/ABNnWfTN
“jg)(WˆF"
Again, given the serialized loading of several hundred files, how can I (a) identify what is causing this break in the read.csv() process, and (b) fix the problem with code, rather than by hand?
Thanks so much for your help.
1) Suppress the warning:
suppressWarnings(TestImport <- read.csv("Test_File.csv") )
2) Unmatched quotes are the most common cause of apparent premature closure. You could try adding all of these:
quote = "", na.strings = "", comment.char = ""
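For the automation requirement, the warning itself can also be scripted away: it fires when the file's last byte is not a newline, and appending one is harmless. A hedged sketch; add_final_newline is an invented helper:

```r
# Append a trailing newline (byte 0x0A) when the file lacks one, which
# is exactly what "incomplete final line" complains about. Idempotent,
# so it is safe to run over all ~400 files on every pass.
add_final_newline <- function(path) {
  bytes <- readBin(path, "raw", n = file.size(path))
  if (length(bytes) && bytes[length(bytes)] != as.raw(0x0A)) {
    con <- file(path, "ab")
    writeBin(as.raw(0x0A), con)
    close(con)
  }
  path
}
```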
