Igor NetCDF loading error - netcdf

I am trying to load .nc files into Igor using the following line:
Execute/Q "Load_NetCDF/i/q/t/z/s"
I have Load_NetCDF installed and have used it a lot - it definitely works, and works for similar files. I think the difference is that these files contain a couple of multi-dimensional waves. Using Load_NetCDF in this way seems to produce some odd-looking results which do not match the content when I look at the same file another way (e.g. looking at the variables individually in MATLAB's ncbrowser).
I am seeing a couple of errors in the Igor command line and have confirmed that they occur on the Load_NetCDF line of my code as shown above. Here are the error messages I get:
I've been hunting around for documentation on the Load_NetCDF external operation but without success. Does anyone know the cause of this problem, or a good line of attack for debugging it?

Are you using the XOP from this page to load the netcdf data?
It states that it does not support 2D waves. I don't know of any other XOP for loading netcdf data.
The promised error messages in your post are not visible.
What netcdf files are these, classic or the new format? The new format is based on HDF5 and can, according to this post, be read by the HDF5 browser in Igor.
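If it helps to cross-check the file outside Igor, here is a minimal sketch in R (my suggestion, not from the thread; the ncdf4 package and the filename are assumptions) that lists each variable's dimensions, so you can confirm which variables are multi-dimensional:
library(ncdf4)
nc <- nc_open("yourfile.nc")  # hypothetical filename
for (v in names(nc$var)) {
  dims <- sapply(nc$var[[v]]$dim, function(d) d$len)  # length of each dimension
  cat(v, ":", paste(dims, collapse = " x "), "\n")
}
nc_close(nc)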

Related

'Warning: Error in tempfile: cannot find unused tempfile name' when rendering multiple R Markdowns

I have a process which renders and saves multiple R Markdown documents sequentially into a directory, using rmarkdown::render(template_file, output_file).
I'm finding that when the process goes over 100 rendered documents it stops with this message:
Warning: Error in tempfile: cannot find unused tempfile name
I suspect there is something in the knit/pandoc process relating to intermediate files that is causing this, but wondering if anyone else has come across an issue like this before?
I have this issue when using rpy2 to loop over an R code chunk again and again. It happens only after a certain number of loops. Removing temp files, or reducing the number of temp files used in the first place, should solve the problem.
However, my code uses third-party packages which are hard to change across platforms, so I just remove all temp files.
I solved the problem by adding this to my code:
sapply(file.path(tempdir(), list.files(tempdir())), unlink)  # delete every file in the session's temp directory
Then I restart any code that might use the temp files I deleted.
Hope this will solve your problem.
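Applied to the original render loop, a minimal sketch of that workaround (the output file names are hypothetical; template_file is the question's own variable):
for (out in sprintf("report_%03d.html", 1:200)) {
  rmarkdown::render(template_file, output_file = out)
  # clear intermediate temp files so tempfile() can keep finding unused names
  unlink(file.path(tempdir(), list.files(tempdir())), recursive = TRUE)
}
As in the answer above, anything else in the session that relies on those temp files will need to be restarted afterwards.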

"filename.rdata" file Exploring and Converting to CSV

I'm no R programmer (because of this problem I started learning it); I'm using Python. In a forecasting task I got a dataset signalList.rdata of a phenomenon called partial discharge.
I tried some commands to load, open and view it, but hardly got a glimpse:
my_data <- get(load('C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/signalList.Rdata'))
But since I lack deep knowledge about R, I wanted to convert it into a CSV file, or any format that I can deal with in Python.
Or explore it and copy-paste manually.
So I'm asking for any solution, whether using R, Python or any other tool, to get at what's in the .rdata file.
Have you managed to load the data successfully into your working environment?
If so, write.csv is the function you are looking for.
If not,
setwd("C:/Users/Zack-PC/Desktop/Study/Data Sets/pdCluster/")
signalList <- get(load("signalList.Rdata"))  # load() returns the object's name; get() retrieves the object itself
write.csv(signalList, "signalList.csv")
should do the trick.
If you would like to remove signalList from your workspace,
rm(signalList)
will accomplish this.
Note: changing your working directory isn't necessary; it just makes the code easier to read in an answer, I feel. You may also specify another path for saving your CSV within the second argument of write.csv.
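If write.csv complains, the loaded object may not be a data frame; here is a short inspection sketch (the coercion step is an assumption, since the thread doesn't show the object's actual structure):
obj_names <- load("signalList.Rdata")  # the names of the objects restored into the workspace
print(obj_names)
str(get(obj_names[1]))  # inspect the first object's structure before exporting
# if it is a list rather than a data frame, coerce it before writing:
# write.csv(as.data.frame(get(obj_names[1])), "signalList.csv", row.names = FALSE)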

Load an .RData file triggers `Error: embedded nul in string:` error

I am sorry to post a non-reproducible error, but the task involves huge, unsplittable files (.RData ones, to be precise).
There are several similar questions, like this one or this other one, but they all deal with importing .csv files, which is not my case.
As the title says, I am trying to load an .RData file with the load function, but it triggers this error:
load("trip.Rdata")
Error: embedded nul in string: ''<div id=BOD,socationid="7278708">\0\004\0\t\0\0\0\037<span ->\0\004\0\t\0\0\0.<div id="MAINWRAP" c'
I've also tried attach, which according to the documentation can handle .RData files, but the error is always the same.
Now, this is awkward because an .RData file is the last place from which I would expect an error like this.
To be honest, I do not even know how to ask this question properly because of its awkwardness (I can understand the downvote).
Maybe it matters that the file was saved under Windows and I'm trying to load it on macOS, but I can't figure out a possible cause, nor a possible solution.
Any help is appreciated.
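One diagnostic worth trying (my suggestion, prompted by the HTML fragment visible in the error text): check the file's first bytes to see whether it is really an .RData file at all.
first_bytes <- readBin("trip.Rdata", "raw", n = 5)
print(first_bytes)
# gzip-compressed .RData starts with bytes 1f 8b; uncompressed starts with "RDX2" or "RDX3".
# If the bytes spell out "<div " or similar HTML, the file is a saved web page, not .RData.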

Generating Excel file with XLConnect - Removed Feature: Format from /xl/styles.xml part (Styles)

I am using XLConnect in R for daily report generation. I have a program that runs automatically at a specific time to append the most recent date's data daily into an Excel file (Excel 2007). The program works fine for this task. But sometimes when I open the Excel file it says "Excel found unreadable content. Do you want to recover the content of this workbook?"
The best part of this issue is that I can't reproduce it to find the exact root cause; it arises in a random manner, and when I try to run the program again it works fine. Can somebody help me identify the root cause?
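For reference, a minimal sketch of the append step with XLConnect (the file name, sheet name and today_data are hypothetical, since the original program isn't shown):
library(XLConnect)
wb <- loadWorkbook("daily_report.xlsx")            # open the existing workbook
appendWorksheet(wb, today_data, sheet = "report")  # append the most recent day's rows
saveWorkbook(wb)                                   # write the workbook back to disk
If the corruption recurs, saving to a temporary file and renaming it over the original only on success is a common defensive pattern (a general suggestion, not specific to XLConnect).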

Converting .pdf files to Excel (.xls)

A friend of mine doing an internship asked me two hours ago if I could help him avoid manually converting 462 PDF files to .xls using free online software.
I thought of a shell script using unoconv, but I didn't find out how to use it properly, and I am not sure unoconv can solve this problem since it mainly converts files to PDF, not the reverse.
Conversion from PDF to any other structured format is not always possible and not generally recommended.
Having said that, this does look like a one-off job and there's a fair few of them (462).
It's worth pursuing if you can reliably extract text from most of them and it's reasonably structured. It's a matter of trying to get regular text output across a sample of the PDFs that you can reliably parse into a table structure.
There's plenty of tools around that target either direct or OCR based text extraction, just google around.
One I like is pstotext from the ghostscript suite; the -bboxes option lets me get the coordinates of each word and leaves it up to me to re-assemble the structure. Despite its name it does work on input PDFs. The downside is that it can be a bit flaky: it works on some PDFs but not others.
If you get this far, you'd most likely then need to write a shell script or program to convert that to a CSV. You can either open this directly via a spreadsheet or look for tools to convert it into XLS.
PS If he hasn't already, get the intern to ask if there's any possible way of getting at the original data that was used to create the PDFs. It will save a lot of time and effort and lead to a far more accurate result.
Update: An alternative to pstotext is the renderpdf.pl command included in the Perl CAM::PDF module. It is more robust, but just reports each word's (x,y) position, not bounding boxes.
Other responses on a linked question suggest Tabula, too.
https://github.com/tabulapdf/tabula
I tried it and it works very well.
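If Tabula handles these files, the tabulizer R package wraps it and can batch the whole job; a rough sketch (the directory and output naming scheme are hypothetical):
library(tabulizer)
pdfs <- list.files("pdfs", pattern = "\\.pdf$", full.names = TRUE)
for (f in pdfs) {
  tables <- extract_tables(f)  # one matrix per table Tabula detects
  for (i in seq_along(tables)) {
    out <- sprintf("%s_table%d.csv", tools::file_path_sans_ext(basename(f)), i)
    write.csv(tables[[i]], out, row.names = FALSE)
  }
}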
