I am using Tabula to read tables from PDF files.
While reading, I get the warning below, and possibly because of it some of the pages are not being read.
Has anyone encountered the same warning/error, and can you help?
WARNING: Format 14 cmap table is not supported and will be ignored.
I hope someone in this great community can help. For several weeks I have been running an R script that produces a txt file as output, which is then imported into Teradata after a daily drop and create of a table. I have never had any issue so far, but today I received the error: "Error executing query for record 1:2621. Bad character in format or data...".
I frantically googled all the googleable content, but none of it could answer my problem. I even tried replacing the content of the txt file with an old file which had previously uploaded just fine, and today it generated this horrendous error in return. It is only a small table of 6 columns with these characteristics:
Top_wagered_Games varchar(111)
,support DEC(9,6)
,confidence DEC(9,6)
,lift DEC(9,2)
,cnt int
, "date" date
And it generally has only a few rows (no more than 15). What went wrong? Why is this happening? Could anyone help?
Teradata provider version: ODBC 15.10.01.01
Thanks!
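Since the 2621 error points at a bad character in the data, one workaround is to scrub non-ASCII and control characters from the character columns before writing the export file. This is only a sketch under that assumption; the function name and the idea that the stray character sits in a character column are mine, not from the original post.

```r
# Sketch: drop non-ASCII bytes and control characters from all character
# columns of a data frame before exporting it for the Teradata load.
clean_for_teradata <- function(df) {
  char_cols <- vapply(df, is.character, logical(1))
  df[char_cols] <- lapply(df[char_cols], function(x) {
    x <- iconv(x, from = "UTF-8", to = "ASCII", sub = "")  # remove non-ASCII bytes
    gsub("[[:cntrl:]]", "", x)                             # remove control characters
  })
  df
}
```

The cleaned data frame can then be written out as before, e.g. with write.table(); comparing nchar() before and after cleaning also helps locate which row carried the offending byte.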
I'm trying to load an Excel workbook with a large number of tabs into R, do some analysis, and then export the results back into Excel. I'm using the openxlsx package because of some features that are not easily accessible in other packages (such as the ability to create "comments" in the output file, color-code the tabs, and work with 64-bit R).
When I try to read in the workbooks, I sometimes get the following error message (or something similar):
Error in unzip(xlsxFile, exdir = xmlDir) :
cannot open file 'C:/Users/MENDEL~1/AppData/Local/Temp/RtmpIb3WOf/_excelXMLRead/xl/worksheets/sheet5.xml': Permission denied
This error message doesn't always show up, but sometimes it appears and the program crashes.
Does anyone have any idea how to fix this problem? I don't know why the program sometimes thinks it doesn't have permission to access the sheets.
Thank you in advance!
I can think of two possible scenarios for this error:
Scenario 1:
C:/Users/MENDEL~1/AppData/Local/ (it looks like you are trying to read a temporary file)
Solution:
If that is the case, try moving the file to a different location, such as the desktop, and make sure you update your working directory accordingly.
Scenario 2:
C:/Users/MENDEL~1/AppData/Local/Temp/RtmpIb3WOf/_excelXMLRead/xl/worksheets/sheet5.xml (it looks like there is some issue with sheet5, which is an .xml part that openxlsx cannot read)
Solution:
Check whether there is some issue with the format or contents of sheet5 in the file you are trying to read.
For additional information, check the CRAN documentation.
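Because the error is intermittent, another possibility is that something (an antivirus scanner or indexer, say) briefly locks the extracted temp files. A simple retry wrapper is one workaround; this is a sketch, and the use of openxlsx::read.xlsx on a hypothetical "workbook.xlsx" in the comment is an assumption.

```r
# Sketch: retry a read function a few times before giving up, in case the
# "Permission denied" on the temp directory is a transient lock.
read_with_retry <- function(read_fn, tries = 3, wait = 1) {
  for (i in seq_len(tries)) {
    result <- tryCatch(read_fn(), error = function(e) e)
    if (!inherits(result, "error")) return(result)
    Sys.sleep(wait)  # give a transient lock (e.g. antivirus scan) time to clear
  }
  stop(result)  # re-raise the last error if all attempts failed
}
# wb <- read_with_retry(function() openxlsx::read.xlsx("workbook.xlsx", sheet = 5))
```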
I am sorry to post a non-reproducible error, but the task involves huge files that cannot be split (.RData ones, to be precise).
There are several similar questions, like this one or this other one, but they all concern importing .csv files, which is not my case.
As the title says, I am trying to load an .RData file with the load function, but this triggers the following error:
load("trip.Rdata")
Error: embedded nul in string: ''<div id=BOD,socationid="7278708">\0\004\0\t\0\0\0\037<span ->\0\004\0\t\0\0\0.<div id="MAINWRAP" c'
I've also tried attach, which, according to the documentation, can handle .RData files, but the error is always the same.
Now, this is awkward, because an .RData file is the last place from which I would expect an error like this.
To be honest, I do not even know how to ask this question properly because of its awkwardness (I can understand the downvote).
Maybe the problem is that the file was saved under Windows and I'm trying to load it on macOS, but I can't figure out a cause or a solution.
Any help is appreciated.
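The HTML fragment in the error message suggests the file may not actually be an .RData file at all (for example, a web page saved under that name by a failed download). A quick diagnostic is to peek at the first bytes: a file produced by save() normally starts with the gzip magic bytes 0x1f 0x8b (or the ASCII header "RDX2" if saved uncompressed). This is a sketch; the helper name is mine.

```r
# Sketch: read the first few raw bytes of a file to check whether it looks
# like a real .RData file (gzip magic 0x1f 0x8b, or "RDX2" if uncompressed).
peek_header <- function(path, n = 8) {
  con <- file(path, "rb")
  on.exit(close(con))
  readBin(con, "raw", n = n)
}
# head_bytes <- peek_header("trip.Rdata")
# identical(head_bytes[1:2], as.raw(c(0x1f, 0x8b)))  # TRUE for a gzip-compressed save()
```

If the header instead begins with something like "<div" or "<html", the file is HTML, and the fix is to re-download or re-export it rather than to fight with load().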
I am trying to load .nc files into Igor using the following line:
Execute/Q "Load_NetCDF/i/q/t/z/s"
I have Load_NetCDF installed and have used it a lot; it definitely works, and works for similar files. I think the difference is that these files contain a couple of multi-dimensional waves. Using Load_NetCDF in this way produces some odd-looking results which do not match the contents when I look at the same file another way (i.e. inspecting the variables individually in MATLAB's ncbrowser).
I am seeing a couple of errors on the Igor command line and have confirmed that they occur on the Load_NetCDF line of my code shown above. Here are the error messages I get:
I've been hunting around for help on the Load_NetCDF external function, but without success. Does anyone know the cause of this problem or a good line of attack for debugging it?
Are you using the XOP from this page to load the netCDF data?
It states that it does not support 2D waves. I don't know of any other XOP for loading netCDF data.
The error messages you promised in your post are not visible.
What kind of netCDF files are these: classic or the new format? The new format is based on HDF5 and, according to this post, can be read by the HDF5 browser in Igor.
I am attempting to create a script that will distribute a number of PDFs into a folder tree according to tags. I have the file metadata (including the file path) in BibTeX format. I have tried a number of workarounds to import the metadata, but so far have been unable to get the file path, year, title, and tags into a single data frame.
When I try to import using read.bib (which seems the simplest solution), I get the following error:
dbase_full <- read.bib("C:/Users/WILIAM-PLAN/Desktop/My Collection 23 07.bib")
Error in read.bib("C:/Users/WILIAM-PLAN/Desktop/My Collection 23 07.bib") :
lex fatal error:
fatal flex scanner internal error--end of buffer missed
I have looked up the error, but the language of the "under the hood" parts of the {bibtex} package (lex scanners etc.) is beyond me.
Is there a quick fix for this error?
If not, is there another way to get the file metadata from BibTeX into a data frame?
I had the same problem.
The issue is that some fields in the .bib file (such as abstract) can contain lines with a very large number of characters, which overruns the flex scanner's buffer.
You need to split and wrap those long lines.
I hope this is useful.
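Following that suggestion, the wrapping can be done in base R before calling read.bib. This is a sketch: the 1000-character threshold and the file names are assumptions, and very long single "words" (e.g. unbroken URLs) would still need separate handling.

```r
# Sketch: rewrite a .bib file so that no line exceeds a given width,
# wrapping over-long lines (e.g. one-line abstracts) onto multiple lines.
wrap_bib_lines <- function(infile, outfile, width = 1000) {
  lines <- readLines(infile, warn = FALSE)
  wrapped <- unlist(lapply(lines, function(l) {
    if (nchar(l) > width) strwrap(l, width = width) else l
  }))
  writeLines(wrapped, outfile)
}
# wrap_bib_lines("My Collection 23 07.bib", "My Collection wrapped.bib")
# dbase_full <- bibtex::read.bib("My Collection wrapped.bib")
```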