What is causing code to fail after creating executable? Fiona CRS error - pyinstaller

I have some code that uses geopandas to take data and convert it to a shapefile. The code works perfectly fine when I run it in PyCharm. However, if I package the code into an executable using "pyinstaller --onefile", the executable fails to run.
Note that getting geopandas to work at all with pyinstaller was a massive pain, and only after a couple of days of googling and tinkering was I able to get it to work.
This is the error that the executable gives me:
Traceback (most recent call last):
File "Shapefile_Tool.py", line 3601, in <module>
create_shapefile()
File "Shapefile_Tool.py", line 3593, in create_shapefile
shapefile.to_file(Path(output_filepath, 'Final_Shapefile'))
File "geopandas\geodataframe.py", line 515, in to_file
File "geopandas\io\file.py", line 128, in to_file
File "fiona\env.py", line 408, in wrapper
File "fiona\__init__.py", line 285, in open
File "fiona\collection.py", line 153, in __init__
File "fiona\_crs.pyx", line 78, in fiona._crs.crs_to_wkt
fiona.errors.CRSError: Invalid input to create CRS: epsg:32012
[53860] Failed to execute script 'Shapefile_Tool' due to unhandled exception!
Any help is greatly appreciated.
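One suggestion I've come across (unverified) is that PyInstaller may not bundle PROJ's data files (proj.db and friends), so the CRS database lookup for epsg:32012 fails only in the frozen app. A sketch of the suggested runtime workaround, assuming the PROJ data directory was added to the bundle under the name "proj" (both the directory name and the --add-data layout are assumptions):

```python
import os
import sys

def configure_proj_data():
    """Point PROJ at data files bundled by PyInstaller, if running frozen.

    Assumes the PROJ data directory (the one containing proj.db) was added
    to the bundle under 'proj', e.g.:
        pyinstaller --onefile --add-data "C:/path/to/proj;proj" Shapefile_Tool.py
    The source path and the 'proj' target name are assumptions.
    """
    if getattr(sys, "frozen", False):  # True only inside a PyInstaller bundle
        # --onefile extracts to a temp dir exposed as sys._MEIPASS
        bundle_dir = getattr(sys, "_MEIPASS", os.path.dirname(sys.executable))
        os.environ["PROJ_LIB"] = os.path.join(bundle_dir, "proj")

configure_proj_data()
```

The call would need to run before geopandas/fiona is imported, since PROJ reads the environment when it initializes.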

Related

Warning while using nbconvert to convert .ipynb to html

I have nbconvert version 6.4.2 installed on my Mac Pro, and I am using the following command to get HTML output from the command line:
jupyter nbconvert --to html Used-air-loose-data.ipynb
But it fails with this error:
File "/Users/jsingh/.local/share/virtualenvs/Air_assignee_analysis-SSL-Cc05/lib/python3.9/site-packages/jinja2/loaders.py", line 566, in load
raise TemplateNotFound(name)
jinja2.exceptions.TemplateNotFound: index.html.j2
But I can find some default templates at this location:
/Users/jsingh/.local/share/virtualenvs/Air_assignee_analysis-SSL-Cc05/share/jupyter/nbconvert/templates/lab
How can I create the right template and put it in the correct location? Thanks in advance.
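One thing that is sometimes suggested for this error (an assumption on my part, not a confirmed fix): nbconvert finds templates by searching Jupyter's data paths for nbconvert/templates/<name>, so pointing JUPYTER_PATH at the virtualenv's share/jupyter directory may let it find the template. A sketch:

```shell
# Virtualenv prefix from the traceback above; adjust to your environment.
VENV="$HOME/.local/share/virtualenvs/Air_assignee_analysis-SSL-Cc05"
# nbconvert searches Jupyter data paths for nbconvert/templates/<name>,
# so add the venv's share/jupyter directory to that search path.
export JUPYTER_PATH="$VENV/share/jupyter"
# Then re-run the conversion:
# jupyter nbconvert --to html Used-air-loose-data.ipynb
```

If the templates directory only contains "lab" and is missing its base dependencies, force-reinstalling nbconvert into the virtualenv is another thing worth trying.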

R swirl - can't run swirl course, get enigmatic error message

I've created a course using the swirl package in R. The YAML file is created fine:
new_lesson("lesson_name", "course_name")
demo_lesson()
However, when I attempt to run the course I get the following error:
Scanner error: mapping values are not allowed in this context at line 61, column 32
This error is incomprehensible to me. Does anyone who knows swirl have an idea of what it means, and how to fix it?
Found the error. It was an indentation problem in the yaml document that the swirl() function was using to display the questions. How to find "line 61, column 32" is still mysterious to me, though.
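For what it's worth, YAML parsers generally report the position where the scanner gave up, which is often a line or two after the indentation mistake itself. A small Python sketch (assuming PyYAML is installed; swirl uses R's yaml package, but the scanner error is of the same kind) showing how that line/column comes out of the parse error:

```python
import yaml

# A stray ':' inside a plain scalar triggers the same kind of
# "mapping values are not allowed ..." scanner error swirl reports.
bad = "Output: Select the correct answer: 4\n"

try:
    yaml.safe_load(bad)
except yaml.YAMLError as err:
    mark = err.problem_mark  # position where scanning failed (0-indexed)
    print(mark.line + 1, mark.column + 1)  # report it 1-indexed
```

So "line 61, column 32" points at where the parser stopped understanding the file; the actual indentation problem can sit just above it.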

Sweave2Knitr Code

I keep getting an error message when attempting to compile a pdf that reads:
It seems you are using the Sweave-specific syntax in line(s) 85; you may
need Sweave2knitr("Filename.Rnw") to convert it to knitr. Running
pdflatex.exe on File_Name_Update_FINAL.tex ...Running pdflatex.exe on
File_Name_Update_FINAL.tex ...failed
I have checked my settings under Tools > Global Options > Sweave and they are set to:
Weave Rnw files using: knitr
and the "Always enable Rnw concordance" box is checked.
I have also tried unchecking and re-checking the concordance box.
I have attached the code below. It starts on line 85, the line of code with the error.
\begin{document}
\SweaveOpts{concordance=TRUE}
%\SweaveOpts{concordance=TRUE}
Any help would be great. Thanks!

I can't see thumbnails for JPG images in Plone

I put a lot of JPG and PNG images in a folder. The folder was using the thumbnails view, but only the PNG images were shown as thumbnails. I was using Plone 4.1 with a very simple buildout:
[buildout]
extends =
http://dist.plone.org:5021/release/4.1/versions.cfg
parts = instance
[instance]
recipe = plone.recipe.zope2instance
user = user:pass
eggs =
Plone
Then I tried to rotate a JPG image and got the following error:
Traceback (innermost last):
Module ZPublisher.Publish, line 126, in publish
Module ZPublisher.mapply, line 77, in mapply
Module ZPublisher.Publish, line 46, in call_object
Module Products.ATContentTypes.lib.imagetransform, line 205, in transformImage
Module PIL.Image, line 1676, in transpose
Module PIL.ImageFile, line 189, in load
Module PIL.Image, line 385, in _getdecoder
IOError: decoder jpeg not available
So I tried installing libjpeg8 and libjpeg8-dev (with apt-get, because I'm working with Debian 6). I also changed the buildout, adding the appropriate line for the Pillow egg:
[buildout]
extends =
http://dist.plone.org:5021/release/4.1/versions.cfg
parts = instance
[instance]
recipe = plone.recipe.zope2instance
user = user:pass
eggs =
Plone
Pillow
And now the JPEG thumbnails are displayed.
Thanks for your help. I got a bit confused with buildout at the beginning.
Which operating system are you using? Did you compile PIL with JPEG support? You are most likely missing something along those lines, so grab your buildout.cfg and add something like this:
...
[instance]
...
eggs =
PILwoTk
...
Try running the buildout again in another folder (so that it is completely fresh) and watch the output when it compiles PILwoTk; for JPEG support to work you should see something like this:
# Now you'll see
# --------------------------------------------------------------------
# *** TKINTER support not available
# --- JPEG support ok
# --- ZLIB (PNG/ZIP) support ok
# --- FREETYPE2 support ok
# -------------------
If "JPEG support ok" is not what you get, you are almost certainly missing the JPEG development headers.
Images in Plone are resized during upload. Your previously uploaded images were not resized or rotated because the library wasn't there. Now that you have a working PIL/Pillow, re-upload the image and it will work. You can also recreate all image scales manually in the ZMI: visit portal_atct, choose the "Image scales" tab, and recreate. All images are then recalculated; that can take a long time depending on the number of images.

Saving in hdf5save creates an unreadable file

I'm trying to save an array as an HDF5 file using R, but having no luck.
To try and diagnose the problem I ran example(hdf5save). This successfully created a HDF5 file that I could read easily with h5dump.
When I then ran the R code manually, I found that it didn't work. The code I ran was exactly the same as the code run in the example script (except for a change of filename to avoid overwriting). Here is the code:
(m <- cbind(A = 1, diag(4)))
ll <- list(a=1:10, b=letters[1:8]);
l2 <- list(C="c", l=ll); PP <- pi
hdf5save("ex2.hdf", "m","PP","ll","l2")
rm(m,PP,ll,l2) # and reload them:
hdf5load("ex2.hdf",verbosity=3)
m # read from "ex2.hdf"; buglet: dimnames dropped
str(ll)
str(l2)
and here is the error message from h5dump:
h5dump error: unable to open file "ex2.hdf"
Does anyone have any ideas? I'm completely at a loss.
Thanks
I have had this problem. I am not sure of the cause and neither are the hdf5 maintainers. The authors of the R package have not replied.
Alternatives that work
In the time since I originally answered, the hdf5 package has been archived, and suitable alternatives (h5r, rhdf5, and ncdf4) have been created; I am currently using ncdf4:
Since netCDF-4 uses HDF5 as a storage layer, the ncdf4 package provides an interface to both netCDF-4 and HDF5.
The h5r package works with R >= 2.10.
The rhdf5 package is available on Bioconductor.
Workarounds
Two functional but unsatisfactory workarounds that I used prior to finding the alternatives above:
Install R 2.7, hdf5 version 1.6.6, R hdf5 v1.6.7, and zlib1g version 1:1.2.3.3 and use this when writing the files (this was my solution until migrating to the ncdf4 library).
Use h5totxt at the command line from the hdf5utils program (requires using bash and rewriting your R code).
A minimal, reproducible example that triggers the error:
First R session
library(hdf5)
dat <- 1:10
hdf5save("test.h5","dat")
q()
n # do not save workspace
Second R session:
library(hdf5)
hdf5load("test.h5")
output:
HDF5-DIAG: Error detected in HDF5 library version: 1.6.10 thread
47794540500448. Back trace follows.
#000: H5F.c line 2072 in H5Fopen(): unable to open file
major(04): File interface
minor(17): Unable to open file
#001: H5F.c line 1852 in H5F_open(): unable to read superblock
major(04): File interface
minor(24): Read failed
#002: H5Fsuper.c line 114 in H5F_read_superblock(): unable to find file
signature
major(04): File interface
minor(19): Not an HDF5 file
#003: H5F.c line 1304 in H5F_locate_signature(): unable to find a valid
file signature
major(05): Low-level I/O layer
minor(29): Unable to initialize object
Error in hdf5load("test.h5") : unable to open HDF file: test.h5
I've also run into the same issue and found a reasonable fix.
The issue seems to stem from when the hdf5 library finalizes the file: if it doesn't get a chance to finalize the file, the file is corrupted. I think this happens after the buffer is flushed, but the buffer doesn't always flush.
One solution I've found is to do the hdf5save in a separate function: assign the variables into globalenv(), then call hdf5save and exit the function. When the function completes, the memory cleanup seems to make the hdf5 library flush the buffer and finalize the file.
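A sketch of that workaround in R (assuming the archived hdf5 package is installed; the helper name is mine, and the flushing behaviour is the observation described above, not a documented guarantee):

```r
library(hdf5)  # the archived package discussed above

save_h5 <- function(file, name, value) {
  # hdf5save() looks objects up by name, so place the object where it
  # can be found, write it, and let the function return; on exit the
  # buffer appears to get flushed and the file finalized.
  assign(name, value, envir = globalenv())
  hdf5save(file, name)
}

save_h5("test.h5", "dat", 1:10)
```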
Hope this helps!