NameError: name 'predict_from_url' is not defined - python-3.4

I am trying an MXNet tutorial at http://mxnet.io/tutorials/embedded/wine_detector.html (section "Running the Model") on a Raspberry Pi 3 using Python 3.4, specifically the script "inception_predict.py". I managed to fix a couple of issues but am stumped by this error:
>> import inception_predict
[23:43:37] src/nnvm/legacy_json_util.cc:190: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[23:43:37] src/nnvm/legacy_json_util.cc:198: Symbol successfully upgraded!
>> predict_from_url("http://imgur.com/HzafyBA")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'predict_from_url' is not defined
The function predict_from_url is defined in the imported file inception_predict.py (as mentioned in the tutorial), so why is Python telling me it is not defined?
What am I doing wrong?

The tutorial has a few errors that you need to fix to make it run:
add time to the import list in inception_predict.py
...
import cv2, os, urllib, time
...
use a URL that you can actually download directly (use your favorite image search engine to find one)
call the function by its fully qualified name
inception_predict.predict_from_url("https://media.mnn.com/assets/images/2017/01/cow-in-pasture.jpg.838x0_q80.jpg")
After these small changes you will see something like this:
pre-processed image in 0.27312707901
MKL Build:20170209
forward pass in 0.131096124649
probability=0.784963, class=n02403003 ox
probability=0.099463, class=n03868242 oxcart
probability=0.035585, class=n03967562 plow, plough
probability=0.033620, class=n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis
probability=0.015443, class=n02412080 ram, tup
[(0.78496253, 'n02403003 ox'), (0.09946309, 'n03868242 oxcart'), (0.035584591, 'n03967562 plow, plough'), (0.033620458, 'n02415577 bighorn, bighorn sheep, cimarron, Rocky Mountain bighorn, Rocky Mountain sheep, Ovis canadensis'), (0.015442736, 'n02412080 ram, tup')]
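The root cause is a general Python rule: a plain `import inception_predict` binds only the module name in your session, never the functions inside it. A minimal sketch of the same behavior, using a throwaway in-memory module rather than the real inception_predict:

```python
import types

# stand-in for inception_predict; a real module file behaves the same way
mod = types.ModuleType("demo_module")
exec("def predict_from_url(url):\n    return 'prediction for ' + url", mod.__dict__)

# the bare name was never bound in this namespace, so it raises NameError
try:
    predict_from_url("http://example.com/cow.jpg")  # noqa: F821
except NameError as err:
    print("NameError:", err)

# qualifying the call with the module name works
print(mod.predict_from_url("http://example.com/cow.jpg"))
```

Alternatively, `from inception_predict import predict_from_url` makes the bare name callable.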


Dataflow from Colab issue

I'm trying to run a Dataflow job from Colab and getting the following worker error:
sdk_worker_main.py: error: argument --flexrs_goal: invalid choice: '/root/.local/share/jupyter/runtime/kernel-1dbd101c-a79e-432e-89b3-5ba68df104d7.json' (choose from 'COST_OPTIMIZED', 'SPEED_OPTIMIZED')
I haven't provided the flexrs_goal argument, and if I do it doesn't fix this issue. Here are my pipeline options:
beam_options = PipelineOptions(
    runner='DataflowRunner',
    project=...,
    job_name=...,
    temp_location=...,
    subnetwork='regions/us-west1/subnetworks/default',
    region='us-west1'
)
My pipeline is very simple, it's just:
with beam.Pipeline(options=beam_options) as pipeline:
    (pipeline
     | beam.io.ReadFromBigQuery(
           query=f'SELECT column FROM {BQ_TABLE} LIMIT 100')
     | beam.Map(print))
It looks like the command line args for the sdk worker are getting polluted by jupyter somehow. I've rolled back to the past two apache-beam library versions and it hasn't helped. I could move over to Vertex Workbench but I've invested a lot in this Colab notebook (plus I like the easy sharing) and I'd rather not migrate.
Figured it out. The PipelineOptions constructor will pull in sys.argv if no parameter is given for the first argument (called flags). In my case it was pulling in the command line args that my jupyter notebook was started with and passing them as Beam options to the workers.
I fixed my issue by doing this:
beam_options = PipelineOptions(
    flags=[],
    ...
)
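The fallback can be demonstrated without Beam at all. The class below is a stand-in that mimics the documented PipelineOptions behavior (flags defaults to the process's command line); it is not the real implementation:

```python
import sys

class FakePipelineOptions:
    # stand-in mimicking apache_beam's PipelineOptions:
    # when flags is None, fall back to sys.argv
    def __init__(self, flags=None, **options):
        self.flags = list(sys.argv[1:]) if flags is None else list(flags)
        self.options = options

# simulate the command line a jupyter kernel is started with
sys.argv = ["ipykernel_launcher.py", "-f",
            "/root/.local/share/jupyter/runtime/kernel-abc.json"]

polluted = FakePipelineOptions(runner="DataflowRunner")
print(polluted.flags)   # the kernel json leaks in as if it were a Beam flag

clean = FakePipelineOptions(flags=[], runner="DataflowRunner")
print(clean.flags)      # []
```

Passing an explicit (empty) flags list severs the dependency on whatever command line the notebook process happens to have.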

How to register MMT as Isabelle component on Windows 10 (to call isabelle mmt_build)?

I have installed Isabelle2021 in C:\Homes\Isabelle2021\Isabelle2021 and MMT (from https://uniformal.github.io//doc/setup/) in C:\Homes\MMT21 and I have made additional entries in the C:\Homes\Isabelle2021\Isabelle2021\etc\Components file:
/cygdrive/c/Homes/MMT21
/cygdrive/c/Homes/MMT21/systems/MMT/deploy
But in the Cygwin terminal (started via cygwin-terminal.bat), I get an error that the tool cannot be found:
C:\Homes\Isabelle2021\Isabelle2021>cygwin-terminal
This is the GNU Bash interpreter of Cygwin.
Use command "isabelle" to invoke Isabelle tools.
tomr@DESKTOP /cygdrive/c/Homes/Isabelle2021/Isabelle2021
$ isabelle mmt_build
*** Unknown Isabelle tool: "mmt_build"
tomr@DESKTOP /cygdrive/c/Homes/Isabelle2021/Isabelle2021
I have tried to follow https://drops.dagstuhl.de/opus/volltexte/2020/13065/pdf/LIPIcs-TYPES-2019-1.pdf:
if the Mmt directory is registered to Isabelle as component, it
provides a tool isabelle mmt_build (shell script) to build MMT with
Isabelle support enabled. The resulting mmt.jar will provide further
tools isabelle mmt_import and isabelle mmt_server (in Scala) to
perform the import and view its results. Users merely need to invoke,
e.g., isabelle mmt_import -B ZF.
What is wrong with my efforts? Does registering an Isabelle component require additional activities? And is mmt.jar really so adapted to Isabelle (one specific tool, as opposed to MMT being a very universal system) that it really contains such an mmt_build command?
I am going to read https://isabelle.in.tum.de/dist/Isabelle2021/doc/system.pdf Chapter 7.2 "Managing Isabelle Components", maybe it will help and maybe it will work on Windows...
This is a partial answer. Simply adding the MMT directory was not possible due to error messages:
tomr@DESKTOP /cygdrive/c/Homes/Isabelle2021/Isabelle2021
$ isabelle components -x /cygdrive/c/Homes/MMT21
*** Bad component directory: "/cygdrive/c/Homes/MMT21"
I found in the Isabelle source code https://isabelle-dev.sketis.net/file/data/ldiwqhsxa4d5zojczqye/PHID-FILE-2yuwbxlojqunqpdcqvqg/file the reason for this:
def update_components(add: Boolean, path0: Path, progress: Progress = new Progress): Unit =
{
  val path = path0.expand.absolute
  if (!(path + Path.explode("etc/settings")).is_file &&
      !(path + Path.explode("etc/components")).is_file) error("Bad component directory: " + path)
So, for a directory to be usable as an Isabelle component, it must contain an etc/settings file. MMT didn't have one, so I grabbed such a file from an existing Isabelle component and modified it to be:
# -*- shell-script -*- :mode=shellscript:
classpath "C:/Homes/MMT21/mmt.jar"
isabelle_scala_service "isabelle.FlatLightLaf"
isabelle_scala_service "isabelle.FlatDarkLaf"
After having that file in MMT21 directory I managed to add MMT21 as a component:
tomr@DESKTOP /cygdrive/c/Homes/Isabelle2021/Isabelle2021
$ isabelle components -u /cygdrive/c/Homes/MMT21
Added component "/cygdrive/c/Homes/MMT21"
But, unfortunately, it didn't solve my initial problem, I am still getting an error:
tomr@DESKTOP /cygdrive/c/Homes/Isabelle2021/Isabelle2021
$ isabelle mmt_build
*** Unknown Isabelle tool: "mmt_build"
So, now I am going deep into Google and the respective project sites to dig this out...
ADDED:
There are good materials about this at https://sketis.net/2019/mmt-as-component-for-isabelle2019 and https://github.com/UniFormal/MMT/tree/master/src/mmt-isabelle, so it appears that there is a special jar, not the main mmt.jar. I am now reading about this. I will see whether this works with Isabelle2021 or whether I should use my older Isabelle2020.

MPI errors "MPI_ERR_IN_STATUS" and "BadPickleGet" in OpenMDAO when running external codes in parallel with many processors

The OpenMDAO problem that I'm running is quite complicated, so I don't think it would be helpful to post the entire script. However, the basic setup is that my problem root is a ParallelFDGroup (not actually finite differencing for now--just running the problem once) that contains a few normal components as well as a parallel group. The parallel group is responsible for running 56 instances of an external code (one component per instance of the code). Strangely, when I run the problem with 4-8 processors, everything seems to work fine (it sometimes even works with 10-12 processors). But when I try to use more processors (20+), I fairly consistently get the errors below, with two tracebacks:
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 746, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48575)
mpi4py.MPI.Exception: MPI_ERR_IN_STATUS: error code in status
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 749, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48609)
File "MPI/msgpickle.pxi", line 191, in mpi4py.MPI.Pickle.loadv (src/mpi4py.MPI.c:41957)
File "MPI/msgpickle.pxi", line 143, in mpi4py.MPI.Pickle.load (src/mpi4py.MPI.c:41248)
cPickle.BadPickleGet: 65
I am running under Ubuntu with OpenMDAO 1.7.3. I have tried running with both mpirun.openmpi (OpenRTE) 1.4.3 and mpirun (Open MPI) 1.4.3 and have gotten the same result in each case.
I found this post that seems to suggest that there is something wrong with the MPI installation. But if this were the case, it strikes me as strange that the problem would work for a small number of processors but not with a larger number. I also can run a relatively simple OpenMDAO problem (no external codes) with 32 processors without incident.
Because the traceback references OpenMDAO unknowns, I wondered if there are limitations on the size of OpenMDAO unknowns. In my case, each external code component has a few dozen array outputs that can be up to 50,000-60,000 elements each. Might that be problematic? Each external code component also reads the same set of input files. Could that be an issue as well? I have tried to ensure that read and write access is defined properly but perhaps that's not enough.
Any suggestions about what might be the culprit in this situation are appreciated.
EDIT: I should add that I have tried running the problem without actually running the external codes (i.e. the components in the parallel group are called and set up but the external subprocesses are never actually created) and the problem persists.
EDIT2: I have done some more debugging on this issue and thought I should share the little that I have discovered. If I strip the problem down to only the parallel group containing the external code instances, the problem persists. However, if I reduce the components in the parallel group to basically nothing--just a print function for setup and for solve_nonlinear--then the problem can successfully "run" with a large number of processors. I started adding setup lines back in one by one to see what would create problems. I ran into issues when trying to add many large unknowns to the components. I can actually still add just a single large unknown--for example, this works:
self.add_output('BigOutput', shape=[100000])
But when I try to add too many large outputs like below, I get errors:
for i in range(100):
    outputname = 'BigOutput{0}'.format(i)
    self.add_output(outputname, shape=[100000])
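Back-of-envelope arithmetic suggests why many large outputs strain the setup: assuming float64 storage, the counts from the loop above, and the 56 components mentioned earlier, the raw output data alone runs to several gigabytes before any MPI buffering:

```python
# rough memory estimate for the outputs in the loop above (float64 assumed)
n_outputs = 100          # outputs added per component
n_elems = 100_000        # elements per output
bytes_per_elem = 8       # size of a float64

per_component = n_outputs * n_elems * bytes_per_elem
print(per_component / 1e6, "MB per component")

n_components = 56        # external-code instances in the parallel group
print(n_components * per_component / 1e9, "GB across the parallel group")
```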
Sometimes I just get a general segmentation violation error from PETSc. Other times I get a fairly lengthy traceback that is too long to post here--I'll post just the beginning in case it provides any helpful clues:
*** glibc detected *** python2.7: free(): invalid pointer: 0x00007f21204f5010 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7da26)[0x7f2285f0ca26]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_free+0x4f)[0x7f2269b7754f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x1cbbc)[0x7f2269b87bbc]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x54d6c)[0x7f2269bbfd6c]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x9d31f)[0x7f2269c0831f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_step+0x1bf)[0x7f2269be261f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(pysqlite_step+0x2d)[0x7f2269e4306d]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(_pysqlite_query_execute+0x661)[0x7f2269e404b1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8942)[0x7f2286c6a5a2]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7f2286c6b1ce]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x797e1)[0x7f2286be67e1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x5c54f)[0x7f2286bc954f]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x43)[0x7f2286c60d63]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x136652)[0x7f2286ca3652]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f2286957e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f2285f8236d]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00600000-00601000 rw-p 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00aca000-113891000 rw-p 00000000 00:00 0 [heap]
7f21107d6000-7f2241957000 rw-p 00000000 00:00 0
etc...
It's hard to guess what's going on here, but if it works for a small number of processors and not on larger ones, one guess might be that the issue shows up when you use more than one node and data has to get transferred across the network. I have seen bad MPI compilations that behaved this way: things would work if I kept the job to one node, but broke on more than one.
The traceback shows that you're not even getting through setup, so it's not likely to be anything in your external code or any other component's run method.
If you're running on a cluster, are you compiling your own MPI? You usually need to compile with very specific options/libraries for any kind of HPC library, but most HPC systems provide modules you can load that have MPI pre-compiled.

Is there an equivalent of FSC_COLOR in GDL?

I am trying to run my IDL programs at home in GDL. Among other problems I run into this one:
GDL> bk=fsc_color('Black')
% Ambiguous: Variable is undefined: FSC_COLOR or: Function not found: FSC_COLOR
% Execution halted at: $MAIN$
Is there a way to get colours by name in GDL?
This sounds like you just haven't put fsc_color.pro in your !PATH (or the GDL equivalent). FSC_COLOR is not part of the IDL distribution; you have to install it and tell IDL where it is located.

Turbo Pascal BGI Error: Graphics not initialized (use InitGraph)

I'm making a Turbo Pascal 7.0 program for my class, and it has to run in graphics mode.
A message pops up:
BGI Error: Graphics not initialized (use InitGraph).
I'm already using InitGraph and graph.tpu, and I specified the path as "C:\TP7\BGI".
My OS is Windows 7 and I'm using DOSBox 0.74. I already tried pasting all the files from the BGI folder into BIN.
What should I do?
Since DOS doesn't have system graphics drivers, the BGI files serve that role for BP7.
So in short, use a BGI driver suitable for your video card. The ones supplied with BP7 are very old; there are newer VESA ones that you could try.
AFAIK a third-party BGI needs to be registered explicitly in code, though.
At first I had the "missing Graph.tpu" issue, and later the "use InitGraph" issue too.
After hours of trying (and reading some impolite comments on the internet) I finally got Turbo Pascal 7 running successfully (on Windows 10, x64). In summary, I want to share some secrets:
install "TP(WDB)-7.3.5-Setup.msi" (made by some clever people in Vietnam)
make sure there is the CORRECT PATH to the "BGI" directory in your program code. For example:
driver := Detect;
InitGraph(driver, modus, 'C:\TPWDB\BGI');
(By the way: this is ALL there is to do with InitGraph.)
make sure that under "Options" --> "Directories" in TP7 the CORRECT PATHS are set both to "C:\TPWDB\UNITS" and to your actual working directory, e.g. "C:\TPWDB\myPrograms"
THAT'S IT.
Notes: "Graph.tpu" is (usually) already in "UNITS" (together with "Graph3.tpu", by the way).
There is no need to fiddle with old drivers... :)
Just the correct paths... :)
Hope that helps.
