Unix.error 31 write when using Functory module - unix

I am using the Functory module and I am facing a very bizarre issue with my code.
My code works fine and I have been able to parallelize a play of my game, but when I try to play once again (i.e., launch another parallelized function) it raises a really weird error.
Here is the error:
Fatal error: exception Unix.Unix_error(43, "write", "")
Raised by primitive operation at file "unix.ml", line 252, characters 7-34
Called from file "protocol.ml", line 45, characters 10-32
Re-raised at file "network.ml", line 536, characters 10-11
Called from file "network.ml", line 565, characters 47-80
Called from file "list.ml", line 73, characters 12-15
Called from file "network.ml", line 731, characters 4-27
Called from file "map_fold.ml", line 98, characters 4-242
Called from file "game_ia.ml", line 111, characters 10-54
Called from file "gameplay.ml", line 34, characters 12-48
Called from file "gameplay.ml", line 57, characters 22-37
Called from file "gameplay.ml", line 85, characters 5-22
So I decided to look in the files below to see which primitive operation raised the error:
(unix.ml) external rename : string -> string -> unit = "unix_rename"
(network.ml) Some jid when w.state <> Disconnected -> send w (Protocol.Master.Kill jid)
So, for some reason, it seems that my worker disconnects by itself. I was wondering if any of you have already run into this issue, and what to do in order to solve it?
You can find my game here. The main files involved are game_ia.ml (best_move_parallelized) and gameplay.ml (at the very bottom).
Thank you in advance for your help.

The error you get is (type the following in the toploop)¹:
# (Obj.magic 43: Unix.error);;
- : Unix.error = Unix.EPROTOTYPE
which means: Protocol wrong type for socket. So you have to examine how you initialize your socket.
¹ You can also count the error constructors in unix.mli, knowing that the first one, E2BIG, is 0. Emacs C-u 43 ↓ helps.

Related

How to add custom component to Elyra's list of available airflow operators?

I am trying to make my own component based on KubernetesPodOperator. I am able to define the component and add it to the list of components, but when trying to run it, I get:
Operator 'KubernetesPodOperator' of node 'KubernetesPodOperator' is not configured in the list of available operators. Please add the fully-qualified package name for 'KubernetesPodOperator' to the AirflowPipelineProcessor.available_airflow_operators configuration.
and error:
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/tornado/web.py", line 1704, in _execute
result = await result
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/handlers.py", line 120, in post
response = await PipelineProcessorManager.instance().process(pipeline)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/processor.py", line 134, in process
res = await asyncio.get_event_loop().run_in_executor(None, processor.process, pipeline)
File "/opt/conda/lib/python3.9/asyncio/futures.py", line 284, in __await__
yield self # This tells Task to wait for completion.
File "/opt/conda/lib/python3.9/asyncio/tasks.py", line 328, in __wakeup
future.result()
File "/opt/conda/lib/python3.9/asyncio/futures.py", line 201, in result
raise self._exception
File "/opt/conda/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 122, in process
pipeline_filepath = self.create_pipeline_file(pipeline=pipeline,
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 420, in create_pipeline_file
target_ops = self._cc_pipeline(pipeline, pipeline_name)
File "/opt/conda/lib/python3.9/site-packages/elyra/pipeline/airflow/processor_airflow.py", line 368, in _cc_pipeline
raise ValueError(f"Operator '{component.name}' of node '{operation.name}' is not configured "
ValueError: Operator 'KubernetesPodOperator' of node 'KubernetesPodOperator' is not configured in the list of available operators. Please add the fully-qualified package name for 'KubernetesPodOperator' to the AirflowPipelineProcessor.available_airflow_operators configuration.
After looking through the source code, I can see these lines in processor_airflow.py:
# This specifies the default airflow operators included with Elyra. Any Airflow-based
# custom connectors should create/extend the elyra configuration file to include
# those fully-qualified operator/class names.
available_airflow_operators = ListTrait(
    CUnicode(),
    ["airflow.operators.slack_operator.SlackAPIPostOperator",
     "airflow.operators.bash_operator.BashOperator",
     "airflow.operators.email_operator.EmailOperator",
     "airflow.operators.http_operator.SimpleHttpOperator",
     "airflow.contrib.operators.spark_sql_operator.SparkSqlOperator",
     "airflow.contrib.operators.spark_submit_operator.SparkSubmitOperator"],
    help="""List of available Apache Airflow operator names.
    Operators available for use within Apache Airflow pipelines. These operators must
    be fully qualified (i.e., prefixed with their package names).
    """,
).tag(config=True)
though I am unsure whether this can be extended from the client.
The available_airflow_operators list is a configurable trait in Elyra. You’ll have to add the fully-qualified package name for the KubernetesPodOperator to this list in order for it to create the DAG correctly.
To do so, generate a config file from the command line with jupyter elyra --generate-config. Open the created file and add the following line (you can add it under the PipelineProcessor(LoggingConfigurable) heading if you prefer to keep the file organized):
c.AirflowPipelineProcessor.available_airflow_operators.append("airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator")
Change that string value to the correct package for your use case if it's not the above (make sure that it ends with the class name of the required operator). If you need to add multiple packages, you can also use extend rather than append.
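For example, after running jupyter elyra --generate-config, the relevant part of the generated jupyter_elyra_config.py might end up looking like the sketch below; the second entry is a hypothetical custom operator, and the package paths should match whatever modules actually provide the operators in your Airflow installation:
#------------------------------------------------------------------------------
# PipelineProcessor(LoggingConfigurable) configuration
#------------------------------------------------------------------------------

# Add the fully-qualified class names of the operators your pipelines use.
c.AirflowPipelineProcessor.available_airflow_operators.extend([
    "airflow.providers.cncf.kubernetes.operators.kubernetes_pod.KubernetesPodOperator",
    "my_company.operators.my_custom_operator.MyCustomOperator",  # hypothetical
])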
Edit: here is the link to the relevant documentation

Functions work when in a Sage worksheet directly but not when in a library

I'm taking a class, Intro to Algebraic Cryptology. We're using Sage and CoCalc for everything; this class is the first time I've heard of either. The instructor has provided many convenience functions for our use. I do not like repeatedly copying them into new Sage worksheets in CoCalc, so I put them in a library.
It took some time but I finally learned that to use them I have to do this in Sage:
load_attach_path('/path/to/the/directory')
%attach elliptic_curve_common.sage
Now, there is a function she wrote for us to use called HPSonEC. This function uses the Hellman-Pohlig-Silver exploit for cracking encryption on elliptic curves. What's infuriating is that the function will not work when loaded as above, and I get this error:
Error in lines 5-5
Traceback (most recent call last):
File "/cocalc/lib/python3.8/site-packages/smc_sagews/sage_server.py", line 1230, in execute
exec(
File "", line 1, in <module>
File "<string>", line 298, in HPSonEC
File "<string>", line 263, in listptorder
File "<string>", line 151, in ECTimes
File "sage/rings/rational.pyx", line 2401, in sage.rings.rational.Rational.__mul__ (build/cythonized/sage/rings/rational.c:20911)
return coercion_model.bin_op(left, right, operator.mul)
File "sage/structure/coerce.pyx", line 1248, in sage.structure.coerce.CoercionModel.bin_op (build/cythonized/sage/structure/coerce.c:11304)
raise bin_op_exception(op, x, y)
TypeError: unsupported operand parent(s) for *: 'Rational Field' and 'Abelian group of points on Elliptic Curve defined by y^2 = x^3 + 389787687398479 over Finite Field of size 324638246338947256483756487461'
However, if I take that function, and the others on which it depends, and copy them into my Sage worksheet, they work just fine. Literally, no differences in the code at all. What may be the issue?
When reading code from a worksheet or a .sage file, the Sage preparser is applied. When reading code from a .py file, it is not.
See many questions where this came up:
https://stackoverflow.com/search?q=%5Bsage%5D+preparser
https://ask.sagemath.org/questions/scope:all/sort:activity-desc/page:1/query:preparser/
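A small illustration of the preparser difference (a sketch; the curve below is a made-up example, not the one from the course):
from sage.all import preparse, Integer, EllipticCurve, GF

# In a worksheet or .sage file, the preparser rewrites integer literals:
print(preparse("k = 3"))            # prints: k = Integer(3)

# In a plain .py library no such rewrite happens, so literals stay Python ints
# and arithmetic such as (p - 1)/2 can yield plain Python values that do not
# coerce cleanly with Sage elliptic-curve points.
E = EllipticCurve(GF(101), [2, 3])  # hypothetical small curve
P = E.random_point()
Q = Integer(3) * P                  # wrapping literals explicitly restores coercion
print(Q)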

TypeError: string indices must be integers on ArangoDB

Arango module gives a weird error while accessing databases:
from arango import ArangoClient
client = ArangoClient(hosts='http://localhost:8529/')
sys_db = client.db('_system', username='root', password='root')
sys_db.databases()
below is the error:
Traceback (most recent call last):
  File "", line 1, in
  File "/home/ubuntu/arangovenv/lib/python3.6/site-packages/arango/database.py", line 699, in databases
    return self._execute(request, response_handler)
  File "/home/ubuntu/arangovenv/lib/python3.6/site-packages/arango/api.py", line 66, in _execute
    return self._executor.execute(request, response_handler)
  File "/home/ubuntu/arangovenv/lib/python3.6/site-packages/arango/executor.py", line 82, in execute
    return response_handler(resp)
  File "/home/ubuntu/arangovenv/lib/python3.6/site-packages/arango/database.py", line 697, in response_handler
    return resp.body['result']
TypeError: string indices must be integers
Calling the database module from "packages/arango/database.py" directly gives me the same error.
My env:
1) Ubuntu 16.04
2) python-arango==5.2.1
Any help appreciated.
If you are running it on some server, it may be a server issue; it was in my case, at least. I ran the following to clear the proxy and it worked fine.
export http_proxy=''
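Equivalently, you can clear the proxy settings from within Python before creating the client. A sketch, assuming a system-wide proxy was intercepting the localhost request and turning resp.body into a string (e.g. an HTML error page) instead of parsed JSON:
import os

# Make sure no proxy intercepts requests to the local ArangoDB endpoint.
os.environ["http_proxy"] = ""
os.environ["https_proxy"] = ""
os.environ["no_proxy"] = "localhost,127.0.0.1"

from arango import ArangoClient

client = ArangoClient(hosts="http://localhost:8529/")
sys_db = client.db("_system", username="root", password="root")
print(sys_db.databases())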
As I guessed, resp.body is not the data type the handler expects: line 697 of database.py is expecting a dict but is getting a string. For example:
>>> data = "MyName"
>>> print(data[0])
M
>>> print(data['anything'])
TypeError: string indices must be integers
The first print call gives a result, while the second throws the error.
I hope this might solve your problem.

how to fix 'struct.error: char format requires a bytes object of length 1'

I am trying to use the pyvisa library to write a configuration to a test device. When I try to send the command over, I get the following error from the utility function used to talk to the equipment:
"struct.error: char format requires a bytes object of length 1"
I have traced it down to the code that is throwing the error and am able to reproduce the error using the small code fragment I have included below.
import struct

iter = b'<setup pro'
fullfmt = '<10c'
header = b'#510'
result = header + struct.pack(fullfmt, *iter)
When I do this I expect to get a packed byte structure containing
b'#510<setup pro'
but instead I get the following:
Traceback (most recent call last):
File "/usr/lib/python3.6/code.py", line 91, in runcode
exec(code, self.locals)
File "", line 1, in
struct.error: char format requires a bytes object of length 1
So then I decided that maybe my format was wrong and that it should be an 's' instead of a 'c', so I tried the above code again, changing fullfmt from '<10c' to '<10s', and then got the following error message.
Traceback (most recent call last):
File "/usr/lib/python3.6/code.py", line 91, in runcode
exec(code, self.locals)
File "", line 1, in
struct.error: pack expected 1 items for packing (got 10)
I'm at a bit of a loss. Any help would be greatly appreciated.
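The 'c' format requires each item to be a length-1 bytes object, but iterating over a bytes object in Python 3 yields ints, which is what triggers the first error; the 's' format packs a single string, hence the second. A minimal sketch of both workarounds, reusing the fragment above:
import struct

data = b'<setup pro'
print(list(data)[:3])          # [60, 115, 101] -- iterating bytes gives ints, hence the 'c' error

packed = struct.pack('<10c', *(bytes([b]) for b in data))  # wrap each byte as a length-1 bytes object
packed2 = struct.pack('<10s', data)                        # or pack the whole buffer as one 10-byte string
assert packed == packed2 == data

header = b'#510'
print(header + packed)         # b'#510<setup pro'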

MPI errors "MPI_ERR_IN_STATUS" and "BadPickleGet" in OpenMDAO when running external codes in parallel with many processors

The OpenMDAO problem that I'm running is quite complicated so I don't think it would be helpful to post the entire script. However, the basic setup is that my problem root is a ParallelFDGroup (not actually finite differencing for now--just running the problem once) that contains a few normal components as well as a parallel group. The parallel group is responsible for running 56 instances of an external code (one component per instance of the code). Strangely, when I run the problem with 4-8 processors, everything seems to work fine (sometimes even works with 10-12 processors). But when I try to use more processors (20+), I fairly consistently get the errors below. It provides two tracebacks:
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 746, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48575)
mpi4py.MPI.Exception: MPI_ERR_IN_STATUS: error code in status
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 749, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48609)
File "MPI/msgpickle.pxi", line 191, in mpi4py.MPI.Pickle.loadv (src/mpi4py.MPI.c:41957)
File "MPI/msgpickle.pxi", line 143, in mpi4py.MPI.Pickle.load (src/mpi4py.MPI.c:41248)
cPickle.BadPickleGet: 65
I am running under Ubuntu with OpenMDAO 1.7.3. I have tried running with both mpirun.openmpi (OpenRTE) 1.4.3 and mpirun (Open MPI) 1.4.3 and have gotten the same result in each case.
I found this post that seems to suggest that there is something wrong with the MPI installation. But if this were the case, it strikes me as strange that the problem would work for a small number of processors but not with a larger number. I also can run a relatively simple OpenMDAO problem (no external codes) with 32 processors without incident.
Because the traceback references OpenMDAO unknowns, I wondered if there are limitations on the size of OpenMDAO unknowns. In my case, each external code component has a few dozen array outputs that can be up to 50,000-60,000 elements each. Might that be problematic? Each external code component also reads the same set of input files. Could that be an issue as well? I have tried to ensure that read and write access is defined properly but perhaps that's not enough.
Any suggestions about what might be culprit in this situation are appreciated.
EDIT: I should add that I have tried running the problem without actually running the external codes (i.e. the components in the parallel group are called and set up but the external subprocesses are never actually created) and the problem persists.
EDIT2: I have done some more debugging on this issue and thought I should share the little that I have discovered. If I strip the problem down to only the parallel group containing the external code instances, the problem persists. However, if I reduce the components in the parallel group to basically nothing--just a print function for setup and for solve_nonlinear--then the problem can successfully "run" with a large number of processors. I started adding setup lines back in one by one to see what would create problems. I ran into issues when trying to add many large unknowns to the components. I can actually still add just a single large unknown--for example, this works:
self.add_output('BigOutput', shape=[100000])
But when I try to add too many large outputs like below, I get errors:
for i in range(100):
    outputname = 'BigOutput{0}'.format(i)
    self.add_output(outputname, shape=[100000])
Sometimes I just get a general segmentation violation error from PETSc. Other times I get a fairly lengthy traceback that is too long to post here--I'll post just the beginning in case it provides any helpful clues:
*** glibc detected *** python2.7: free(): invalid pointer: 0x00007f21204f5010 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7da26)[0x7f2285f0ca26]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_free+0x4f)[0x7f2269b7754f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x1cbbc)[0x7f2269b87bbc]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x54d6c)[0x7f2269bbfd6c]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x9d31f)[0x7f2269c0831f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_step+0x1bf)[0x7f2269be261f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(pysqlite_step+0x2d)[0x7f2269e4306d]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(_pysqlite_query_execute+0x661)[0x7f2269e404b1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8942)[0x7f2286c6a5a2]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7f2286c6b1ce]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x797e1)[0x7f2286be67e1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x5c54f)[0x7f2286bc954f]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x43)[0x7f2286c60d63]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x136652)[0x7f2286ca3652]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f2286957e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f2285f8236d]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00600000-00601000 rw-p 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00aca000-113891000 rw-p 00000000 00:00 0 [heap]
7f21107d6000-7f2241957000 rw-p 00000000 00:00 0
etc...
It's hard to guess what's going on here, but if it works for a small number of processors and not for larger ones, one guess might be that the issue shows up when you use more than one node and data has to get transferred across the network. I have seen bad MPI compilations that behaved this way: things would work if I kept the job to one node, but broke on more than one.
The traceback shows that you're not even getting through setup, so it's not likely to be anything in your external code or any other component's run method.
If you're running on a cluster, are you compiling your own MPI? You usually need to compile with very specific options/libraries for any kind of HPC library, but most HPC systems provide modules you can load that have MPI pre-compiled.
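One way to separate a broken MPI installation from an OpenMDAO problem is to mimic the failing allgather pattern with bare mpi4py across the same node count. A rough sketch (the payload size is an arbitrary stand-in for your large unknowns):
# check_mpi.py -- run with the same mpirun and node count as the failing job,
# e.g. `mpirun -np 32 python check_mpi.py`
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
if rank == 0:
    print("MPI vendor: %s" % str(MPI.get_vendor()))

# Pickled allgather of a sizeable per-rank object; the OpenMDAO traceback
# fails inside a comm.allgather call during setup.
payload = np.zeros(100000) + rank
gathered = comm.allgather(payload)
if rank == 0:
    print("allgather OK, received %d arrays" % len(gathered))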

Resources