Functions work when placed directly in a Sage worksheet but not when in a library - sage

I'm taking a class, Intro to Algebraic Cryptology. We're using Sage and CoCalc for everything; this class is the first I've heard of either. The instructor has provided many convenience functions for our use. I do not like repeatedly copying them into new Sage worksheets in CoCalc, so I put them in a library.
It took some time, but I finally learned that to use them I have to do this in Sage:
load_attach_path('/path/to/the/directory')
%attach elliptic_curve_common.sage
Now, there is a function she wrote for us called HPSonEC, which uses the Hellman-Pohlig-Silver exploit to crack encryption on elliptic curves. What's infuriating is that the function will not work when loaded as above, and I get this error:
Error in lines 5-5
Traceback (most recent call last):
File "/cocalc/lib/python3.8/site-packages/smc_sagews/sage_server.py", line 1230, in execute
exec(
File "", line 1, in <module>
File "<string>", line 298, in HPSonEC
File "<string>", line 263, in listptorder
File "<string>", line 151, in ECTimes
File "sage/rings/rational.pyx", line 2401, in sage.rings.rational.Rational.__mul__ (build/cythonized/sage/rings/rational.c:20911)
return coercion_model.bin_op(left, right, operator.mul)
File "sage/structure/coerce.pyx", line 1248, in sage.structure.coerce.CoercionModel.bin_op (build/cythonized/sage/structure/coerce.c:11304)
raise bin_op_exception(op, x, y)
TypeError: unsupported operand parent(s) for *: 'Rational Field' and 'Abelian group of points on Elliptic Curve defined by y^2 = x^3 + 389787687398479 over Finite Field of size 324638246338947256483756487461'
However, if I take that function, and the others on which it depends, and copy them into my Sage worksheet, they work just fine. Literally no differences in the code at all. What might be the issue?

When reading code from a worksheet or a .sage file, the Sage preparser is applied. When reading code from a .py file, it is not.
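For example, you can ask Sage what the preparser does to a line of source (a minimal sketch showing only the rewriting, not the instructor's code):

preparse("k = 3/2")
# -> "k = Integer(3)/Integer(2)"   (an exact Sage Rational)
# Without the preparser, Python evaluates 3/2 itself, so the same
# source line can yield a different type -- which is how identical
# code can raise a coercion TypeError like the one above in one
# context and run fine in the other.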
See many questions where this came up:
https://stackoverflow.com/search?q=%5Bsage%5D+preparser
https://ask.sagemath.org/questions/scope:all/sort:activity-desc/page:1/query:preparser/

Related

Error when exporting from BigQuery to MySQL

I am trying to export a table from BigQuery to Google Cloud MySQL database.
I found this operator called BigQueryToMySqlOperator (documented here https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/bigquery_to_mysql/index.html?highlight=bigquerytomysqloperator#module-airflow.providers.google.cloud.transfers.bigquery_to_mysql)
When I deploy the DAG containing this task to Cloud Composer, the task always fails with the error:
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1113, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1287, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1317, in _execute_task
result = task_copy.execute(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py", line 166, in execute
for rows in self._bq_get_data():
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/transfers/bigquery_to_mysql.py", line 138, in _bq_get_data
response = cursor.get_tabledata(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 2508, in get_tabledata
return self.hook.get_tabledata(*args, **kwargs)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/cloud/hooks/bigquery.py", line 1284, in get_tabledata
rows = self.list_rows(dataset_id, table_id, max_results, selected_fields, page_token, start_index)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/google/common/hooks/base_google.py", line 412, in inner_wrapper
raise AirflowException(
airflow.exceptions.AirflowException: You must use keyword arguments in this methods rather than positional
I don't really understand why it is throwing this error. Can anyone help me figure out what went wrong, or how I should export data from BigQuery to a MySQL DB? Many thanks for your help!
EDIT: My operator code basically looks like this:
transfer_data = BigQueryToMySqlOperator(
    task_id='task_id',
    dataset_table='origin_bq_table',
    mysql_table='dest_table_name',
    replace=True,
)
Based on the stacktrace, you are most likely using apache-airflow-providers-google==2.2.0.
airflow.exceptions.AirflowException: You must use keyword arguments in
this methods rather than positional
This error originates in the GoogleBaseHook and can be traced back to the BigQueryToMySqlOperator:
BigQueryToMySqlOperator > BigQueryHook > BigQueryConnection > BigQueryCursor > get_tabledata
The reason you are getting the AirflowException is that get_tabledata is called as part of the execute method.
Unfortunately, the test for the operator is not comprehensive, since it only checks whether the method was called with the correct parameters.
I think this will require a new release of the Google provider in which BigQueryToMySqlOperator calls list_rows with keyword arguments instead of get_tabledata (which calls list_rows with positional arguments).
I have also opened a GitHub issue in the Airflow repository.
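Until a fixed release is available, one possible interim workaround is to subclass the operator and fetch rows through BigQueryHook.list_rows using keyword arguments, bypassing the deprecated cursor path. This is only a sketch written against the 2.2.0 provider source; the attribute names used here (gcp_conn_id, dataset_id, table_id, selected_fields, batch_size) and the row shape expected by execute() should be verified against the version you actually run:

from airflow.providers.google.cloud.hooks.bigquery import BigQueryHook
from airflow.providers.google.cloud.transfers.bigquery_to_mysql import BigQueryToMySqlOperator

class PatchedBigQueryToMySqlOperator(BigQueryToMySqlOperator):
    def _bq_get_data(self):
        # Same pagination loop as the stock operator, but calling
        # list_rows with keyword arguments only.
        hook = BigQueryHook(gcp_conn_id=self.gcp_conn_id)
        i = 0
        while True:
            rows = hook.list_rows(
                dataset_id=self.dataset_id,
                table_id=self.table_id,
                max_results=self.batch_size,
                selected_fields=self.selected_fields,
                start_index=i * self.batch_size,
            )
            if not rows:
                return
            # execute() hands these rows to MySqlHook.insert_rows;
            # adjust the row shape if your provider version differs.
            yield [row.values() for row in rows]
            i += 1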

Unix.error 31 write when using Functory module

I am using the Functory module and I am facing a very bizarre issue with the code.
My code works fine and I have been able to parallelize a play in my game, but when I try to play once again (launch a parallelized function a second time) it raises a really weird error.
Here is the error:
Fatal error: exception Unix.Unix_error(43, "write", "")
Raised by primitive operation at file "unix.ml", line 252, characters 7-34
Called from file "protocol.ml", line 45, characters 10-32
Re-raised at file "network.ml", line 536, characters 10-11
Called from file "network.ml", line 565, characters 47-80
Called from file "list.ml", line 73, characters 12-15
Called from file "network.ml", line 731, characters 4-27
Called from file "map_fold.ml", line 98, characters 4-242
Called from file "game_ia.ml", line 111, characters 10-54
Called from file "gameplay.ml", line 34, characters 12-48
Called from file "gameplay.ml", line 57, characters 22-37
Called from file "gameplay.ml", line 85, characters 5-22
So I decided to search the files involved to see what primitive operation was raised:
(unix.ml) external rename : string -> string -> unit = "unix_rename"
(network.ml) Some jid when w.state <> Disconnected -> send w (Protocol.Master.Kill jid)
So for some reason, it seems that my worker disconnects by itself. Has anyone already had this issue, and what can I do to solve it?
You can find my game here. The main files involved are game_ia.ml (best_move_parallelized) and gameplay.ml (at the very bottom).
Thank you in advance for your help.
The error you get is (type the following in the toploop)¹:
# (Obj.magic 43: Unix.error);;
- : Unix.error = Unix.EPROTOTYPE
which means: Protocol wrong type for socket. So you have to examine how you initialize your socket.
¹ You can also count the exceptions in unix.mli, knowing that the first one, E2BIG, is 0. Emacs C-u 43 ↓ helps.

MPI errors "MPI_ERR_IN_STATUS" and "BadPickleGet" in OpenMDAO when running external codes in parallel with many processors

The OpenMDAO problem that I'm running is quite complicated so I don't think it would be helpful to post the entire script. However, the basic setup is that my problem root is a ParallelFDGroup (not actually finite differencing for now--just running the problem once) that contains a few normal components as well as a parallel group. The parallel group is responsible for running 56 instances of an external code (one component per instance of the code). Strangely, when I run the problem with 4-8 processors, everything seems to work fine (sometimes even works with 10-12 processors). But when I try to use more processors (20+), I fairly consistently get the errors below. It provides two tracebacks:
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 746, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48575)
mpi4py.MPI.Exception: MPI_ERR_IN_STATUS: error code in status
Traceback (most recent call last):
File "opt_5mw.py", line 216, in <module>
top.setup() #call setup
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/problem.py", line 644, in setup
self.root._setup_vectors(param_owners, impl=self._impl, alloc_derivs=alloc_derivs)
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/group.py", line 476, in _setup_vectors
self._u_size_lists = self.unknowns._get_flattened_sizes()
File "/home/austinherrema/.local/lib/python2.7/site-packages/openmdao/core/petsc_impl.py", line 204, in _get_flattened_sizes
return self.comm.allgather(sizes)
File "MPI/Comm.pyx", line 1291, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109194)
File "MPI/msgpickle.pxi", line 749, in mpi4py.MPI.PyMPI_allgather (src/mpi4py.MPI.c:48609)
File "MPI/msgpickle.pxi", line 191, in mpi4py.MPI.Pickle.loadv (src/mpi4py.MPI.c:41957)
File "MPI/msgpickle.pxi", line 143, in mpi4py.MPI.Pickle.load (src/mpi4py.MPI.c:41248)
cPickle.BadPickleGet: 65
I am running under Ubuntu with OpenMDAO 1.7.3. I have tried running with both mpirun.openmpi (OpenRTE) 1.4.3 and mpirun (Open MPI) 1.4.3 and have gotten the same result in each case.
I found this post that seems to suggest that there is something wrong with the MPI installation. But if this were the case, it strikes me as strange that the problem would work for a small number of processors but not with a larger number. I also can run a relatively simple OpenMDAO problem (no external codes) with 32 processors without incident.
Because the traceback references OpenMDAO unknowns, I wondered if there are limitations on the size of OpenMDAO unknowns. In my case, each external code component has a few dozen array outputs that can be up to 50,000-60,000 elements each. Might that be problematic? Each external code component also reads the same set of input files. Could that be an issue as well? I have tried to ensure that read and write access is defined properly but perhaps that's not enough.
Any suggestions about what might be the culprit in this situation are appreciated.
EDIT: I should add that I have tried running the problem without actually running the external codes (i.e. the components in the parallel group are called and set up but the external subprocesses are never actually created) and the problem persists.
EDIT2: I have done some more debugging on this issue and thought I should share the little that I have discovered. If I strip the problem down to only the parallel group containing the external code instances, the problem persists. However, if I reduce the components in the parallel group to basically nothing--just a print function for setup and for solve_nonlinear--then the problem can successfully "run" with a large number of processors. I started adding setup lines back in one by one to see what would create problems. I ran into issues when trying to add many large unknowns to the components. I can actually still add just a single large unknown--for example, this works:
self.add_output('BigOutput', shape=[100000])
But when I try to add too many large outputs like below, I get errors:
for i in range(100):
    outputname = 'BigOutput{0}'.format(i)
    self.add_output(outputname, shape=[100000])
Sometimes I just get a general segmentation violation error from PETSc. Other times I get a fairly lengthy traceback that is too long to post here--I'll post just the beginning in case it provides any helpful clues:
*** glibc detected *** python2.7: free(): invalid pointer: 0x00007f21204f5010 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x7da26)[0x7f2285f0ca26]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_free+0x4f)[0x7f2269b7754f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x1cbbc)[0x7f2269b87bbc]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x54d6c)[0x7f2269bbfd6c]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(+0x9d31f)[0x7f2269c0831f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/../../libsqlite3.so.0(sqlite3_step+0x1bf)[0x7f2269be261f]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(pysqlite_step+0x2d)[0x7f2269e4306d]
/home/austinherrema/miniconda2/lib/python2.7/lib-dynload/_sqlite3.so(_pysqlite_query_execute+0x661)[0x7f2269e404b1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x8942)[0x7f2286c6a5a2]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalFrameEx+0x86c3)[0x7f2286c6a323]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_EvalCodeEx+0x89e)[0x7f2286c6b1ce]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x797e1)[0x7f2286be67e1]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x5c54f)[0x7f2286bc954f]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyObject_Call+0x53)[0x7f2286bb6dc3]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(PyEval_CallObjectWithKeywords+0x43)[0x7f2286c60d63]
/home/austinherrema/miniconda2/bin/../lib/libpython2.7.so.1.0(+0x136652)[0x7f2286ca3652]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f2286957e9a]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f2285f8236d]
======= Memory map: ========
00400000-00401000 r-xp 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00600000-00601000 rw-p 00000000 08:03 9706352 /home/austinherrema/miniconda2/bin/python2.7
00aca000-113891000 rw-p 00000000 00:00 0 [heap]
7f21107d6000-7f2241957000 rw-p 00000000 00:00 0
etc...
It's hard to guess what's going on here, but if it works for a small number of processors and not for larger ones, one guess is that the issue shows up when you use more than one node and data has to be transferred across the network. I have seen bad MPI compilations behave this way: things would work as long as the job stayed on one node, but broke across more than one.
The traceback shows that you're not even getting through setup, so it's not likely to be anything in your external code or any other component's run method.
If you're running on a cluster, are you compiling your own MPI? You usually need to compile it with very specific options/libraries for any kind of HPC system. Most HPC systems provide modules you can load that have MPI pre-compiled.
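One quick way to test the multi-node hypothesis, independent of OpenMDAO, is a bare mpi4py allgather of comparably sized picklable objects at the same processor counts (a minimal sketch; the payload size is a stand-in for your unknowns metadata):

# test_allgather.py -- run with: mpirun -np 32 python test_allgather.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each rank contributes a large picklable payload, loosely mimicking
# the exchange that fails in _get_flattened_sizes.
payload = [[comm.rank] * 100000 for _ in range(10)]
gathered = comm.allgather(payload)
if comm.rank == 0:
    print("allgather OK across %d ranks" % comm.size)

If this fails with the same MPI_ERR_IN_STATUS or pickle errors once the job spans several nodes, the MPI installation itself is the likely culprit.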

Steps to load a .txt file and convert the strings in it to usable data in a Sage notebook

I am on a Mac with OS 10.11.6, and I'm learning the notebook interface for Sage 7.2. As a start, in a Sage worksheet I created a .txt file containing the string [1, 2, 3] and saved it. I can open the text file directly and verify its contents just by clicking on it, but I can't yet do this in Sage.
I'd like to be able to open it and convert the string to a usable Sage object. I'd appreciate explicit instructions, assuming nothing at all about my Sage background. Thank you.
Note: The procedure for doing this given in the Sage documentation under "Saving and Loading Individual Objects" doesn't work in my environment (specs above). I do A = [1, 2, 3]. Then I do save(A, 'A') and Sage returns a hot link for A.sobj. Then I hit the save-and-quit button. Then I hit "sign out." Then I sign back in, go to the worksheet where I did the steps I just described, and do A = load('A'). This is what Sage says:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "_sage_input_3.py", line 10, in <module>
exec compile(u'open("___code___.py","w").write("# -*- coding: utf-8 -*-\\n" + _support_.preparse_worksheet_cell(base64.b64decode("QSA9IGxvYWQoJ0EnKQ=="),globals())+"\\n"); execfile(os.path.abspath("___code___.py"))
File "", line 1, in <module>
File "/private/var/folders/7n/t9k4hfyn44s2qp7wxt479kn80000gn/T/tmpEa1OkK/___code___.py", line 2, in <module>
exec compile(u"A = load('A')" + '\n', '', 'single')
File "", line 1, in <module>
File "sage/structure/sage_object.pyx", line 1032, in sage.structure.sage_object.load (build/cythonized/sage/structure/sage_object.c:11594)
IOError: [Errno 2] No such file or directory: 'A.sobj'
I found an answer in Finch's book. First a quote:
“We used a module called os from the Python standard library module to help us write code that can run on multiple platforms. A text file must have a special character to denote the end of each line in the file. Unfortunately, for historical reasons, each family of operating systems (Mac, Windows, and UNIX) uses a different end-of-line character. The os module has a constant called linesep that contains the correct character for the platform that the code is run on. We used the statement import os to make the module available, and accessed the constant using the syntax os.linesep. We also used the function os.path.join to join the path to the file name with the correct character for the current operating system.”
Excerpt From: Craig Finch. “Sage Beginner's Guide.”
Example using a file named "File2.txt" containing a single text character, '1':
import os
path = '/Users/barrybrent/.sage/sage_notebook.sagenb/home/store/2/21/212/2123/admin/19/data/'
fileName = 'File2.txt'
times = []
text_file = open(os.path.join(path, fileName), 'r')
line = text_file.readline()
(Comment: line is just a character string. To convert it to a Sage object useful in computations:)
elements = line.split(',')
times.append(float(elements[0].strip()))
(Comment: evaluate:)
times[0]
(Comment: Sage says "1.0". Now can we do arithmetic with times[0]?)
times[0]+1
Sage says "2.0"
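For the original [1, 2, 3] file there is a more direct route: read the whole string and hand it to Sage's sage_eval (a sketch; the file name 'File1.txt' is a placeholder for wherever you saved that string):

import os
path = '/Users/barrybrent/.sage/sage_notebook.sagenb/home/store/2/21/212/2123/admin/19/data/'
with open(os.path.join(path, 'File1.txt'), 'r') as text_file:
    contents = text_file.read()    # the string "[1, 2, 3]"
A = sage_eval(contents)            # the Sage list [1, 2, 3]
A[0] + 1                           # returns 2, so it is usable in arithmetic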

Unexpected error reading GML graph

I have downloaded the GML file which contains the dolphins social network.
Some time ago I did some analysis on that network running Python 3.4 and networkx 1.9 on a Windows 7 machine, but now I am running on an Arch Linux machine (with the same version of Python but with networkx 1.10) and found an issue when I tried to read the file.
This is the code used to read the file:
import networkx as nx
nx.read_gml("dolphins.gml")
And this is the stack trace of the error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 2, in read_gml
File "/usr/lib/python3.4/site-packages/networkx/utils/decorators.py",line 220, in _open_file
result = func(*new_args, **kwargs)
File "/usr/lib/python3.4/site-packages/networkx/readwrite/gml.py", line 210, in read_gml
G = parse_gml_lines(filter_lines(path), label, destringizer)
File "/usr/lib/python3.4/site-packages/networkx/readwrite/gml.py", line 383, in parse_gml_lines
graph = parse_graph()
File "/usr/lib/python3.4/site-packages/networkx/readwrite/gml.py", line 372, in parse_graph
curr_token, dct = parse_kv(next(tokens))
File "/usr/lib/python3.4/site-packages/networkx/readwrite/gml.py", line 323, in tokenize
(line[pos:], lineno + 1, pos + 1))
networkx.exception.NetworkXError: cannot tokenize 'graph' at (1, 1)
Are you able to read the file? Has anyone experienced a similar issue, or know what is generating the error?
Thank you in advance!
In newer versions of networkx, the GML file must follow a more specific format. The problem with dolphins.gml is that there must not be any line break before the opening square brackets. For example:
Wrong format:
graph
[
directed 0
node
[
id 0
label "Beak"
]
.
.
.
Correct format:
graph [
directed 0
node [
id 0
label "Beak"
]
.
.
.
It does not matter how many spaces there are before the square bracket, as long as there is at least one and no line break.
What I ended up doing was using a regular expression to get rid of the whitespace before the opening square brackets. The following regex worked for me:
\s+\[
Just replace each match with " [" (there has to be at least one space before the bracket), as the code below shows.
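In Python, that substitution looks like this (a sketch using the dolphins.gml file from the question):

import re
import networkx as nx

with open('dolphins.gml') as f:
    gml = f.read()

# Collapse any whitespace (including line breaks) before '[' into a
# single space, producing the "graph [" form networkx 1.10 expects.
fixed = re.sub(r'\s+\[', ' [', gml)

with open('dolphins_fixed.gml', 'w') as f:
    f.write(fixed)

G = nx.read_gml('dolphins_fixed.gml')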
Also keep in mind that every node has to have a unique label.
Hope it helped.
It worked after downgrading networkx from 1.10 to 1.9.1.
Hope this answer can help someone else.
