GET request failing for RESTful API call - python-requests

I'm writing a program that runs inside Fusion 360. Fusion 360 uses Python as its scripting language and provides its own Python runtime; when my program is executed, Fusion 360 loads it into that Python and runs it. Because of that, I don't have any control over the Python environment. It's possible to use additional packages as long as they're local to my program and imported using relative paths, but I prefer to stick to the Python standard library to avoid the extra work of redistributing more components and their dependencies.
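(For reference, the vendoring approach I'm avoiding would look roughly like this — a sketch, assuming a bundled lib folder sitting next to the script; the folder name is my own invention:)

import os
import sys

# Hypothetical: make a "lib" folder shipped alongside this script importable,
# so vendored packages can be used without touching Fusion 360's Python.
lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'lib')
if lib_path not in sys.path:
    sys.path.insert(0, lib_path)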
Fusion 360 ships Python 3.5.3, and I'm trying to make some RESTful API calls. On Windows everything works as expected, but on Mac it fails. I initially tried to use requests and assumed the failure was in the requests package, but someone suggested using urllib instead to stick with the standard library, and it fails for the same reason.
The code works for most ordinary websites (Google in the example below) but fails for others. In my testing it always fails when the endpoint is a REST API, but it also fails for github.com, so that may be a red herring. This is an area where I have very little experience, and I could use some suggestions on how to debug and resolve the issue.
import traceback
import urllib.request


def run(context):
    try:
        # url = 'https://github.com'
        # url = 'https://google.com'
        url = 'https://api.github.com'
        req = urllib.request.urlopen(url)
        print(req.read())
        req.close()
    except:
        print(traceback.format_exc())
As I said before, this works on Windows but fails on Mac. Here is the traceback from the failure:
Traceback (most recent call last):
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 1254, in do_open
    h.request(req.get_method(), req.selector, req.data, headers)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 1107, in request
    self._send_request(method, url, body, headers)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 1152, in _send_request
    self.endheaders(body)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 1103, in endheaders
    self._send_output(message_body)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 934, in _send_output
    self.send(msg)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 877, in send
    self.connect()
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/http/client.py", line 1261, in connect
    server_hostname=server_hostname)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/ssl.py", line 385, in wrap_socket
    _context=self)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/ssl.py", line 760, in __init__
    self.do_handshake()
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/ssl.py", line 996, in do_handshake
    self._sslobj.do_handshake()
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/ssl.py", line 641, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:720)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/admin/Dropbox/Scripts/RestfulTest/RestfulTest.py", line 23, in run
    req = urllib.request.urlopen(url)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 163, in urlopen
    return opener.open(url, data, timeout)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 466, in open
    response = self._open(req, data)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 484, in _open
    '_open', req)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 444, in _call_chain
    result = func(*args)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 1297, in https_open
    context=self._context, check_hostname=self._check_hostname)
  File "/Users/admin/Library/Application Support/Autodesk/webdeploy/production/a71844880b03ed71d4a9c581cd70965fd6323ebc/Autodesk Fusion 360.app/Contents/Frameworks/Python.framework/Versions/Current/lib/python3.5/urllib/request.py", line 1256, in do_open
    raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:720)>
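In case it helps with debugging, here is a minimal check of the TLS capabilities of the embedded Python (standard library only; the focus on TLS 1.2 is my assumption, since TLSV1_ALERT_PROTOCOL_VERSION usually means the server rejected the client's TLS version):

import ssl

# Which OpenSSL is this Python linked against?
print(ssl.OPENSSL_VERSION)

# If this prints False, the bundled OpenSSL predates TLS 1.2, which api.github.com
# requires; that would explain the handshake failure on this machine.
print(hasattr(ssl, 'PROTOCOL_TLSv1_2'))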

Related

Script compiled with PyInstaller is missing a .dll file; when the file is manually copied into the program's folder it just dies

I have a Python script which is basically a graphical interface (PySimpleGUI) to a MySQL database.
I am working in Python 3.8; my dependencies are:
PySimpleGUI 4.55.1
sqlalchemy 1.3.20
pymysql 1.0.2
pandas 1.1.3
regex 2020.10.15
pillow 8.0.1
The code works and I'd like to compile it to .exe to distribute it to users in my organization.
I tried to compile it with:
pyinstaller -D .\db_interface_v3.6.1_release.py --debug=imports
However, pyinstaller throws some errors when compiling:
201667 INFO: Building COLLECT COLLECT-00.toc
Traceback (most recent call last):
File "c:\users\spit\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\spit\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\Spit\anaconda3\Scripts\pyinstaller.exe\__main__.py", line 7, in <module>
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\__main__.py", line 124, in run
run_build(pyi_config, spec_file, **vars(args))
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\__main__.py", line 58, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\building\build_main.py", line 782, in main
build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\building\build_main.py", line 714, in build
exec(code, spec_namespace)
File "C:\Users\Spit\Desktop\DIPEx db parser\db_interface_v3.6.1_release.spec", line 37, in <module>
coll = COLLECT(exe,
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\building\api.py", line 818, in __init__
self.__postinit__()
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\building\datastruct.py", line 155, in __postinit__
self.assemble()
File "c:\users\spit\anaconda3\lib\site-packages\PyInstaller\building\api.py", line 866, in assemble
shutil.copy(fnm, tofnm)
File "c:\users\spit\anaconda3\lib\shutil.py", line 415, in copy
copyfile(src, dst, follow_symlinks=follow_symlinks)
File "c:\users\spit\anaconda3\lib\shutil.py", line 261, in copyfile
with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\Spit\\Desktop\\DIPEx db parser\\dist\\db_interface_v3.6.1_release\\share\\jupyter\\lab\\staging\\node_modules\\.cache\\terser-webpack-plugin\\content-v2\\sha512\\2e\\ba\\cfce62ec1f408830c0335f2b46219d58ee5b068473e7328690e542d2f92f2058865c600d845a2e404e282645529eb0322aa4429a84e189eb6b58c1b97c1a'
If I try to run the compiled exe, I get an error regarding a specific .dll:
INTEL MKL ERROR: The specified module could not be found. mkl_intel_thread.dll.
Intel MKL FATAL ERROR: Cannot load mkl_intel_thread.dll.
If I take the missing .dll from my Anaconda environment and copy it into the program's folder, when I try to run the .exe again it just dies without further messages:
import 'numpy.ma' # <pyimod03_importers.FrozenImporter object at 0x000001F6A455BEE0>
PS C:\Users\Spit\Desktop\DIPEx db parser\dist\db_interface_v3.6.1_release>
Any idea on how to sort it out?
Thanks!
Sorted out. For future reference, if someone stumbles upon this question: the error is caused by Windows' MAX_PATH limitation, which prevents PyInstaller from finding all the necessary files.
To disable said limitation: https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd
Kudos to https://github.com/bwoodsend
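If it's useful, the registry value that page describes can also be set from Python — a sketch using the standard winreg module (needs an elevated prompt; applies to Windows 10 version 1607 and later):

import winreg

# Open the filesystem policy key and enable Win32 long path support.
key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r'SYSTEM\CurrentControlSet\Control\FileSystem',
    0,
    winreg.KEY_SET_VALUE,
)
winreg.SetValueEx(key, 'LongPathsEnabled', 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)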

Airflow scheduler gets stuck

I have set up Airflow on Windows 10 WSL with Python 3.6.8. I started the scheduler using the airflow scheduler command, but it failed with the following error:
[2020-04-28 19:24:06,500] {base_job.py:205} ERROR - SchedulerJob heartbeat got an exception
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlite3.OperationalError: disk I/O error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/jobs/base_job.py", line 173, in heartbeat
previous_heartbeat = self.latest_heartbeat
File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
next(self.gen)
File "/home/akshay/.local/lib/python3.6/site-packages/airflow/utils/db.py", line 45, in create_session
session.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1036, in commit
self.transaction.commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 507, in commit
t[1].commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1736, in commit
self._do_commit()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1767, in _do_commit
self.connection._commit_impl()
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 757, in _commit_impl
self._handle_dbapi_exception(e, None, None, None, None)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1482, in _handle_dbapi_exception
sqlalchemy_exception, with_traceback=exc_info[2], from_=e
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 178, in raise_
raise exception
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 755, in _commit_impl
self.engine.dialect.do_commit(self.connection)
File "/home/akshay/.local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 543, in do_commit
dbapi_connection.commit()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) disk I/O error
(Background on this error at: http://sqlalche.me/e/e3q8)
What is the reason for this failure, and what can I do to resolve it? My Airflow scheduler had been running fine for the last 10 days.
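If anyone wants to rule out corruption of the metadata database itself, a minimal sqlite check might look like this (the path is an assumption — use whatever sql_alchemy_conn points to in your airflow.cfg):

import sqlite3

# Hypothetical default location of the Airflow metadata DB on this setup.
conn = sqlite3.connect('/home/akshay/airflow/airflow.db')
print(conn.execute('PRAGMA integrity_check;').fetchone())
conn.close()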

pip search not working on pypi repository

I have set up a PyPI repository in Artifactory, but I cannot search uploaded packages with pip.
I created a PyPI repo in Artifactory and pushed two versions of an example package, which worked perfectly. The package and its two versions are present in Artifactory under the correct repo. Running pip search for this package results in a timeout.
Uploading the packages didn't present any issues at all.
I have tried without /simple as well.
pip search example -i http://artifactory_server/api/pypi/pypi-repo/simple
produces the following:
Exception:
Traceback (most recent call last):
File "c:\python27\lib\site-packages\pip\_internal\cli\base_command.py", line 179, in main
status = self.run(options, args)
File "c:\python27\lib\site-packages\pip\_internal\commands\search.py", line 48, in run
pypi_hits = self.search(query, options)
File "c:\python27\lib\site-packages\pip\_internal\commands\search.py", line 65, in search
hits = pypi.search({'name': query, 'summary': query}, 'or')
File "c:\python27\lib\xmlrpclib.py", line 1243, in __call__
return self.__send(self.__name, args)
File "c:\python27\lib\xmlrpclib.py", line 1602, in __request
verbose=self.__verbose
File "c:\python27\lib\site-packages\pip\_internal\download.py", line 823, in request
headers=headers, stream=True)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 581, in post
return self.request('POST', url, data=data, json=json, **kwargs)
File "c:\python27\lib\site-packages\pip\_internal\download.py", line 403, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "c:\python27\lib\site-packages\pip\_vendor\requests\adapters.py", line 529, in send
raise ReadTimeout(e, request=request)
ReadTimeout: HTTPConnectionPool(host='artifactory_server', port=80): Read timed out. (read timeout=15)
Any ideas are most welcome.
Thanks.
Since you are not using HTTPS, you need to add the following option at the end of your command:
--trusted-host artifactory_server
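So the full command would be, for example:

pip search example -i http://artifactory_server/api/pypi/pypi-repo/simple --trusted-host artifactory_server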

Apache Airflow - was working fine, now says log file isn't local and exceptions are popping up

So it looks like my install of Apache Airflow on a Google Compute Engine instance broke down. Everything was working great, and then two days ago all the DAG runs started showing up stuck in a running state. I am using the LocalExecutor.
When I try to look at the log, I get this error:
*** Log file isn't local.
*** Fetching here: http://:8793/log/collector/aa_main_combined_collector/2017-12-15T09:00:00
*** Failed to fetch log file from worker.
I didn't touch a setting anywhere. I looked through all the config files and scanned the logs, and I see this error:
[2017-12-16 20:08:42,558] {jobs.py:355} DagFileProcessor0 ERROR - Got an exception! Propagating...
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 347, in helper
pickle_dags)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1584, in process_file
self._process_dags(dagbag, dags, ti_keys_to_schedule)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 1173, in _process_dags
dag_run = self.create_dag_run(dag)
File "/usr/local/lib/python3.4/dist-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/usr/local/lib/python3.4/dist-packages/airflow/jobs.py", line 763, in create_dag_run
last_scheduled_run = qry.scalar()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2843, in scalar
ret = self.one()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2814, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2855, in iter
return self._execute_and_instances(context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2878, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1405, in _handle_dbapi_exception
util.reraise(*exc_info)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.4/dist-packages/airflow/bin/cli.py", line 69, in sigint_handler
sys.exit(0)
SystemExit: 0
Any thoughts out there?
I solved this problem, though in doing so I discovered another problem.
Long and short of it: as soon as I manually started the scheduler, everything worked again. It appears the problem was that the scheduler did not get restarted correctly after a system reboot.
I have the scheduler running through systemd. The webserver .service works fine. However, I notice that the scheduler .service continually restarts; it appears there is an issue there I need to resolve. This part of it is solved for now.
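To see why the scheduler unit keeps restarting, the systemd journal is the first place to look (the unit name here is a placeholder — use whatever yours is called):

sudo systemctl status airflow-scheduler.service
sudo journalctl -u airflow-scheduler.service -n 100 --no-pager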
Look at the log URL and check whether it ends with a date containing the special character +:
&execution_date=2018-02-23T08:00:00+00:00
This was fixed here.
You can replace the + with a -, or percent-encode all the special characters; in my case:
&execution_date=2018-02-23T08%3A00%3A00%2B00%3A00
This happens here.
The FileTaskHandler cannot load the log from the local disk, so it tries to fetch it from the worker.
Another thing that could cause this error is the airflow/logs folder, or the subfolders inside it, being missing or excluded.

3.3 -> 4.1 migration fails, busted RAMCache AttributeError: 'RAMCache' object has no attribute '_cacheId'

After a 3.3 -> 4.1 migration, I get an exception on the resulting page:
File "/fast/buildout-cache/eggs/plone.app.viewletmanager-2.0.2-py2.6.egg/plone/app/viewletmanager/manager.py", line 85, in render
return u'\n'.join([viewlet.render() for viewlet in self.viewlets])
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/volatile.py", line 281, in replacement
cached_value = cache.get(key, _marker)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 124, in get
return self.__getitem__(key)
File "/fast/buildout-cache/eggs/plone.memoize-1.1.1-py2.6.egg/plone/memoize/ram.py", line 166, in __getitem__
MARKER)
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 107, in query
s = self._getStorage()
File "/fast/buildout-cache/eggs/zope.ramcache-1.0-py2.6.egg/zope/ramcache/ram.py", line 122, in _getStorage
cacheId = self._cacheId
AttributeError: 'RAMCache' object has no attribute '_cacheId'
It looks like the RAMCache object is in an invalid state.
Before this, I'm also seeing the following in the logs:
2012-06-21 16:42:54 INFO plone.app.upgrade Ran upgrade step: Miscellaneous
2012-06-21 16:42:54 INFO plone.app.upgrade End of upgrade path, migration has finished
2012-06-21 16:42:54 INFO plone.app.upgrade Your Plone instance is now up-to-date.
2012-06-21 16:43:02 ERROR txn.4553572352 Error in tpc_abort() on manager <Connection at 10be48490>
Traceback (most recent call last):
File "/fast/buildout-cache/eggs/transaction-1.1.1-py2.6.egg/transaction/_transaction.py", line 484, in _cleanup
rm.tpc_abort(self)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZODB/Connection.py", line 730, in tpc_abort
self._storage.tpc_abort(transaction)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ClientStorage.py", line 1157, in tpc_abort
self._server.tpc_abort(id(txn))
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/ServerStub.py", line 255, in tpc_abort
self.rpc.call('tpc_abort', id)
File "/fast/buildout-cache/eggs/ZODB3-3.10.5-py2.6-macosx-10.7-x86_64.egg/ZEO/zrpc/connection.py", line 768, in call
raise inst # error raised by server
OSError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdc/0x55/0x00FWigqp.tmp-'
2012-06-21 16:43:03 ERROR Zope.SiteErrorLog 1340286183.10.000607291180815 http://localhost:9666/xxx/##plone-upgrade
Traceback (innermost last):
Module ZPublisher.Publish, line 134, in publish
Module Zope2.App.startup, line 301, in commit
Module transaction._manager, line 89, in commit
Module transaction._transaction, line 329, in commit
Module transaction._transaction, line 446, in _commitResources
Module ZODB.Connection, line 781, in tpc_vote
Module ZEO.ClientStorage, line 1098, in tpc_vote
Module ZEO.ClientStorage, line 929, in _check_serials
IOError: [Errno 2] No such file or directory: '/fast/xxx-2012/var/blobstorage/0x00/0x00/0x00/0x00/0x00/0x07/0xdd/0xca/0x009kWNYQ.tmp-'
Why might this happen?
Any pointers on how to reinitialize the RAMCache object?
The RAMCache is first referenced by FaviconViewlet, which uses the @memoize decorator, and that is what leads to this error.
Well, your migration obviously did not complete successfully, based on the traceback. So I would focus on figuring out why it failed, rather than working around things like the broken RAMCache, which are likely a result of the migration not having run to completion.
The traceback indicates that it broke while trying to abort the transaction... so you'll probably need to do some debugging to determine what caused it to try to abort, since that's not indicated in the logs you pasted.
