Can't export a workflow from portal_setup - plone

I've created a new workflow in portal_workflow (let's name it my_workflow), and I'm trying to export it using portal_setup without success. I've done this in the past with other workflows, and it worked like a charm. But, somehow, this last workflow I created can't be exported.
When exported, the newly created workflow is listed in workflows.xml, but workflows/my_workflow/ and workflows/my_workflow/definition.xml don't exist. The other workflows (including some custom ones) are exported fine.
Is there anything I'm unaware of that is preventing my new workflow from being exported? portal_catalog, something else?
EDIT: I get this error when trying to extract the files. Is this related? Only my_workflow is missing from my tar.gz.
gzip: stdin: invalid compressed data--length error
tar: Skipping to next header
tar: Child returned status 1
tar: Exiting with failure status due to previous errors

It seems the problem lies in having non-ASCII characters in any field (title, description, whatever) of your workflow definition.
I did some debugging in eggs/Products.DCWorkflow-2.1.2-py2.4.egg/Products/DCWorkflow/exportimport.py and eggs/Products.GenericSetup-1.4.5-py2.4.egg/Products/GenericSetup/utils.py: my_workflow itself is exported correctly, but the resulting tar.gz ends up with errors.
When I removed all non-ASCII characters from the workflows, the export went through without errors, and workflows/my_workflow was present.
Does anyone know why this is? Am I correct in my assumptions?
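If you want to track down which fields are the culprits, one option is to walk the workflow's states, transitions, worklists and variables from a debug session and flag any title or description that isn't plain ASCII. This is only a minimal sketch (Python 2, to match the py2.4 eggs above); it assumes portal is bound to your Plone site (e.g. in a bin/instance debug session) and uses my_workflow as the workflow id from the question:

wf = portal.portal_workflow.getWorkflowById('my_workflow')

def is_ascii(value):
    # Works for both str and unicode under Python 2.
    try:
        unicode(value or '').encode('ascii')
        return True
    except UnicodeError:
        return False

# states/transitions/worklists/variables are sub-folders of a DCWorkflow definition.
for folder_id in ('states', 'transitions', 'worklists', 'variables'):
    folder = getattr(wf, folder_id, None)
    if folder is None:
        continue
    for obj in folder.objectValues():
        for field in ('title', 'description'):
            if not is_ascii(getattr(obj, field, '')):
                print folder_id, obj.getId(), field

Anything it prints is a field to clean up before re-running the export.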

nbsphinx causes build to fail when building Jupyter Notebooks

Details
I am getting a Read the Docs build failure that I don't understand. The assertion on 'Verbatim' at line 2151 of nbsphinx.py is causing the build failure, so the build fails when I try to include the Jupyter Notebook tutorials I created. I compared the current versions of the tutorials to previous versions that had not caused the build to fail, and I can't find a difference that could account for the current failure.
Read the Docs project URL: lofti_gaia
Build URL: https://github.com/logan-pearce/lofti_gaia
Read the Docs username: logan-pearce
Expected Result
A passing build including *.ipynb files
Actual Result
The build failed at line 2151 of nbsphinx.py because the assertion on 'Verbatim' failed.
Terminal output:
Running Sphinx v4.1.2
loading translations [en]... done
making output directory... done
WARNING: html_static_path entry '_static' does not exist
loading pickled environment... done
building [mo]: targets for 0 po files that are out of date
building [latex]: all documents
updating environment: 0 added, 0 changed, 0 removed
looking for now-outdated files... none found
processing lofti_gaia.tex... index installation tutorials/QuickStart tutorials/Tutorial api lofti loftitools
resolving references...
done
writing... failed
Exception occurred:
File "/home/docs/checkouts/readthedocs.org/user_builds/lofti-gaia/conda/latest/lib/python3.7/site-packages/nbsphinx.py", line 2151, in depart_codearea_latex
assert 'Verbatim' in lines[0]
AssertionError
The full traceback has been saved in /tmp/sphinx-err-x1h83s3m.log, if you want to report the issue to the developers.
Please also report this if it was a user error, so that a better error message can be provided next time.
A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
According to nbsphinx issue 584 on GitHub (https://github.com/spatialaudio/nbsphinx/issues/584), this is due to a compatibility issue with Sphinx 4.1.0. It can be worked around by requiring Sphinx version 4.0.2.
So I included sphinx==4.0.2 in my requirements.txt file, which now looks like:
numpy
matplotlib
astropy>=4.0.1.post1
astroquery>=0.4
sphinx==4.0.2
ipython==7.19.0
nbsphinx>=0.8.6
and the build passes.
I have encountered the same issue. I did not solve it while keeping the .ipynb format, but converting the Jupyter notebook to .rst format works.
Maybe it helps.
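If you go the .rst route, nbconvert can do the conversion either from the command line (jupyter nbconvert --to rst your_notebook.ipynb) or from Python. Below is a minimal sketch of the latter; the notebook path is only an illustration based on the tutorials/QuickStart entry in the build log above:

import nbformat
from nbconvert import RSTExporter

# Read the notebook and render it as reStructuredText.
nb = nbformat.read('docs/tutorials/QuickStart.ipynb', as_version=4)
body, resources = RSTExporter().from_notebook_node(nb)

with open('docs/tutorials/QuickStart.rst', 'w') as f:
    f.write(body)
# Note: images extracted from cell outputs end up in resources['outputs']
# and need to be written out alongside the .rst file.

The generated .rst file can then be listed in the toctree in place of the .ipynb, so nbsphinx (and its Sphinx version constraints) is no longer involved for that page.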

R: Suppressing renv project startup message

Typically, when starting up an renv project, one gets a message that looks something like this:
* Project '~/path/to/project' loaded. [renv 0.10.0]
I am trying to suppress this message, particularly when non-interactively running a script from this project.
Checking the package help, I noted ?config, i.e. User-Level Configuration of renv. Specifically, I found synchronized.check, which the documentation says controls how renv lockfile synchronization is checked (this is also printed to the console). However, I couldn't find how to control the main startup message. I also checked ?settings but found nothing relevant either.
I've tried fiddling with options and Sys.setenv without luck so far.
So, is it possible to suppress the message, seeing that the renv script activate.R controls how the package itself is loaded?
You are correct that there isn't a specific documented way to configure this in renv. For now, you can set:
options(renv.verbose = FALSE)
before renv is loaded. (You may want to turn it back to TRUE if you want renv to display other messages as part of its normal work.)
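Concretely, "before renv is loaded" means the option has to be set ahead of the activation line in the project's .Rprofile, since that file is what sources renv/activate.R at startup. A minimal sketch, assuming the default layout renv creates:

# .Rprofile at the project root
options(renv.verbose = FALSE)  # must come before renv is activated
source("renv/activate.R")      # line typically added by renv::init()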
You can suppress library startup messages with suppressPackageStartupMessages, e.g.
suppressPackageStartupMessages(library(igraph))
There is also suppressMessages for arbitrary function calls.

Files not being parsed by Gnatchop

I am trying to run several files through gnatchop and I am getting 3 error messages for every file. I originally thought the errors were simply due to wrong permissions, but I changed the permissions and I still get the errors.
file.a: parse errors detected
file.a: chop may not be successful
file.a: error parsing offset info
Is there something I need to do to the files before I run them through Gnatchop?

fast export unexplained failure

I have roughly 14 million records that I am attempting to export from a Teradata table to file using a fast export connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post: delete the .out file in the temp directory, delete the partially written files in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. Also, you can try executing this FastExport using bteq scripts directly in your Unix environment.

Installing product 'collective.examples.userdata' gives "error: docs/HISTORY.txt: No such file or directory"

I am trying out collective.examples.userdata (by adding collective.examples.userdata to the eggs section of my buildout), but it's giving an error:
Getting distribution for 'collective.examples.userdata'.
error: docs/HISTORY.txt: No such file or directory
I have looked at the git repo and there is a docs/HISTORY.txt, so I am not sure why this would happen.
Because the release is broken.
Oftentimes, due to the complexity of releasing a package, mistakes are made. To avoid errors like this one, release managers can use the check-manifest utility, which checks the files in the sdist against the files declared in MANIFEST.in and reports the results.
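For maintainers, a concrete illustration (assuming the file really lives at docs/HISTORY.txt, as it does in the repository): the sdist only ships that file if MANIFEST.in includes it, e.g. with a line like

include docs/HISTORY.txt

and running check-manifest from the package root before cutting a release reports any files that are under version control but would be missing from the sdist.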
