Error when installing `snowflake-connector-python` on GCP Cloud Composer (Airflow)

I'm trying to install snowflake-connector-python in a Cloud Composer environment but keep receiving an error that pops up in the UI: "Failed to install PyPI Packages. Check the Cloud Build log for details". The build log doesn't seem overly helpful and is very long, so I'll just show the notable parts here.
This is a warning that appears after installing snowflake-connector-python and its dependencies.
Installing collected packages: pycryptodomex, asn1crypto, oscrypto, jmespath, botocore, s3transfer, boto3, isodate, msrest, azure-core, azure-storage-blob, azure-common, snowflake-connector-python
Successfully installed asn1crypto-1.4.0 azure-common-1.1.25 azure-core-1.8.1 azure-storage-blob-12.5.0 boto3-1.14.63 botocore-1.17.63 isodate-0.6.0 jmespath-0.10.0 msrest-0.6.19 oscrypto-1.2.1 pycryptodomex-3.9.8 s3transfer-0.3.3 snowflake-connector-python-2.3.2
+ [[ -z fail ]]
+ python3 -m pipdeptree --warn fail
Warning!!! Possibly conflicting dependencies found:
* botocore==1.17.63
- docutils [required: >=0.10,<0.16, installed: 0.16]
This is the final error notification.
The command '/bin/sh -c bash installer.sh $COMPOSER_PYTHON_VERSION fail' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
I'm running the image composer-1.12.1-airflow-1.10.10 in zone us-central1-c, using Python 3. I've tried installing different versions of the package with no luck, both via the UI and via gcloud. Any help as to the root of this problem would be much appreciated!

I tried installing older versions of snowflake-connector-python, but that didn't work.
What did end up working was adding docutils==0.15 to the list of packages to install. This fixed the dependency conflict warning and led to a successful build.
Previously, I've seen dependency conflict warnings that still led to successful builds, but in this case I actually needed to resolve the conflict.
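For anyone applying this from the command line, a minimal sketch (the environment name, location, and file name below are placeholders): put both pins in a requirements-style file, e.g. pypi-packages.txt containing
snowflake-connector-python==2.3.2
docutils==0.15
and then run
gcloud composer environments update my-composer-env --location us-central1 --update-pypi-packages-from-file pypi-packages.txt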

The answer from nate did not work for me: the PyPI package install succeeds in Cloud Build, but then the web server fails to update and rolls back the package installation.
In order to get this working we had to downgrade our Composer environment to the September release, which comes with docutils==0.15.2.
The image we used was composer-1.12.1-airflow-1.10.9. It seems that any image on Airflow 1.10.10 or above has this issue, as they ship with docutils==0.16 as standard. You can see which version of docutils is in an image by using this link and expanding the PyPI packages column.
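Since a Composer environment generally can't be downgraded in place, this amounts to creating a new environment on the older image, roughly like this (environment name is a placeholder):
gcloud composer environments create my-composer-env --location us-central1 --zone us-central1-c --image-version composer-1.12.1-airflow-1.10.9 --python-version 3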

Related

Installing pypi package google-cloud-datastore on Google Cloud Composer fails

I'm running Airflow 1.10.6 on Google Cloud Composer with Python 3. To initiate a Dataflow job from the Composer environment I need the PyPI package google-cloud-datastore. When I try to add this package through either the interface or Cloud Shell, the build fails. It raises the following error:
UPDATE operation on this environment failed 1 hour ago with the following error message:
Failed to install PyPI packages.
If I check the logs of Google Build for the Kubernetes Engine I see the following error:
I 2020-03-04T14:36:23.939075607Z google-cloud-datastore 1.11.0 has requirement google-cloud-core<2.0dev,>=1.0.3, but you have google-cloud-core 0.29.1.
I 2020-03-04T14:36:23.939669242Z google-cloud-datastore 1.11.0 has requirement google-api-core[grpc]<2.0.0dev,>=1.14.0, but you have google-api-core 1.8.1.
So google-cloud-datastore requires google-cloud-core>=1.0.3. I tried to explicitly set google-cloud-core to 1.0.3, but then I get the following error:
I 2020-03-05T08:13:18.539300693Z google-cloud-logging 1.9.1 has requirement google-cloud-core<0.30dev,>=0.29.0, but you have google-cloud-core 1.0.3.
I 2020-03-05T08:13:18.539878367Z google-cloud-bigtable 0.32.0 has requirement google-cloud-core<0.30dev,>=0.29.0, but you have google-cloud-core 1.0.3.
I 2020-03-05T08:13:18.540308937Z google-cloud-bigquery 1.8.1 has requirement google-cloud-core<0.30dev,>=0.29.0, but you have google-cloud-core 1.0.3.
So the version ranges required by these packages do not overlap. Does anyone know if this can be fixed? Please let me know.
There is a compatibility issue with google-cloud-datastore in the main Composer container, which means the latest version of this package cannot currently be installed. The Composer product team has confirmed the issue and is working on it.
Attempting to install an older version of google-cloud-datastore may be a workaround until the root cause is fixed. I tried version 1.7.4 and got a positive result.
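For reference, pinning the older version via gcloud would look something like this (environment name and location are placeholders):
gcloud composer environments update my-composer-env --location us-central1 --update-pypi-package google-cloud-datastore==1.7.4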
I hope it helps.
In case you (or someone else reading this) need to use the latest version of google-cloud-datastore (==1.11.0), I was able to install it in my environment (composer-1.9.1-airflow-1.10.6) by adding recent versions of the following libraries to the dependencies too:
googleapis-common-protos==1.6.0
google-cloud-spanner==1.14.0
google-cloud-bigtable==1.2.1
google-cloud-logging==1.14.0
google-cloud-bigquery==1.22
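If you want to sanity-check a set of pins like this before pushing them to Composer, one option (purely a local check, independent of the Composer image's preinstalled packages) is to install the same versions into a scratch virtualenv and let pip report any remaining conflicts:
python3 -m venv /tmp/composer-deps-check
/tmp/composer-deps-check/bin/pip install google-cloud-datastore==1.11.0 googleapis-common-protos==1.6.0 google-cloud-spanner==1.14.0 google-cloud-bigtable==1.2.1 google-cloud-logging==1.14.0 google-cloud-bigquery==1.22
/tmp/composer-deps-check/bin/pip check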

Installing R on Apache Zeppelin

I'm trying to install Apache Zeppelin on my old computer that runs Ubuntu. So far, I'm able to install Zeppelin very easily by cloning the latest 0.6.0 snapshot release using
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
mvn clean package -DskipTests
but I want to have R on Zeppelin. Supposedly, the 0.6.0 snapshot has two R interpreters, but when I run the R tutorial (the pre-made note that uses %r), I get this list of errors.
I followed several guides to try to install R as an interpreter, but each one resulted in some kind of error. I tried this tutorial:
http://www.r-bloggers.com/interactive-data-science-with-r-in-apache-zeppelin-notebook/, and got a build failure on "R Interpreter". The error message was
"dependency 'evaluate' is not available for package 'rzeppelin'
* removing '/home/rebecca/Zeppelin-With-R/R/lib/rzeppelin'"
and then a bit lower down
Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2.1:exec (default) on project zeppelin-zrinterpreter: Command execution failed. Process exited with an error: 1
I also tried this Stack Overflow guide: Anyone tried to add R interpreter onto Apache Zeppelin?. While I was able to run incubator-zeppelin, I received an error when I used either the %spark.r or %r interpreter tags, saying both "interpreter not found" and "prefix not found". Spark doesn't work either: after following the first solution I got the same error mentioned in the second solution (the jar file not being there), and then trying the second solution didn't help.
Does anyone have a guide for installing R onto the newest version of Zeppelin? I'm very flexible in how I install it: I can run other operating systems on this computer, and I also have VirtualBox installed on my other computer, which is a Mac.
There is currently a bug in the latest HEAD of Zeppelin that was recently introduced and prevents the R interpreter from launching cleanly.
Did anyone create a Zeppelin JIRA issue for that?
For me it is working on Zeppelin branch-0.6
1. Build Zeppelin with the R profile: -DskipTests -Pr. This will:
   1.1 create a directory 'R' in the git repo root
   1.2 copy the 'zeppelin-rinterpreter*.jar' into git_repo_root/interpreter/spark
2. Build Zeppelin with the distribution profile, e.g. -DskipTests -Pbuild-distr -Pspark-1.6 -Phadoop-2.6
3. Use zeppelin-distribution/target/zeppelin*.tar.gz for installation
4. Ensure both 1.1 and 1.2 are present in your Zeppelin installation
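Put together, the sequence above looks roughly like this (profile names as given in the answer; adjust the Spark/Hadoop versions to your setup):
git clone https://github.com/apache/incubator-zeppelin.git
cd incubator-zeppelin
git checkout branch-0.6
mvn clean package -DskipTests -Pr
mvn clean package -DskipTests -Pbuild-distr -Pspark-1.6 -Phadoop-2.6
tar -xzf zeppelin-distribution/target/zeppelin-*.tar.gz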
The error you're getting means you need to have the R package evaluate installed. You can install it simply by launching R and typing install.packages('evaluate').
That said, your excerpt mentions the directory Zeppelin-With-R. That's my repo, which contains the R interpreter in the form in which it was accepted into Zeppelin. That is version 0.5.6, not 0.6.0. There is currently a bug in the latest HEAD of Zeppelin, recently introduced, that prevents the R interpreter from launching cleanly. Your best bet for now is to use the one from my repo and install clean, without trying to pull in from Zeppelin HEAD.

Errors building R-packages for conda

I am having a tough time installing R packages that are not available in the Anaconda repositories. My attempts so far can be found here: How to install R-packages not in the conda repositories?.
Currently, I am trying to build the R package rafalib for conda by following the instructions from this article, under the heading Building a conda R package.
The first part works fine.
conda skeleton cran rafalib
Out:
Tip: install CacheControl to cache the CRAN metadata
Fetching metadata from http://cran.r-project.org/
Writing recipe for rafalib
Done
The build command runs into errors
conda build r-rafalib
Out:
Removing old build environment
Removing old work directory
BUILD START: r-rafalib-1.0.0-r3.2.2_0
Using Anaconda Cloud api site https://api.anaconda.org
Fetching package metadata: ......
Solving package specifications: .
Error: Packages missing in current linux-64 channels:
- r 3.2.2*
- r-rcolorbrewer
I have R 3.2.2 (64-bit) installed via conda and it runs without problems. I also already have r-rcolorbrewer installed via conda and I can use that package without issues in R. Why am I getting these errors when trying to build a conda package?
I am on Linux (Antergos, an Arch derivative) with kernel 4.4.5-1-ARCH.
UPDATE 2015/04/19
Thanks to this answer, I found out that I could include the dependencies by building them separately in the same directory as the package I want to install. That didn't work for me, but I also read that I can include a channel in the build command with -c, just as when installing. So now I do:
conda build -c r r-rafalib
This gets past all the dependency problems, but after fetching, extracting and linking packages, it fails. Here is the end of the error message.
Removing old work directory
Source cache directory is: /home/joel/anaconda2/conda-bld/src_cache
Downloading source to cache: rafalib_1.0.0.tar.gz
Downloading http://cran.r-project.org/src/contrib/rafalib_1.0.0.tar.gz
rafalib_1.0.0. 100% |#######################| Time: 0:00:00 4.87 MB/s
Success
Extracting download
Package: r-rafalib-1.0.0-r3.2.2_0
source tree in: /home/joel/anaconda2/conda-bld/work/rafalib
+ mv DESCRIPTION DESCRIPTION.old
+ grep -v '^Priority: ' DESCRIPTION.old
+ /home/joel/anaconda2/envs/_build/bin/R CMD INSTALL --build .
sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook
Command failed: /bin/bash -x -e /home/joel/drafts/r-rafalib/build.sh
The error sh: symbol lookup error: sh: undefined symbol: rl_signal_event_hook is the same one I encounter when using install.packages(), as reported here.
There is some related discussion in this thread. I have tried to get around this error by installing different versions of ncurses, including this patched version, and I have tried to link the readline libraries, as suggested here, but I keep running into the same error. I'm quite lost at this point and any help to solve this would be greatly appreciated.
Although I started out with a different problem, the final solution turned out to be the same as the one I posted elsewhere: How to install R-packages not in the conda repositories?. I am adding it here for completeness.
In the end, I got around the rl_signal_event_hook problems by following the approach recommended here and symlinking Anaconda's libreadline to the system one:
mv ~/anaconda3/lib/libreadline.so.6.2 ~/anaconda3/lib/libreadline.so.6.2.bak
ln -s /usr/lib/libreadline.so.6.3 ~/anaconda3/lib/libreadline.so.6.2
I am still having trouble installing some dependency-heavy R packages due to failures to load shared objects when using install.packages() from within R. However, simpler packages work fine and I can get most of the dependency-heavy packages from Anaconda's R repositories.
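A quick way to confirm the symlink took effect (paths mirror the commands above; the location of conda's R binary is an assumption, adjust for your install):
ls -l ~/anaconda3/lib/libreadline.so.6.2    # should now point at /usr/lib/libreadline.so.6.3
~/anaconda3/bin/R --version                 # R should start without the rl_signal_event_hook error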

Setting up IJulia

I am trying to set up IJulia on my PC, but after I get it running I constantly get messages that the "kernel has died" and that there is a problem with the ZMQ library.
Trying to rebuild or reinstall ZMQ does not work. I get an error:
================================[ BUILD ERRORS ]================================
WARNING: ZMQ had build errors.
- packages with build errors remain installed in C:\Users\Gisaev\.julia\v0.3
- build the package(s) and all dependencies with `Pkg.build("ZMQ")`
- build a single package by running its `deps/build.jl` script
Trying to execute build.jl line by line (the part that corresponds to Windows), I get the error "Provider PackageManager failed to satisfy dependency zmq."
I am kind of lost here, because ZMQ is obviously installed correctly and working for IPython: IPython notebooks work just fine, and I have a fresh installation from Anaconda.
Try to run
Pkg.build("ZMQ")
to see what happens.
If there are warnings about "libpgm" and "zeromq32", you can delete the lib directories and then try to build ZMQ again.
rm -rf /Users/username/.julia/v0.x/Homebrew/deps/usr/Cellar/zeromq32/3.2.5
rm -rf /Users/username/.julia/v0.x/Homebrew/deps/usr/Cellar/libpgm/5.2.122
Pkg.build("ZMQ")
This will correctly rebuild the libpgm and zeromq32 libraries you need.
(I assume the operating system is OS X. If you are using Windows, please refer to Installing ZMQ on Windows 7+ seems to require admin privileges #69.)

Could not resolve the specified constraints for this project: Error: conflict: blaze#1.0.0 vs 2.0.0

I tried to upgrade my Meteor app from 0.8-something to 0.9.1.1 and now I get:
Could not resolve the specified constraints for this project: Error: conflict: blaze#1.0.0 vs 2.0.0
I'm not sure how to proceed. I tried running meteor list and meteor remove, but no matter what meteor command I run I get this error.
The steps I took were:
run meteor update
updated to Meteor 0.9.1.1.
run mrt migrate-app
got some errors
Error: The version 1.2.11 of package roles has not yet been migrated
Error: The version 1.2.0 of package accounts-meld has not yet been migrated
Error: The version 2.4.13 of package kadira has not yet been migrated
Error: The version 0.4.8 of package analytics has not yet been migrated
Error: The version 2.1.0.2 of package momentjs has not yet been migrated
Error: The version 1.0.2 of package subs-manager has not yet been migrated
If you want to continue, remove the package(s) from smart.json, run `mrt install`, and try again. After you have successfully migrated, you can add them back but note: You will NOT receive further updates!. See https://hackpad.com/Migrating-Apps-UfPrM192vSQ for more information.
Removed those packages from smart.json and ran mrt install. No errors.
Reran mrt migrate-app. This time, no errors.
But now I can't do anything as I always get the error
meteor list
Figuring out the best package versions to use. This may take a moment.
Refreshing package metadata. This may take a moment.
Could not resolve the specified constraints for this project:
Error: conflict: blaze#1.0.0 vs 2.0.0
Please help troubleshoot. Thanks.
As I figured out here, perhaps you should try this:
Remove all packages, update, then add them all back without the version suffix. This is pretty easy to do now that they are all single lines in the .meteor/packages file:
sed -e 's/^[a-zA-Z0-9]/meteor remove &/' .meteor/packages | sed 's/\#[0-9\.]*//g' > packages-rm.sh
sed -e 's/ remove / add /' packages-rm.sh > packages-add.sh
bash packages-rm.sh
meteor list # should be empty
meteor update
bash packages-add.sh
meteor list
