I am trying to create Airflow connections on Cloud Composer using the gcloud CLI.
I followed the documentation and ran the following command:
https://cloud.google.com/composer/docs/how-to/managing/connections#creating_new_airflow_connections
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra '{"extra\__google\_cloud\_platform\__project": "my-project", \
"extra\__google\_cloud\_platform\__key_path": "/home/airflow/gcs/data/keys/my-key.json", \
"extra\__google\_cloud\_platform\__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for asia-northeast1-my-env-44718514-gke.
Executing within the following kubectl namespace: default
W0525 22:51:11.244104 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244246 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244256 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
[2019-05-25 14:51:13,215] {settings.py:176} INFO - setting.configure_orm():
Using pool settings. pool_size=5, pool_recycle=1800
[2019-05-25 14:51:13,598] {default_celery.py:80} WARNING - You have configured a
result_backend of redis://airflow-redis-service:6379/0, it is highly recommended to use an
alternative result_backend (i.e. a database).
[2019-05-25 14:51:13,600] {__init__.py:51} INFO - Using executor CeleryExecutor
[2019-05-25 14:51:13,680] {app.py:51} WARNING - Using default Composer Environment Variables.
Overrides have not been applied.
[2019-05-25 14:51:13,688] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
[2019-05-25 14:51:13,698] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
The connection is created successfully, but the project ID, key file path, and scope are empty, so the connection is invalid.
When I create the connection manually, those properties are not empty. Am I missing something?
Composer image: composer-1.5.0-airflow-1.10.1
I am not able to replicate it. When I run the following command, the connection does get added with the extras field:
gcloud composer environments run my-env \
--project my-project \
--location europe-west1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path":"/tesf"}
I found a syntax error in how the quotes were escaped. This works fine:
$ CONNECTION_CREATE_COMMAND="gcloud composer environments run $COMPOSER_ENVIRONMENT \
--project $COMPOSER_PROJECT \
--location ${COMPOSER_LOCATION} \
connections -- --add \
--conn_id=${CONN_ID_BASE}_${app}_${c} \
--conn_type=google_cloud_platform \
--conn_extra '{\"extra__google_cloud_platform__project\": \"${BQ_PROJECT}\", \
\"extra__google_cloud_platform__key_path\": \"${KEY_JSON_FILE_PATH}\", \
\"extra__google_cloud_platform__scope\": \"https://www.googleapis.com/auth/cloud-platform\"}'"
$ eval $CONNECTION_CREATE_COMMAND
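Incidentally, the deprecation warnings in the output above say conn_extra will be removed in favor of conn-extra. A sketch of the same call with the hyphenated flag (untested; everything else as in the original command):
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn-extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path": "/home/airflow/gcs/data/keys/my-key.json", "extra__google_cloud_platform__scope": "https://www.googleapis.com/auth/cloud-platform"}'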
Is it possible to use firebase-functions from my laptop? If not, is firebase-admin the only option remaining?
I am able to make an index.js file on my laptop, npm install the firebase-admin module, link to my Firestore database, and make changes to data just fine using admin credentials. But when I also npm install firebase-functions and make use of the event triggers onCreate/onWrite/onUpdate/onDelete, they do not get any updates.
To my understanding, the only way to make use of event triggers is by uploading to Cloud Functions, since you need Google's infrastructure for those, and you can't use them on your local machine the way you can with the firebase-admin package. You can use the local emulator(?), but it isn't production ready and is not for that use case(?).
So, in order to listen for new events on my Firestore database using only my laptop (not the Google Cloud Functions platform or some other server-hosted option), I have to use .onSnapshot() from the firebase-admin npm package.
However, that module is unable to cache, so you are left querying the whole Firestore database and downloading every document.
Is this correct? Or is there any way to make firebase-functions work from my laptop using firebase-admin + admin credentials, almost as if I had uploaded the file to the cloud platform? I don't require this part of the data to be in the cloud, so I want to make changes and adjust the Firestore database from my laptop's terminal.
You will need to balance scalability and what you want to achieve. One approach uses a Parse Server and a Docker container that works with Express. This method's advantage is its flexibility as to where the parse server can run. You can run this on your laptop or move it to Google Cloud if you need more processing power. However, it is worth noting that the container cannot access all Firebase trigger types.
I am not sure which Operating System you are using, but for Ubuntu, you can install Docker and run Parse Server like this:
# Update the apt package first
$ sudo apt-get update
# install dependencies
$ sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg-agent \
software-properties-common \
git
# Add Docker’s GPG key:
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add the Docker repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Update the apt package again
$ sudo apt-get update
# Install Docker
$ sudo apt-get install docker-ce docker-ce-cli containerd.io
$ git clone https://github.com/parse-community/parse-server
$ cd parse-server
$ docker build --tag parse-server .
$ docker run --name my-mongo -d mongo
To run the Parse Server:
$ docker run --name my-parse-server -v config-vol:/parse-server/config \
-p 1337:1337 --link my-mongo:mongo -d parse-server --appId APPLICATION_ID \
--masterKey MASTER_KEY --databaseURI mongodb://mongo/test
To link Firebase and the Docker Parse Server, you will need an adapter. The container above is an example, but it should be enough to get you started running from your laptop.
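Once the container is running, a quick smoke test (a sketch; this assumes parse-server's default /parse mount path and the port mapping from the run command above):
$ curl http://localhost:1337/parse/health
A healthy server should respond with a small JSON status payload.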
Originally posted in https://github.com/flyway/flyway/issues/2429
I have an issue (probably a wrong configuration) using Flyway placeholders; I can use placeholders for my own variables, but it fails because one value in an SQL query has a similar syntax to the Flyway placeholder syntax.
Which version and edition of Flyway are you using?
5.2.4 using official docker image
If this is not the latest version, can you reproduce the issue with the latest one as well? (Many bugs are fixed in newer releases and upgrading will often resolve the issue)
5.2.4 tag is the latest version in docker hub (https://hub.docker.com/r/boxfuse/flyway/)
Which client are you using? (Command-line, Java API, Maven plugin, Gradle plugin)
Command line through the Docker image
Which database are you using (type & version)?
MySQL Server version: 5.7.26 - MySQL Community Server (GPL) - This is a legacy project
Which operating system are you using?
Linux CentOS 7 x64 (uname -r = 3.10.0-957.5.1.el7.x86_64)
What did you do?
(Please include the content causing the issue, any relevant configuration settings, the SQL statement that failed (if relevant) and the command you ran.)
I apply flyway to initialize/update a MySQL database; here are a couple of SQL commands.
Here I use placeholders with xxx prefixes:
CREATE USER IF NOT EXISTS '${xxxdbuser}'@'${xxxdbclip}' IDENTIFIED WITH mysql_native_password BY '${xxxdbpass}';
GRANT ALL PRIVILEGES ON ${xxxdbbase}.* TO '${xxxdbuser}'@'${xxxdbclip}';
FLUSH PRIVILEGES;
... then in another SQL script, from a third-party app, I insert content containing ${row}. I don't want Flyway to interpret ${row} as a placeholder, only my own variables starting with ${xxx, such as ${xxxdbuser}:
INSERT INTO `xxx_xxx` (`name`, `template`, `lang`, `group`, `version`, `data`, `size`, `style`, `modified`) VALUES
... ('addressbook.email.rows', '', '', 0, '1.3.001', 'a:1:{i:0;a:6:{ ... \"label\";s:21:\"$row_cont[type_label]\";s:4:\"name\";s:12:\"${row}[type]\";s:5:\"align\";... :{i:0;s:4:\"100%\";}}}', '100%', '', 1150326789), ...
I guess the placeholderPrefix parameter described at https://flywaydb.org/documentation/commandline/info, or the FLYWAY_PLACEHOLDER_PREFIX env var described at https://flywaydb.org/documentation/envvars#FLYWAY_PLACEHOLDER_PREFIX, is for that purpose, but I didn't succeed in using them!
Here is my command using docker:
docker run --rm --network="$(docker network ls --filter name=app_mysql_dev --filter "label=type=app" --format '{{.ID}}')" \
-v `pwd`/code/Admin/install:/flyway/sql \
-e FLYWAY_URL=jdbc:mysql://${host}:${port}?useSSL=false \
-e FLYWAY_SCHEMAS=${base} \
-e FLYWAY_USER=root \
-e FLYWAY_PASSWORD=${root_pwd} \
-e FLYWAY_PLACEHOLDERS_PREFIX="\${xxx" \
-e FLYWAY_PLACEHOLDERS_XXXDBBASE=${base} \
-e FLYWAY_PLACEHOLDERS_XXXDBUSER=${user} \
-e FLYWAY_PLACEHOLDERS_XXXDBPASS=${pass} \
-e FLYWAY_PLACEHOLDERS_XXXDBCLIP=${clip} \
-e FLYWAY_PLACEHOLDERS_XXXVHOST=${vhost} \
-e FLYWAY_PLACEHOLDERS_XXXSCHEME=${scheme} \
-e FLYWAY_CONNECT_RETRIES=5 \
boxfuse/flyway:5.2.4 -locations=filesystem:/flyway/sql/custom/ \
migrate
What did you expect to see?
All ${xxx placeholders should be replaced by their corresponding ENV values, and the ${row} string in the SQL code should stay unchanged.
What did you see instead?
Flyway error:
Flyway Community Edition 5.2.4 by Boxfuse
Database: jdbc:mysql://tasks.atlas-mysql:3306 (MySQL 5.7)
ERROR: No value provided for placeholder expressions: ${row}. Check your configuration!
I guess I did not configure my command correctly... any help, advice, and/or command-line example would be appreciated.
Regards,
Chris
I think there are a couple of problems in your command:
-e FLYWAY_PLACEHOLDERS_PREFIX="\${xxx"
should be FLYWAY_PLACEHOLDER_PREFIX (no S), and
-e FLYWAY_PLACEHOLDERS_XXXDBBASE=${base}
should be FLYWAY_PLACEHOLDERS_DBBASE (as XXX is part of the prefix, it's not included in the placeholder name; and analogously for the following lines).
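With those two changes, the placeholder-related flags of the docker run command would look like this (a sketch; the xxx part moves into the prefix, and the single quotes keep the shell from expanding ${xxx):
-e FLYWAY_PLACEHOLDER_PREFIX='${xxx' \
-e FLYWAY_PLACEHOLDERS_DBBASE=${base} \
-e FLYWAY_PLACEHOLDERS_DBUSER=${user} \
-e FLYWAY_PLACEHOLDERS_DBPASS=${pass} \
-e FLYWAY_PLACEHOLDERS_DBCLIP=${clip} \
-e FLYWAY_PLACEHOLDERS_VHOST=${vhost} \
-e FLYWAY_PLACEHOLDERS_SCHEME=${scheme} \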
I am running a simple MapReduce job in Azure HDInsight; below is the command that we are running:
java -jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa
Getting the below error:
java.io.IOException: No FileSystem for scheme: wasb
For Java, use JDK 1.8 and the POM dependencies below:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-examples</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
</dependency>
WASB is a wrapper around the HDFS file system. I am not sure you can use it in a normal Java program. Do you have any reference/link which you referred to?
You can try to get the https equivalent of the custData.csv file. Below is an example of a Spark job I am able to submit on an HDInsight cluster using WASB:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/ml-latest/ratings.csv \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/ml-latest/movies.csv
And here is an example of passing the same files using their equivalent https URIs:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/ratings.csv \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/movies.csv
For the Hadoop job, kindly run the jar as the root user. Once you log in to HDInsight, run the command sudo su -. Then create a folder, place the jar in that folder, and run the jar.
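For what it's worth, "No FileSystem for scheme: wasb" from a plain java -jar launch usually means the cluster's Hadoop configuration and the hadoop-azure classes are not on the classpath. Launching through the hadoop wrapper on the cluster head node picks both up, so a sketch of the same job (assuming the jar's manifest declares its main class, which it must for java -jar to have started at all) would be:
hadoop jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa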
I want to run one test from one test suite using gcloud firebase test android run but I can't figure out the syntax.
This command failed:
gcloud firebase test android run \
--type instrumentation \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=Pixel2,version=27,locale=en_US,orientation=portrait \
--verbosity debug \
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002"
Error message:
java.lang.ClassNotFoundException: com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002
at java.lang.Class.classForName(Native Method)
at java.lang.Class.forName(Class.java:453)
at android.support.test.internal.runner.TestLoader.doCreateRunner(TestLoader.java:72)
at android.support.test.internal.runner.TestLoader.getRunnersFor(TestLoader.java:104)
at android.support.test.internal.runner.TestRequestBuilder.build(TestRequestBuilder.java:789)
at android.support.test.runner.AndroidJUnitRunner.buildRequest(AndroidJUnitRunner.java:539)
at android.support.test.runner.AndroidJUnitRunner.onStart(AndroidJUnitRunner.java:382)
at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:2075)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002" on path: DexPathList[[zip file "/system/framework/android.test.runner.jar", zip file "/system/framework/android.test.mock.jar", zip file "/data/app/com.example.firebasetestlabplayground.test-F71g9fZiJ95s1aLuzIJRsw==/base.apk", zip file "/data/app/com.example.firebasetestlabplayground-EuQG5YDD3hrz6BPtBz2t6g==/base.apk"],nativeLibraryDirectories=[/data/app/com.example.firebasetestlabplayground.test-F71g9fZiJ95s1aLuzIJRsw==/lib/x86, /data/app/com.example.firebasetestlabplayground-EuQG5YDD3hrz6BPtBz2t6g==/lib/x86, /system/lib, /system/vendor/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
... 8 more
In my research, I came across How to execute a designated test suite class in Firebase Test Lab, which was very helpful in getting me this far, but it doesn't specify how to run a single test within the suite.
Also, according to https://github.com/piotrmadry/FirebaseTestLab-Android/issues/11, it should be possible to run a single test with the --test-targets parameter, but the documentation doesn't give an example at https://cloud.google.com/sdk/gcloud/reference/firebase/test/android/run#--test-targets.
After trial and error, I discovered that I could specify a single test by using a # instead of a . between the test suite class name and the test name.
This command successfully ran a single test called test002 in com.example.firebasetestlabplayground.ExampleInstrumentedTest:
gcloud firebase test android run \
--type instrumentation \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=Pixel2,version=27,locale=en_US,orientation=portrait \
--verbosity debug \
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test002"
Michael, that syntax using # is documented in the gcloud command reference (or add --help to your command above to get docs from the command line). That gcloud documentation in turn links to the full documentation for the AndroidJUnitRunner which also describes the full syntax.
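For completeness, the gcloud reference documents --test-targets as a comma-separated list, so several individual tests can apparently be combined in one run (a sketch; test001 is a hypothetical second test name):
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test001,class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test002"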
I'm looking for how to install GraphicsMagick in a Meteor Up Docker deployment.
I found this solution (Access binaries inside docker), but I couldn't make it work. Where do I put those lines in start.sh?
meteorDockerId=$(docker ps | grep meteorhacks/meteord:base | awk '{print $1}')
docker exec $meteorDockerId apt-get install graphicsmagick -y
That's my start.sh:
#!/bin/bash
APPNAME=instagatas
APP_PATH=/opt/$APPNAME
BUNDLE_PATH=$APP_PATH/current
ENV_FILE=$APP_PATH/config/env.list
PORT=80
USE_LOCAL_MONGO=0
# remove previous version of the app, if exists
docker rm -f $APPNAME
# remove frontend container if exists
docker rm -f $APPNAME-frontend
set -e
docker pull meteorhacks/meteord:base
if [ "$USE_LOCAL_MONGO" == "1" ]; then
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
else
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--hostname="$HOSTNAME-$APPNAME" \
--env-file=$ENV_FILE \
--name=$APPNAME \
meteorhacks/meteord:base
fi
docker pull meteorhacks/mup-frontend-server:latest
docker run \
-d \
--restart=always \
--volume=/opt/$APPNAME/config/bundle.crt:/bundle.crt \
--volume=/opt/$APPNAME/config/private.key:/private.key \
--link=$APPNAME:backend \
--publish=443:443 \
--name=$APPNAME-frontend \
meteorhacks/mup-frontend-server /start.sh
Re-installing the graphicsmagick package every time you re-start the containers seems like a hack I wouldn't want to do.
If you're modifying the start script already, might as well use a Dockerfile:
FROM meteorhacks/meteord:base
RUN apt-get install graphicsmagick -y
Then modify start.sh template to build a new docker image with graphicsmagick, tag it and use that image instead:
see: https://gist.github.com/so0k/7d4be21c5e2d9abd3743/revisions
EDIT: Where to put Dockerfile?
The start.sh template is copied to /opt/<appName>/config/; currently, the Dockerfile would need to be in that same directory (/opt/<appName>/config/Dockerfile)
see Linux init Task
Alternatively, you can specify specific Dockerfile with the -f flag for the docker build
Or your third option is to pipe Dockerfile to docker build using a here document
I've updated the start.sh gist, we no longer pull the meteord:base image and build it instead:
docker build -t meteorhacks/meteord:app - << EOF
FROM meteorhacks/meteord:base
RUN apt-get install graphicsmagick -y
EOF
The docker build will run every time, but as long as the requirements aren't changing, Docker will reuse the image layers it has cached.
The development version of Meteor Up at Kadirahq allows specifying a custom Docker image in the config file (mup.js).
MeteorD images with GraphicsMagick installed are available on Docker Hub.
This got me a working deployment (Meteor 1.3.2.4, Meteor Up 309cefb, Node v5.4.1):
mup.js:
module.exports = {
…
meteor: {
dockerImage: 'ianmartorell/meteord-graphicsmagick',
…
},
};
I couldn't get the Docker image that #bskp mentioned to work, so I figured out how to write one that uses abernix/meteord:base and has GraphicsMagick installed. Very simple, but it seems to be working for me on Meteor 1.4.1.1.
I just did this in my mup.js file:
docker: {
image: "joshjoe/meteor-graphicsmagick",
},
This was a huge pain to get working, so I'd be happy to help anyone who is struggling with this.
https://github.com/c316/meteor-graphicsmagick
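For reference, a Dockerfile along the lines described above presumably boils down to something like this (a sketch; see the linked repo for the actual file):
FROM abernix/meteord:base
RUN apt-get update && apt-get install -y graphicsmagick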
If the if statement succeeds, you should be able to see a running container corresponding to the image you are grepping for. In my opinion, you can add the two lines after the fi so that the meteorDockerId variable picks up the running container.
Building an image is the way to get things right, but as a temporary fix you can do:
docker exec -it MeteorAppName apt-get install imagemagick -y
docker restart MeteorAppName
Check imagemagick: docker exec -it MeteorAppName convert -version
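Note that these commands install ImageMagick rather than GraphicsMagick, which the question asks about; the GraphicsMagick equivalent would presumably be (untested sketch):
docker exec -it MeteorAppName apt-get install graphicsmagick -y
docker restart MeteorAppName
docker exec -it MeteorAppName gm version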
Why don't you add the following package: meteor add cfs:graphicsmagick
https://atmospherejs.com/cfs/graphicsmagick
It tries to make sure GraphicsMagick is available. It worked for my use case; I think it will work with Docker too.