How to execute a designated test in Firebase Test Lab

I want to run one test from one test suite using gcloud firebase test android run but I can't figure out the syntax.
This command failed:
gcloud firebase test android run \
--type instrumentation \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=Pixel2,version=27,locale=en_US,orientation=portrait \
--verbosity debug \
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002"
Error message:
java.lang.ClassNotFoundException: com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002
at java.lang.Class.classForName(Native Method)
at java.lang.Class.forName(Class.java:453)
at android.support.test.internal.runner.TestLoader.doCreateRunner(TestLoader.java:72)
at android.support.test.internal.runner.TestLoader.getRunnersFor(TestLoader.java:104)
at android.support.test.internal.runner.TestRequestBuilder.build(TestRequestBuilder.java:789)
at android.support.test.runner.AndroidJUnitRunner.buildRequest(AndroidJUnitRunner.java:539)
at android.support.test.runner.AndroidJUnitRunner.onStart(AndroidJUnitRunner.java:382)
at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:2075)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.example.firebasetestlabplayground.ExampleInstrumentedTest.test002" on path: DexPathList[[zip file "/system/framework/android.test.runner.jar", zip file "/system/framework/android.test.mock.jar", zip file "/data/app/com.example.firebasetestlabplayground.test-F71g9fZiJ95s1aLuzIJRsw==/base.apk", zip file "/data/app/com.example.firebasetestlabplayground-EuQG5YDD3hrz6BPtBz2t6g==/base.apk"],nativeLibraryDirectories=[/data/app/com.example.firebasetestlabplayground.test-F71g9fZiJ95s1aLuzIJRsw==/lib/x86, /data/app/com.example.firebasetestlabplayground-EuQG5YDD3hrz6BPtBz2t6g==/lib/x86, /system/lib, /system/vendor/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:125)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
... 8 more
In my research, I came across "How to execute a designated test suite class in Firebase Test Lab", which was very helpful in getting me this far, but it doesn't specify how to run a single test within the suite.
Also, according to https://github.com/piotrmadry/FirebaseTestLab-Android/issues/11, it should be possible to run a single test with the --test-targets parameter, but the documentation at https://cloud.google.com/sdk/gcloud/reference/firebase/test/android/run#--test-targets doesn't give an example.

After some trial and error, I discovered that I could specify a single test by using a # instead of a . between the test suite class name and the test name.
This command successfully ran a single test called test002 in com.example.firebasetestlabplayground.ExampleInstrumentedTest:
gcloud firebase test android run \
--type instrumentation \
--project locuslabs-android-sdk \
--app app/build/outputs/apk/debug/app-debug.apk \
--test app/build/outputs/apk/androidTest/debug/app-debug-androidTest.apk \
--device model=Pixel2,version=27,locale=en_US,orientation=portrait \
--verbosity debug \
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test002"

Michael, that syntax using # is documented in the gcloud command reference (or add --help to your command above to get the docs from the command line). The gcloud documentation in turn links to the full AndroidJUnitRunner documentation, which describes the complete filter syntax.
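For completeness, --test-targets also accepts the other AndroidJUnitRunner filter forms. A few illustrative fragments you could substitute into the command above: the first runs the whole test class, the second runs every test in the package, and the third runs two specific methods (test001 is just a hypothetical second method, and as far as I can tell multiple targets can be comma-separated):
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest"
--test-targets "package com.example.firebasetestlabplayground"
--test-targets "class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test001,class com.example.firebasetestlabplayground.ExampleInstrumentedTest#test002"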

Related

How to test Firestore Security Rules with Jenkins?

I'm developing some Firestore security rules locally. I use mocha to test the rules, and locally everything works. I have a Jenkins pipeline that publishes the rules to Firebase in the cloud every time I merge a PR into develop. What I want to do is run my unit tests within Jenkins. However, every time Jenkins calls yarn test from the pipeline, I get an error that says:
#firebase/firestore: Firestore (7.18.0): Could not reach Cloud Firestore backend. Connection failed 1 times. Most recent error: FirebaseError: [code=internal]: 13 INTERNAL: Received RST_STREAM with code 2 triggered by internal client error: Protocol error
This typically indicates that your device does not have a healthy Internet connection at the moment. The client will operate in offline mode until it is able to successfully connect to the backend.
Is there a way to run the firebase emulators from Jenkins?
Thanks!
I found a way to do that.
By using firebase-tools-docker, I can easily run my tests inside a Docker container that brings up the emulator suite.
The Jenkinsfile goes like this:
def jenkinsUser = 1001
def firebaseDocker = 'andreysenov/firebase-tools:9.14.0'
stage('Pull docker image') {
sh "docker pull $firebaseDocker"
}
stage('Unit tests') {
sh "docker run -d --rm \
--user $jenkinsUser:$jenkinsUser \
-p 8080:8080 \
-v ${pwd()}:/home/node \
--name firebase-emulators \
$firebaseDocker \
firebase emulators:start"
sleep(5)
sh "docker exec firebase-emulators /bin/bash -c 'cd tests && yarn test'"
sh "docker stop firebase-emulators"
}
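As a variation (my sketch, not part of the original setup), firebase emulators:exec can start the emulators, run a command against them, and shut everything down in one shot, which would avoid the sleep(5) and the explicit docker stop. Assuming a firebase.json sits at the mounted project root (which the emulators:start call above already implies), the equivalent docker invocation would be roughly:
docker run --rm \
--user 1001:1001 \
-v "$(pwd)":/home/node \
andreysenov/firebase-tools:9.14.0 \
firebase emulators:exec --only firestore "cd tests && yarn test"
Here 1001:1001 corresponds to the jenkinsUser defined above.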
For reference, my folder structure keeps the tests in a tests/ folder at the project root, which is why the commands above cd into tests before running yarn test.
Hope this helps 😉

Create Airflow connections on Cloud Composer using gcloud CLI

I am trying to create airflow connections on Cloud Composer using gcloud CLI.
I followed the document at https://cloud.google.com/composer/docs/how-to/managing/connections#creating_new_airflow_connections and ran the following command:
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra '{"extra\__google\_cloud\_platform\__project": "my-project", \
"extra\__google\_cloud\_platform\__key_path": "/home/airflow/gcs/data/keys/my-key.json", \
"extra\__google\_cloud\_platform\__scope": "https://www.googleapis.com/auth/cloud-platform"}'
kubeconfig entry generated for asia-northeast1-my-env-44718514-gke.
Executing within the following kubectl namespace: default
W0525 22:51:11.244104 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244246 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
W0525 22:51:11.244256 93234 flags.go:39] conn_extra is DEPRECATED and will be removed in
a future version. Use conn-extra instead.
[2019-05-25 14:51:13,215] {settings.py:176} INFO - setting.configure_orm():
Using pool settings. pool_size=5, pool_recycle=1800
[2019-05-25 14:51:13,598] {default_celery.py:80} WARNING - You have configured a
result_backend of redis://airflow-redis-service:6379/0, it is highly recommended to use an
alternative result_backend (i.e. a database).
[2019-05-25 14:51:13,600] {__init__.py:51} INFO - Using executor CeleryExecutor
[2019-05-25 14:51:13,680] {app.py:51} WARNING - Using default Composer Environment Variables.
Overrides have not been applied.
[2019-05-25 14:51:13,688] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
[2019-05-25 14:51:13,698] {configuration.py:516} INFO - Reading the config from
/etc/airflow/airflow.cfg
The connection is successfully created, but the project ID, key file path, and scope are empty, so the connection is invalid.
When I create the connection manually, those properties are not empty. Am I missing something?
Composer image: composer-1.5.0-airflow-1.10.1
I am not able to replicate it. When I run the following command, the connection does get added with the extras field:
gcloud composer environments run my-env \
--project my-project \
--location europe-west1 \
connections -- --add \
--conn_id=my-conn-id \
--conn_type=google_cloud_platform \
--conn_extra='{"extra__google_cloud_platform__project": "my-project", "extra__google_cloud_platform__key_path":"/tesf"}'
I found a syntax error in how I was escaping the quotation marks. This works fine:
$ CONNECTION_CREATE_COMMAND="gcloud composer environments run $COMPOSER_ENVIRONMENT \
--project $COMPOSER_PROJECT \
--location ${COMPOSER_LOCATION} \
connections -- --add \
--conn_id=${CONN_ID_BASE}_${app}_${c} \
--conn_type=google_cloud_platform \
--conn_extra '{\"extra__google_cloud_platform__project\": \"${BQ_PROJECT}\", \
\"extra__google_cloud_platform__key_path\": \"${KEY_JSON_FILE_PATH}\", \
\"extra__google_cloud_platform__scope\": \"https://www.googleapis.com/auth/cloud-platform\"}'"
$ eval $CONNECTION_CREATE_COMMAND
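As a sanity check (my addition, not part of the original fix), Airflow 1.10's connections subcommand also accepts --list, so you should be able to confirm that the extras were actually stored with something like:
gcloud composer environments run my-env \
--project my-project \
--location asia-northeast1 \
connections -- --list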

Getting No FileSystem for scheme: wasb - HDInsight MapReduce

I am running a simple MapReduce job in Azure HDInsight; below is the command that we are running:
java -jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z#hexanikahdinsight.blob.core.windows.net/hexa
I am getting the below error:
java.io.IOException: No FileSystem for scheme: wasb
For Java, use JDK 1.8 and the POM dependencies below:
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-examples</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-common</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>jdk.tools</groupId>
            <artifactId>jdk.tools</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-common</artifactId>
    <version>2.7.3</version>
    <scope>provided</scope>
</dependency>
WASB is an HDFS-compatible wrapper around Azure Blob storage. I am not sure you can use it in a plain Java program. Do you have a reference or link that you followed?
You can try to use the https equivalent of the CustData.csv file. Below is an example of a Spark job that I am able to submit on an HDInsight cluster using WASB:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/ml-latest/ratings.csv \
wasb://hd-spark-cluster-2019#hdsparkclusterstorage.blob.core.windows.net/ml-latest/movies.csv
And here is an example of passing the same files using their equivalent https URIs:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/ratings.csv \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/movies.csv
For the Hadoop job, kindly run the jar as the root user. Once you log in to HDInsight, run the command sudo su -. Then create a folder, place the jar in that folder, and run the jar from there.
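A rough sketch of that sequence (the working folder and the jar's initial location are my assumptions; adjust them to your cluster):
# after SSH-ing into the cluster head node
sudo su -
mkdir -p /opt/mapreduce-jobs                               # hypothetical working folder
cp /home/sshuser/WordCount201.jar /opt/mapreduce-jobs/     # assumes the jar was uploaded to the ssh user's home
cd /opt/mapreduce-jobs
java -jar WordCount201.jar \
wasb://<container>@<account>.blob.core.windows.net/hexa/CustData.csv \
wasb://<container>@<account>.blob.core.windows.net/hexa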

MUP (meteor) deploy deletes files

Sadly, although mup deploy works perfectly, I have a folder called ".uploads" which users can upload files into, and each deploy deletes the files in that directory. I would like to exclude the folder or otherwise protect its files from being deleted by the deploy. Any ideas?
I filed an issue: https://github.com/arunoda/meteor-up/issues/1022
I'm not sure if it's an issue with mup or my system setup. I use tomitrescak:meteor-uploads and also cfs:file-collection; they both have the same issue.
From what I can see it should be easy to do: you need to modify the script at https://github.com/arunoda/meteor-up/blob/mupx/templates/linux/start.sh#L26
and add a new mapping. You can map multiple volumes, as described in "Mounting multiple volumes on a docker container?".
So your script would look like this (the new line mounts the host folder /opt/uploads/myapp to /opt/uploads in the container):
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--volume=/opt/uploads/myapp:/opt/uploads/ \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
This change goes in "start.sh" in MUP until the issue is resolved and volumes can be mounted via the options.
Also see discussion here: https://github.com/tomitrescak/meteor-uploads/issues/235#issuecomment-228618130

How to solve the CSScomb error in PhpStorm?

I wanted to install CSScomb.js on PhpStorm 8.0.1.
I did everything as written on the GitHub page: installed CSScomb both globally and locally (just to be sure) and set the paths.
Then I ran it and got this error:
Error running CSScomb: Cannot run program "C:\Users\Kanat\AppData\Roaming\npm\node_modules\csscomb\bin\csscomb" (in directory "D:\OpenServer\domains\LPDevplate\src\scss\modules"): CreateProcess error=193, %1 is not a valid Win32 application
Has someone faced this error and can help me solve it?
I get the same error: "Error running 'CSScomb': Cannot run program".
