I have the following issue:
So far I have successfully added Travis notifications to a Slack channel by editing the .travis.yml file. My next step is adding data encryption. I've gone through the Slack manual about Travis and found that I have to add the following line:
language: bash
travis encrypt "account:token#channel_name" --add notifications.slack
notifications:
slack: account:token#channel_name
Without the encryption line Travis worked perfectly and notifications were sent to the channel, but after I added the line that is supposed to do the encryption, Travis failed with the following output:
The error was "could not find expected ':' while scanning a simple key at line 2 column 1".
I've also tried appending .rooms after notifications.slack, or removing the channel name from the line that is supposed to do the encryption, but without any success. I've also added a : before travis encrypt, but I still get the same error!
Thank you in advance!
Just in case someone is looking for an answer: you need to run that command with the Travis CLI, not put it inside your .travis.yml. Install the CLI and run travis encrypt "account:token#channel_name" --add notifications.slack in the folder of the repository you want to add the Slack integration to.
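For context, a successful run of that command appends an encrypted credential to your .travis.yml, so the plain-text account:token line can be removed. The result looks roughly like this (the secure value below is an illustrative placeholder, not a real encrypted token):

```yaml
language: bash
notifications:
  slack:
    secure: "ZQtLW6...long-base64-blob-generated-by-travis-encrypt...=="
```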
It is known that additional files can be uploaded via the --other-files flag in gcloud.
I am trying to use the Firebase Test Lab performance application to check the performance of a network.
The command I run is:
gcloud firebase test android run --type=game-loop --app=bazel-bin/tensorflow/lite/tools/benchmark/experimental/firebase/android/benchmark_model_firebase.apk --device model=flame,version=29 --other-files=/data/local/tmp/graph=network1.tflite
The command crashes with the error:
ERROR: gcloud crashed (InvalidUserInputError): Could not guess mime type for network1.tflite
Is there a way to circumvent the problem or somehow pass the mime type to the command line ?
I found a crazy workaround. I renamed the .tflite file to .so, so the mimetypes library gave it an octet-stream MIME type and everything worked.
gcloud firebase test android run --type=game-loop --app=bazel-bin/tensorflow/lite/tools/benchmark/experimental/firebase/android/benchmark_model_firebase.apk --device model=flame,version=29 --other-files=/data/local/tmp/graph=network1.so
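The rename trick works because gcloud is written in Python and (as the error suggests) guesses the type from the file extension with the standard mimetypes library. A quick sketch of why .tflite fails and .so passes, assuming the standard type map (the filenames are just examples):

```python
import mimetypes

# .tflite is not in the standard type map, so no MIME type can be guessed
print(mimetypes.guess_type("network1.tflite")[0])  # None

# .so maps to a generic binary type, which satisfies the upload check
print(mimetypes.guess_type("network1.so")[0])  # application/octet-stream
```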
Opened an issue in google: https://issuetracker.google.com/issues/196230363
I am trying to enable Crashlytics for my NDK Android app. I've followed the guide here. I got stuck on Step 2.
Step 2: Enable native symbol uploading
To produce readable stack traces from NDK crashes, Crashlytics needs to know about the symbols in your native binaries. Our Gradle plugin includes the uploadCrashlyticsSymbolFileBUILD_VARIANT task to automate this process (to access this task, make sure nativeSymbolUploadEnabled is set to true).
For method names to appear in your stack traces, you must explicitly
invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each
build of your NDK library. For example:
>./gradlew app:assembleBUILD_VARIANT\
app:uploadCrashlyticsSymbolFileBUILD_VARIANT
What does "For method names to appear in your stack traces, you must explicitly invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each build of your NDK library." mean? I also saw that they left a line with gradlew. Is this a command on a command line? I am very lost. Can anyone help me achieve step 2?
I was also at a loss, but I finally understand.
The command should look like this.
First, move to the project directory:
cd /YourProjectRootPath/proj.android/
You can find the gradlew file in this directory.
Then execute gradlew to run two tasks:
Task1: assembleDebug or assembleRelease
Task2: uploadCrashlyticsSymbolFileDebug or uploadCrashlyticsSymbolFileRelease
The command is (debug example):
./gradlew XXXXXX:assembleDebug XXXXXX:uploadCrashlyticsSymbolFileDebug
Please replace "XXXXXX" with your app name.
If you don't know your app name, run the command below:
./gradlew tasks --all
You can see all the task names there, and you can find these two tasks:
XXXXXX:assembleDebug
XXXXXX:uploadCrashlyticsSymbolFileDebug
This "XXXXXX" is your "app name".
I don't know why Google describes such a complicated command using ">" and "\", but it's just a simple command,
./gradlew <TASK1> <TASK2>
When you add nativeSymbolUploadEnabled true to your Gradle file, as mentioned in Step 1, the Gradle plugin generates a new task in the format uploadCrashlyticsSymbolFileBUILD_VARIANT for each build type and architecture. Check this screenshot, where I only have one build type, "release", but three architectures. The tasks generated are:
uploadCrashlyticsSymbolFileArm8Release
uploadCrashlyticsSymbolFileUniversalRelease
uploadCrashlyticsSymbolFileX86_64Release
To run these tasks, you can either execute the command in a terminal, adapted for the desired build variant, e.g.
>./gradlew app:assembleX86_64\
app:uploadCrashlyticsSymbolFileX86_64Release
Or manually call those tasks from the Gradle tab. They need to be executed in this order (first assemble, then uploadCrashlyticsSymbolFile...) to make sure the binaries have been created for Crashlytics to generate and upload the symbol files.
To answer your question "What does 'For method names to appear in your stack traces, you must explicitly invoke the uploadCrashlyticsSymbolFileBUILD_VARIANT task after each build of your NDK library.' mean?": Crashlytics needs the symbol files in order to convert the crash report into a readable stack trace with method names and line numbers.
I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
job = insert_upload_job(project=project, dataset=dataset, table=table, write_disposition="WRITE_APPEND",
                        values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error:
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two bugs that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787) Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
The comment from John points to the source of the errors: you need to expose a webserver that listens on the $PORT environment variable and answers HTTP/1 or HTTP/2 requests.
However, I have a workaround: you can use Cloud Build for this. Simply define a build step with your container name and the args if needed.
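A minimal sketch of that workaround, assuming the image has already been pushed to Container Registry (the project and image names are placeholders): Cloud Build runs the container as a build step, so no web server is needed.

```yaml
# cloudbuild.yaml - run the R container as a build step instead of a Cloud Run service
steps:
  - name: "gcr.io/your-project/your-r-image:latest"
# Trigger this build on a schedule (e.g. Cloud Scheduler calling the Cloud Build API)
# to get the same cron-like behavior the tutorial aims for.
```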
Let me know if you need more guidance on this (strange) workaround.
The log message "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a known gVisor issue.
I am trying to setup an encrypted drive using the TPM2.0 module on a NUC7i5 on a new installation of Ubuntu server 18.04.
I compiled from sources and installed tpm2-tss (1.3.0), tpm2-abrmd (1.2.0) and tpm2-tools (3.0.2), and I tested some of the tpm2_* utilities and they seem to work. I also installed clevis v10.
I generated a secret using tpm2_getrandom 32 -o secret.key, and then tried to encrypt the secret using the TPM using the following command:
cat secret.key | sudo clevis encrypt tpm2 '{"pcr_ids":"7","pcr_bank":"sha256"}' > secret.jwe
When I do that however, I get the following error:
ERROR:
CreatePrimary Failed ! ErrorCode: 0x9a2
ERROR: Unable to run tpm2_createprimary
Creating TPM2 primary key failed!
When checking the status of the tpm2-abrmd service (systemctl status tpm2-abrmd.service), I get this error:
tpm2-abrmd[1308]: tpm2_response_get_handle: insufficient buffer to get handle
I tried different options for the clevis encryption, tried different ways to generate the secret, but I still can't figure out what the issue is.
The TPM module is a SLB9665 from Infineon Technologies.
I tried with and without taking ownership of the TPM, and always with a clear TPM every time.
Has anyone run into this issue?
So, apparently the issue was that I shouldn't have taken ownership of the TPM.
After resetting the TPM, the clevis command works.
I have a CodeBuild project that works fine.
When I try to use it in CodePipeline, it fails with an empty Repository and Submitter.
Failure logs are simple as:
01:34:17
[Container] 2018/03/08 01:34:10 Waiting for agent ping
01:34:17
[Container] 2018/03/08 01:34:12 Waiting for DOWNLOAD_SOURCE
There are no settings anywhere to adjust the CodeBuild phase.
How can I fix/customise it?
Recreate the build project from within CodePipeline, so it receives the source code from the provider called "CodePipeline".
Source of the information: https://apassionatechie.wordpress.com/2018/02/08/codebuild-aws-from-codepipeline-aws/
Just in case somebody needs an answer:
The issue was imprecise file naming in the CodeBuild stage, so CodeDeploy in its turn wasn't able to pull the ZIP file.
As a fix, I added an extra command to buildspec.yml:
post_build:
  commands:
    - zip -r Application.zip target/Application-0.0.1.war
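For context, a complete buildspec.yml with that post_build step might look like this (the build command and artifact names are illustrative and depend on your project):

```yaml
version: 0.2

phases:
  build:
    commands:
      - mvn package          # produces target/Application-0.0.1.war
  post_build:
    commands:
      - zip -r Application.zip target/Application-0.0.1.war

artifacts:
  files:
    - Application.zip        # the name CodeDeploy expects to pull
```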