I am trying to start Conclave in release mode, following the instructions below:
// Firstly, built the signing material:
./gradlew prepareForSigning -PenclaveMode=release
// Generated a signature from the signing material. The password for the sample external key is '12345'
openssl dgst -sha256 -out signing/signature.bin -sign signing/external_signing_private.pem -keyform PEM enclave/build/enclave/Release/signing_material.bin
// Finally built the signed enclave:
./gradlew build -PenclaveMode="release" -x test
./gradlew host:installDist
cd host/build/install
./host/bin/host
After invoking a request from the client, the attestation still prints:
Mode: SIMULATION
Is there any flag or step I'm missing?
You need to include -PenclaveMode=release when building the host:installDist target; otherwise it will build and package the default Simulation version, even if you previously built the release enclave.
Just run this command and it will use the release enclave instead:
./gradlew host:installDist -PenclaveMode=release
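For reference, the whole sequence with the mode flag applied consistently might look like this (paths and the sample key password are the ones from the question; this is a sketch, not verified against a live project):

```shell
# Build the signing material in release mode
./gradlew prepareForSigning -PenclaveMode=release

# Sign the material with the external key (sample key password: '12345')
openssl dgst -sha256 -out signing/signature.bin \
    -sign signing/external_signing_private.pem -keyform PEM \
    enclave/build/enclave/Release/signing_material.bin

# Build and package -- note the flag on BOTH Gradle commands
./gradlew build -PenclaveMode=release -x test
./gradlew host:installDist -PenclaveMode=release
```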
I have downloaded the YubiHSM SDK from the following URL: https://developers.yubico.com/YubiHSM2/Releases/.
However, another page says we need to validate the package by downloading the keys specified at the following URL: https://developers.yubico.com/Software_Projects/Software_Signing.html
What keyserver do we need to use to download a key?
I will be using the following command to receive a key:
gpg --keyserver pgp.mit.edu --recv-keys 70D7145F2F35C4745501829A1B21578FC4686BFE
And the command output is as follows:
gpg: keyserver receive failed: Server indicated a failure
Regards,
Sudheer
PGP key servers synchronise their keys so it shouldn't matter which one you use. Also, your system should be configured already to use a pool of key servers, so the --keyserver option is not required.
Try without the --keyserver option.
If that doesn't work, try with an alternative key server. For instance keyserver.ubuntu.com, sks.pgpkeys.eu, or keys.openpgp.org.
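A minimal sketch of that fallback order (the key ID is the one from the question; the server list is just the examples above, not exhaustive):

```shell
KEY=70D7145F2F35C4745501829A1B21578FC4686BFE

# Try the system's default keyserver (pool) first, then fall back to
# well-known alternatives until one succeeds
gpg --recv-keys "$KEY" || \
for ks in keyserver.ubuntu.com sks.pgpkeys.eu keys.openpgp.org; do
    gpg --keyserver "$ks" --recv-keys "$KEY" && break
done
```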
I am posting a follow-on question to one that I posted recently: Docker container failed to start when deploying to Google Cloud Run. I am new to GCP and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I've been able to successfully deploy the Docker container, but I cannot invoke it. I believe I'm misunderstanding something fundamental about APIs, and I'd greatly appreciate any input!
So far, I have:
1.- Used the plumber R package to expose the R code as a service by "decorating" it with special annotations
# script called big-query-tutorial.R
library(bigrquery)
library(tidyverse)
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
bq_auth(path = "/home/rstudio/xxxx-xxxx.json", email = "xxxx@xxxx.com")
#* @get /time
systime <- function(){
# upload Sys.time() to Big Query
insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND", values=Sys.time() %>% as_tibble(), billing=project)
}
2.- Translated the R code from (1) to a plumber API with this R script
# script called main.R
library(plumber)
r <- plumb("/home/rstudio/big-query-tutorial.R")
r$run(host="0.0.0.0", port=8080)
3.- Made the Dockerfile
FROM rocker/tidyverse:latest
# BEGIN rstudio/plumber layers
RUN apt-get update -qq && apt-get install -y --no-install-recommends \
git-core \
libssl-dev \
libcurl4-gnutls-dev \
curl \
libsodium-dev \
libxml2-dev
RUN R -e "install.packages('plumber', repos='http://cran.us.r-project.org')"
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
# add json file for authentication with BigQuery and necessary R scripts
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial.R /home/rstudio
ADD main.R /home/rstudio
# open port 8080 to traffic
EXPOSE 8080
# when the container starts, start the main.R script
ENTRYPOINT ["Rscript", "/home/rstudio/main.R", "--host", "0.0.0.0"]
4.- Successfully ran the container locally on my machine, with the system time being written to BigQuery when I visit http://0.0.0.0:8080/time and then refresh the browser.
5.- Pushed the container to my container registry in Google Cloud
6.- Successfully deployed the container to Cloud Run.
7.- Created a service account (i.e., xxxx@xxxx.iam.gserviceaccount.com) that has roles "Cloud Run Invoker" and "Cloud Scheduler Service Agent".
8.- Set up a Cloud Scheduler job by filling out the fields in the console as follows
Frequency: * * * * * (i.e., once per minute)
Timezone: Pacific Standard Time (PST)
Target: HTTP
URL: xxxx-xxxx.run.app
HTTP method: GET
Auth header: Add OIDC token
Service account: xxxx@xxxx.iam.gserviceaccount.com (i.e., account from (7))
Audience: xxxx-xxxx.run.app (I leave this field blank, it is automatically added)
When I click on "RUN NOW" in Cloud Scheduler, I get the error
httpRequest: {
status: 404
}
When I check the log for Cloud Run, every minute there is the 404 error. The request count under the "METRICS" tab averages out to 0.02/s.
Thank you!
-H.
A couple of recommendations:
Make sure your service account has roles/iam.serviceAccountTokenCreator and roles/cloudscheduler.serviceAgent, which enable impersonation, as well as roles/run.invoker to be able to call Cloud Run.
Also check the OIDC audience you have chosen.
A bit about the audience field in OIDC tokens:
You must set this field for the invoking service and specify the fully qualified URL of the receiving service. For example, if you are invoking Cloud Run or Cloud Functions, the id_token must include the URL/path of the service.
Example declaration:
gcloud beta scheduler jobs create http oidctest --schedule "5 * * * *" --http-method=GET \
--uri=https://hello-6w42z6vi3q-uc.a.run.app \
--oidc-service-account-email=schedulerunner@$PROJECT_ID.iam.gserviceaccount.com \
--oidc-token-audience=https://hello-6w42z6vi3q-uc.a.run.app
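One way to narrow down a 404 like this (a sketch; the real service URL is redacted in the question, and the /time route comes from the plumber script above): call the service yourself with an identity token. If an authenticated request to the bare domain returns 404 but adding the route returns 200, the scheduler job's URL just needs the path appended.

```shell
# Assumes gcloud is authenticated as an identity with run.invoker on the service
TOKEN=$(gcloud auth print-identity-token)

# Bare domain -- plumber defines no route for "/", so a 404 here is expected
curl -s -o /dev/null -w "%{http_code}\n" \
    -H "Authorization: Bearer $TOKEN" https://xxxx-xxxx.run.app/

# With the route defined in big-query-tutorial.R
curl -H "Authorization: Bearer $TOKEN" https://xxxx-xxxx.run.app/time
```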
I am deploying a Symfony app to Google Cloud. I followed this tutorial (https://cloud.google.com/community/tutorials/run-symfony-on-appengine-standard) and everything is OK.
I am using LexikJWTAuthenticationBundle for authentication, but I cannot generate the RSA key pair.
How can I run these commands on Google Cloud?
openssl genpkey -out config/jwt/private.pem -aes256 -algorithm rsa -pkeyopt rsa_keygen_bits:4096
openssl pkey -in config/jwt/private.pem -out config/jwt/public.pem -pubout
You can't run those commands on App Engine; you need to run them before deployment and deploy the keys with the app.
The filesystem is read-only, which means that you cannot create any files (outside of /tmp), which in turn means there is no way other than to generate the keys on the system you're deploying from (e.g. your computer).
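For example, run something like this locally before deploying (a sketch; the passphrase here is a placeholder, and it should match the pass_phrase configured for LexikJWTAuthenticationBundle):

```shell
mkdir -p config/jwt

# Generate an AES-256-encrypted 4096-bit RSA private key, non-interactively
openssl genpkey -out config/jwt/private.pem -aes256 -algorithm rsa \
    -pkeyopt rsa_keygen_bits:4096 -pass pass:changeme

# Derive the public key from it
openssl pkey -in config/jwt/private.pem -out config/jwt/public.pem \
    -pubout -passin pass:changeme
```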
I get the following messages every time I generate a C++ client from openapi-generator:
[main] INFO o.o.c.languages.AbstractCppCodegen - Environment variable CPP_POST_PROCESS_FILE not defined so the C++ code may not be properly formatted. To define it, try 'export CPP_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"' (Linux/Mac)
[main] INFO o.o.c.languages.AbstractCppCodegen - NOTE: To enable file post-processing, 'enablePostProcessFile' must be set to `true` (--enable-post-process-file for CLI).
[main] WARN o.o.codegen.DefaultCodegen - The value (generator's option) must be either boolean or string. Default to `false`.
I used the following command to run the generator:
npx openapi-generator generate -i api.yaml -g cpp-restsdk -o %CD%
How can I fix these messages?
Please use npx @openapitools/openapi-generator-cli instead, as https://www.npmjs.com/package/@openapitools/openapi-generator-cli is the official repo of the npm wrapper for openapi-generator.
To enable file post-processing, please add --enable-post-process-file to the command, e.g.
export CPP_POST_PROCESS_FILE="/usr/local/bin/clang-format -i"
npx @openapitools/openapi-generator-cli generate -i api.yaml -g cpp-restsdk -o %CD% --enable-post-process-file
I have a pipeline with a stage that executes a script that is supposed to decrypt a key file, but the GitLab Runner fails:
$ scripts/decrypt.sh $LWCMAP_SERVER_KEY
bad decrypt
139810674749504:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:../crypto/evp/evp_enc.c:536:
ERROR: Job failed: exit code 1
The $LWCMAP_SERVER_KEY contains a passphrase that is used to decrypt the key inside a folder, using the following command in the .gitlab-ci.yml:
- scripts/decrypt.sh $LWCMAP_SERVER_KEY
And the content of the shell script is just the OpenSSL command to decrypt the file:
openssl aes-256-cbc -k $1 -in assets/server.key.enc -out assets/decripted_server.key -d
I wonder why the job fails with "bad decrypt" since the exact same command executes just fine locally. I even calculated the md5 of both the file and the key used on decryption, and they are the exact same on the runner and locally (which means it is not corrupted data).
Any ideas?
Edit:
Locally openssl version outputs "LibreSSL 2.8.3", and on the server, I upgraded it to the same version. On the Runner's container though, the output is "OpenSSL 1.1.0j 20 Nov 2018".
So I think I figured out why, and what to do to fix it.
It does seem like LibreSSL 2.8.x is incompatible with OpenSSL 1.1.x here.
This means that files encrypted with one implementation cannot be decrypted with the other.
What I did instead was to ssh into the Ubuntu VM and run the encryption there. Since the CI is going to run on any of our Ubuntu VMs and will be deployed on similar machines using the same TLS implementation, I do not anticipate any further problems with key file encryption/decryption.
This means that I would be unable to test decryption on my local machine, but I'm sure I can live with that :-)
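If you do want ciphertext that is portable across versions, one common culprit with openssl enc is the default key-derivation digest, which OpenSSL changed from MD5 to SHA-256 in 1.1.0. Pinning it explicitly on both sides avoids the mismatch (a sketch with -md sha256 as the assumed choice and a throwaway payload; the filenames are illustrative):

```shell
PASS=example-passphrase
echo "example payload" > server.key

# Encrypt (e.g. locally) with the key-derivation digest pinned
openssl aes-256-cbc -e -k "$PASS" -md sha256 -in server.key -out server.key.enc

# Decrypt (e.g. on the runner) with the same pinned digest --
# works regardless of each side's built-in default
openssl aes-256-cbc -d -k "$PASS" -md sha256 -in server.key.enc -out server.key.dec
```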