The Sqitch deploy command fails with the error below:
sqitch deploy 'db:snowflake://amrutmonu:PASSWORD@wea87235.us-east-1.snowflakecomputing.com/DEMO_DB?Driver=Snowflake'
error:
Adding registry tables to db:snowflake://amrutmonu:@wea87235.us-east-1.snowflakecomputing.com/DEMO_DB?Driver=Snowflake
250001 (08001): Failed to connect to DB. Verify the account name is correct: wea
"/root/bin/snowsql" unexpectedly returned exit value 1
Could you please look into this and help us fix this issue?
Please remove the .snowflakecomputing.com suffix from the connection string, try it out, and let us know how it goes:
sqitch deploy 'db:snowflake://amrutmonu:PASSWORD@wea87235.us-east-1/DEMO_DB?Driver=Snowflake'
I am trying to use Amplify authentication in my development. I have configured the staging environment in the AWS console, and I am currently trying to set it up locally on my PC.
The command I was supposed to enter into the terminal is:
amplify pull --appId dxxxxxxxxxxx --envName staging
And this is the result:
C:\Users\xxxxx\xxxxx>amplify pull --appId dxxxxxxxxxxx --envName staging
Found whitespace in your key name (use quotes to include) at line 2,8 >>><<<<<<< HEAD
"ver ...
Error: Found whitespace in your key name (use quotes to include) at line 2,8 >>><<<<<<< HEAD
"ver ...
at error (C:\snapshot\node_modules\hjson\lib\hjson-parse.js:41:11)
at keyname (C:\snapshot\node_modules\hjson\lib\hjson-parse.js:157:76)
at object (C:\snapshot\node_modules\hjson\lib\hjson-parse.js:353:15)
at legacyRootValue (C:\snapshot\node_modules\hjson\lib\hjson-parse.js:428:38)
at Object.<anonymous> (C:\snapshot\node_modules\hjson\lib\hjson-parse.js:454:23)
at Function.parse (C:\snapshot\node_modules\amplify-cli-core\lib\jsonUtilities.js:91:22)
at Function.readJson (C:\snapshot\node_modules\amplify-cli-core\lib\jsonUtilities.js:47:32)
at StateManager.getData (C:\snapshot\node_modules\amplify-cli-core\lib\state-manager\stateManager.js:215:56)
at StateManager.getProjectConfig (C:\snapshot\node_modules\amplify-cli-core\lib\state-manager\stateManager.js:95:25)
at Object.checkProjectConfigVersion (C:\snapshot\node_modules\@aws-amplify\cli\lib\project-config-version-check.js:33:63)
Does anyone have any idea what I did wrong and how to fix it? Many thanks in advance! Cheers!
For me there was a Git merge conflict in a file under amplify/.config. That directory is hidden by default in directory listings (i.e. when listing with the ls command or in your IDE). Check it out and see if it doesn't solve your issue.
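A quick way to locate the conflicted file is to search for leftover Git conflict markers (the <<<<<<< HEAD in the error above); a minimal sketch, assuming a Unix-like shell at the project root:
ls -a amplify                 # show hidden entries such as amplify/.config
grep -rn "<<<<<<<" amplify    # find files still containing conflict markers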
I am new to GCP, and am trying to teach myself by deploying a simple R script in a Docker container that connects to BigQuery and writes the system time. I am following along with this great tutorial: https://arbenkqiku.github.io/create-docker-image-with-r-and-deploy-as-cron-job-on-google-cloud
So far, I have:
1.- Made the R script
library(bigrquery)
library(tidyverse)
# authenticate with the service-account key baked into the image
bq_auth("/home/rstudio/xxxx-xxxx.json", email="xxxx@xxxx.com")
project = "xxxx-project"
dataset = "xxxx-dataset"
table = "xxxx-table"
# upload the current system time as a one-row tibble, appending to the table
job = insert_upload_job(project=project, data=dataset, table=table, write_disposition="WRITE_APPEND",
values=Sys.time() %>% as_tibble(), billing=project)
2.- Made the Dockerfile
FROM rocker/tidyverse:latest
# install the BigQuery client package
RUN R -e "install.packages('bigrquery', repos='http://cran.us.r-project.org')"
# copy the service-account key and the script into the image
ADD xxxx-xxxx.json /home/rstudio
ADD big-query-tutorial_forQuestion.R /home/rstudio
EXPOSE 8080
CMD ["Rscript", "/home/rstudio/big-query-tutorial.R", "--host", "0.0.0.0"]
3.- Successfully run the container locally on my machine, with the system time being written to BigQuery
4.- Pushed the container to my container registry in Google Cloud
When I try to deploy the container to Cloud Run (fully managed) using the Google Cloud Console I get this error:
"Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information. Logs URL:https://console.cloud.google.com/logs/xxxxxx"
When I review the log, I find these noteworthy entries:
1.- A warning that says "Container called exit(0)"
2.- Two debug entries that say "Container Sandbox: Unsupported syscall setsockopt(0x8,0x0,0xb,0x3e78729eedc4,0x4,0x8). It is very likely that you can safely ignore this message and that this is not the cause of any error you might be troubleshooting. Please, refer to https://gvisor.dev/c/linux/amd64/setsockopt for more information."
When I check BigQuery, I find that the system time was written to the table, even though the container failed to deploy.
When I use the port specified in the tutorial (8787), Cloud Run throws an error about an "Invalid ENTRYPOINT".
What does this error mean? How can it be fixed? I'd greatly appreciate input as I'm totally stuck!
Thank you!
H.
John's comment identifies the right source of the errors: you need to expose a web server that listens on $PORT and answers HTTP/1 and HTTP/2 requests.
However, I have a solution: you can use Cloud Build for this. Simply define a build step with your container name and the args if needed, as in the sketch below.
Let me know if you need more guidance on this (strange) workaround.
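A minimal sketch of that workaround, assuming your image is already pushed to Container Registry (gcr.io/xxxx-project/my-r-image is a placeholder for your image name):
# cloudbuild.yaml: run the container as a single build step, which
# executes the image's default CMD, so no web server is required
steps:
- name: 'gcr.io/xxxx-project/my-r-image'

# submit the build without uploading any source
gcloud builds submit --no-source --config=cloudbuild.yaml
If you need this on a schedule, as in the tutorial, the same build can be triggered from Cloud Scheduler through the Cloud Build API.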
The log entry "Container Sandbox: Unsupported syscall setsockopt" from Google Cloud Run is documented as a known gVisor issue.
I am trying to run a pubsub.schedule function on the Firebase emulator. I tried to follow the instructions in the following links.
https://github.com/firebase/firebase-tools/issues/1748#issuecomment-609735979
https://firebase.google.com/docs/functions/local-shell#set_up_admin_credentials_optional
I keep getting the error below when I run firebase functions:shell.
Error: Server Error. connect ECONNREFUSED 127.0.0.1:4400
Any idea how to solve it? Thank you.
In my case, this was caused by Microsoft Defender; it may be any other firewall block.
Try running "firebase emulators:start" in the same cmd window; if you get a message from MS Defender, click "Allow". Then you can stop the emulators and run "firebase functions:shell" in the same cmd window, and it should work properly (see the sketch below).
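The whole sequence, from a single cmd window:
firebase emulators:start
:: click "Allow" if MS Defender prompts, then stop the emulators with Ctrl+C
firebase functions:shell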
In my case, everything worked after running "firebase emulators:start". It appeared that the command installed a few .jar files, and perhaps that was the reason.
I have upgraded my DB server from 10.1.36 to 10.2.30, and after running the mysql_upgrade command it fails at step 4. Please take a look at the screenshot. Does anybody have an idea what is wrong?
I believe you have to give the mysql user read and execute permissions on the /var/log/ and /var/log/mysql/ directories before it can access the /var/log/mysql/slow_log.CSV file.
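A minimal sketch of those permission changes, assuming the server runs as the mysql user and these are the actual paths on your system:
sudo chmod 755 /var/log /var/log/mysql    # make the log directories traversable and readable
sudo chown -R mysql:mysql /var/log/mysql  # ensure mysql owns its own log files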
I keep getting this error when trying to run riak commands.
The nodetool file does not exist in that directory. When I copy the nodetool file from erts-5.8.4, I start getting this error:
{"init terminating in do_boot",{'cannot get bootfile','start_clean.boot'}}
EDIT
I followed this great advice here: http://onerlang.blogspot.co.uk/2009/10/fighting-with-riak.html. Now when I run riak start I get:
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Crash dump was written to: erl_crash.dump
init terminating in do_boot ()
Error reading /abc/def/otp_src_R14B03/riak-1.1.2/dev/dev1/etc/app.config
{"init terminating in do_boot",{'cannot get bootfile','start_clean.boot'}}
EDIT 2
I seem to be hitting this problem: http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-November/006436.html.
Whenever I build from source (required for multiple nodes on the same machine), riak tries to use erts-5.8.5, whereas riak requires(?) erts-5.8.4.
Is it possible for me to tell riak not to use 5.8.5 and to use 5.8.4 instead?
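For example, would rebuilding with the R14B03 toolchain first on PATH force the release to embed erts-5.8.4? A sketch of what I mean (the path is a placeholder based on the error above):
# put the R14B03 Erlang (which ships erts-5.8.4) ahead of any newer
# install, then rebuild the dev release from scratch
export PATH=/abc/def/otp_src_R14B03/bin:$PATH
make clean
make devrel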