I have been using Boxfuse to deploy my app (PROD and TEST) without issues for well over a year, but now when I try to deploy to TEST (using the same command I've always used: boxfuse run app-name -env=test), I am getting this error:
"Running app/image failed!"
and that is it. It shows up just after "Waiting for AWS to create an encrypted AMI for app/image in us-west-2 (this may take up to 90 seconds)...", and the stack trace is:
at com.boxfuse.client.core.Boxfuse.run(Boxfuse.java:653)
at com.boxfuse.client.commandline.Main.run(Main.java:325)
at com.boxfuse.client.commandline.Main.main(Main.java:133)
I am not sure where to begin debugging this, as the error message is not very descriptive and nothing has changed in my AWS account/settings or app configuration/settings, etc. Any help or suggested places to start would be greatly appreciated. Thanks!
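One place to start might be re-running the same deployment with the client's debug output turned on. This is only a sketch: the -X flag is an assumption about the Boxfuse client version in use, but where it is supported the extra output usually names the underlying AWS call that failed:
$ boxfuse run app-name -env=test -X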
I am trying to get a simple Firebase Cloud Messaging web app to run. The README of the sample app says:
1. On the command line run `firebase serve -p 8081` using the Firebase CLI tool to launch a local server.
That gives the error
Error: Cannot understand what targets to deploy/serve. If you are using PowerShell make sure you place quotes around any comma-separated lists (ex: --only "functions,firestore").
So I tried
firebase serve -p 8081 .
which returns
Having trouble? Try firebase [command] --help
How can I get a local web app that receives messages up and running? Will similar troubles start again when hosting on Firebase?
I am stuck. Firebase feels like an endless series of weird docs and weird results; every error has been encountered by others, and there are long threads on each of them.
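For what it's worth, the error suggests the CLI could not work out which targets to serve. A minimal sketch, assuming the sample app uses Firebase Hosting and has a firebase.json in the project directory:
$ firebase serve --only hosting -p 8081
If the project also defines functions, they can be added to the --only list (quoted, as the error message suggests for PowerShell).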
I'm running a DAG in Google Cloud Composer (hosted Airflow) which runs fine in Airflow locally. All it does is print "Hello World". However, when I run it through Cloud Composer I receive the error:
*** Log file does not exist: /home/airflow/gcs/logs/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Fetching from: http://airflow-worker-d775d7cdd-tmzj9:8793/log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-d775d7cdd-tmzj9', port=8793): Max retries exceeded with url: /log/matts_custom_dag/main_test/2020-04-20T23:46:53.652833+00:00/2.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8825920160>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I've also tried making the DAG add data to a database, and it actually succeeds 50% of the time. However, it always returns this error message (and no other print statements or logs). Any help on why this might be happening would be much appreciated.
We also faced the same issue, then raised a support ticket with GCP and got the following reply:
The message is related to the latency of syncing logs from the Airflow workers to the web server; it takes at least a few minutes (depending on the number of objects and their size).
The total log size does not seem large, but it is enough to noticeably slow down synchronization, hence we recommend cleaning up/archiving the logs.
Basically, we recommend relying on Stackdriver logs instead, because of the latency due to the design of this sync.
I hope this will help you solve the problem.
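As a rough illustration of that Stackdriver route (a sketch only; the resource type and filter are assumptions and depend on your environment and DAG names), the task logs can be read with gcloud instead of the Airflow web UI:
$ gcloud logging read 'resource.type="cloud_composer_environment" AND textPayload:"main_test"' --limit 50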
I have the same problem after upgrading Google Cloud Composer from Airflow 1.10.3 to 1.10.6.
I can see in my logs that Airflow is trying to fetch the logs from a bucket whose name ends with -tenant, while the bucket in my account ends with -bucket.
In the configuration, I can see something weird too.
## airflow.cfg
[core]
remote_base_log_folder = gs://us-east1-dada-airflow-xxxxx-bucket/logs
## the running configuration also says
core remote_base_log_folder gs://us-east1-dada-airflow-xxxxx-tenant/logs env var
I wrote to google support and they said the team is working on a fix.
EDIT:
I've been accessing my logs with gsutil, replacing the bucket name suffix with -bucket:
gsutil cat gs://us-east1-dada-airflow-xxxxx-bucket/logs/...../5.logs
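To double-check which bucket actually exists in the project before swapping the suffix, listing the project's buckets is a quick sanity check (a sketch; the project ID is a placeholder):
$ gsutil ls -p <YOUR PROJECT ID>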
I have faced the same situation on multiple occasions.
As soon as a job finished, looking at the log in the Airflow web UI would give me this same error. But when I checked the same logs in the UI again after a minute or two, I could see them properly.
As per the above answers, it's a sync issue between the webserver and the worker node.
In general, the issue described here tends to be sporadic.
In certain situations, what could help is setting default-task-retries to a value that allows a task to be retried at least once.
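A hedged sketch of one way to apply that on a Composer environment from the gcloud CLI; the environment name and location are placeholders, and the override key assumes Airflow's [core] default_task_retries setting:
$ gcloud composer environments update <YOUR ENVIRONMENT> --location <LOCATION> --update-airflow-configs=core-default_task_retries=2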
This issue is resolved at least since Airflow version 1.10.10+composer.
I am trying to execute an Airflow script and get an error when checking the logs for the task_id in the Graph View:
Hi,
I am getting a "log file isn't local" error when running an Airflow script. Given below is the error message I get from the Graph View.
I am using a SQLite DB locally, and the function I am trying to execute connects to an Amazon Redshift DB.
Could anyone assist? Thanks.
The URL looks strange: http://:8793/log... the hostname is missing.
It seems to me that the base_url or web_server_host parameter is not configured correctly in airflow.cfg.
If those are all set up correctly, then the settings for log storage might be off.
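For reference, these are the kinds of entries to double-check; a sketch with assumed example values, not your actual configuration:
## airflow.cfg (example values, adjust to your setup)
[webserver]
base_url = http://localhost:8080
web_server_host = 0.0.0.0
[core]
base_log_folder = /home/airflow/logs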
I am finding that, when working with larger datasets, the kernel may die, something I also experience on my local machine. Sometimes it comes back and sometimes it doesn't, and even the Tree panel won't respond to terminate an errant kernel. E.g. "restart" does not work and the server itself seems to die, so the tree view won't respond or refresh. On my local machine I just kill the terminal instance and start over.
What is the "proper" way to restart everything?
FWIW, the instance seems pegged at 150% CPU utilization at the moment.
Related: is there any way to allow long-running work like this to complete?
I am trying to use a report generator (pandas-profiling) on a 2MM-record dataset. It works on my local machine.
Found it here: https://cloud.google.com/datalab/getting-started
FWIW, these commands can be used in the new command-line shell on the Cloud Console page (see https://cloud.google.com/shell/docs/), without the SDK on your machine. You need to modify the commands slightly since you will already be logged into your project; see the sketch after the restart commands below.
Stopping/starting VM instances
You may want to stop a Cloud Datalab managed VM instance to avoid incurring ongoing charges. To stop a Cloud Datalab managed machine instance, go to a command prompt, and run:
$ gcloud auth login
$ gcloud config set project <YOUR PROJECT ID>
$ gcloud preview app versions stop main
After confirming that you want to continue, wait for the command to complete, and make sure that the output indicates that the version has stopped. If you used a non-default instance name when deploying, please use that name instead of "main" in the stop command, above (and in the start command, below).
For restarting a stopped instance, run:
$ gcloud auth login
$ gcloud config set project <YOUR PROJECT ID>
$ gcloud preview app versions start main
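Per the Cloud Shell note above, a trimmed-down sketch: Cloud Shell is already authenticated, so the auth step can usually be dropped (and the project may already be set for you):
$ gcloud config set project <YOUR PROJECT ID>
$ gcloud preview app versions stop main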
We are trying to integrate HP ALM with Jenkins to execute our test cases as post-build actions. They will be triggered after build and deployment.
Here Jenkins runs on a Unix machine as master, and we made our Windows machine the slave. We also installed the ALM plugin and configured the job accordingly to execute a test set in ALM.
When building the job in Jenkins, we get the below error log in the console output as an authentication failure.
But it succeeds when we have a Windows machine as both master and slave in Jenkins.
Started by upstream project "HPTest" build number 11
originally caused by:
Started by user tibco
Building remotely on Slave in workspace C:\Users\C887755\workspace\Jenkins\workspace\HPTest\label\Slave
[Slave] $ C:\Users\C887755\workspace\Jenkins\workspace\HPTest\label\Slave\HpToolsLauncher.exe -paramfile props03072015154620620.txt
"Started..."
Timeout is set to: 3000000
Run mode is set to: RUN_LOCAL
Failed to login. Please contact system administrator for help.
Description: Authentication failed. Verify your user name and password.
Error: Cannot Login to QC: Authorization failed.
Build step 'Execute HP functional tests from HP ALM' changed build result to FAILURE
Finished: FAILURE
Kindly help us to get rid of it. We searched a lot on the web but couldn't find the right path to solve it.
Your help is much appreciated.
Thanks,
Madhan
Looks like a similar issue to this one. Do you have an HP Passport login? If so, you can download the fix.
Hope it helps.