Why is Airflow's webserver exiting after filling up the DagBag - airflow

I have been using Airflow for a month without any such issues, but suddenly the webserver stopped working: it exits after filling up the DagBag, and no errors are displayed either.
I even tried killing all the airflow and gunicorn processes. Still no luck.
This is what I see when executing airflow webserver:
[uname@airflow airflow]$ airflow webserver
[2019-03-28 07:51:54,946] {settings.py:174} INFO - settings.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800, pid=4128
[2019-03-28 07:51:55,356] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-03-28 07:51:55,459] {configuration.py:255} WARNING - section/key [rest_api_plugin/rest_api_plugin_http_token_header_name] not found in config
[2019-03-28 07:51:55,459] {configuration.py:255} WARNING - section/key [rest_api_plugin/rest_api_plugin_expected_http_token] not found in config
[Airflow ASCII-art banner]
[2019-03-28 07:51:55,705] {models.py:273} INFO - Filling up the DagBag from /home/uname/airflow/dags
[uname@airflow airflow]$
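A silent exit like this can sometimes be traced to a stale PID file left by an earlier daemonized run, or surfaced by running the server in the foreground in debug mode; a rough sketch, assuming the default AIRFLOW_HOME of ~/airflow:
# remove PID files a previous 'airflow webserver -D' may have left behind
rm -f ~/airflow/airflow-webserver.pid ~/airflow/airflow-webserver-monitor.pid
# run in the foreground with the Flask debug server so the real error is printed
airflow webserver --debug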

I had the same error. In my case, the problem was caused by the default memory allocated to Docker: it is not enough to start all of the Airflow services. My solution was to allocate more memory to Docker, which worked for me.
Docker Desktop > Settings > Resources > Advanced > Memory (increase the value here)
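To confirm that memory really is the bottleneck, a quick check with standard Docker commands:
# total memory available to the Docker VM
docker info | grep -i 'total memory'
# point-in-time memory usage per running container
docker stats --no-stream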
I hope this helps.

Related

Airflow standalone sqlite3 Integrity Error

I'm trying to run airflow standalone after following these instructions https://airflow.apache.org/docs/apache-airflow/stable/start/local.html on "Ubuntu on Windows". I already placed the AirflowHome folder inside C:/Users/my_user_name/, and that's essentially the only change I made. However, I'm getting an IntegrityError, and the documentation seems very cryptic. Could you help me out?
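For reference, this is roughly how the folder ends up wired on the WSL side (the mount path below is my assumption, since Windows drives appear under /mnt in WSL):
export AIRFLOW_HOME=/mnt/c/Users/my_user_name/AirflowHome
The output of airflow standalone is: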
standalone | Starting Airflow Standalone
standalone | Checking database is initialized
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [airflow.models.crypto] empty cryptography key - values will not be stored encrypted.
standalone | Database ready
[2022-05-26 10:21:49,812] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 10:21:49,885] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 10:21:50,076] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 10:21:50,127] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
triggerer | [Airflow ASCII-art banner]
triggerer | [2022-05-26 10:21:58,355] {triggerer_job.py:101} INFO - Starting the triggerer
scheduler | [Airflow ASCII-art banner]
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Starting gunicorn 20.1.0
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Listening at: http://0.0.0.0:8793 (233)
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Using worker: sync
scheduler | [2022-05-26 10:21:58 -0400] [234] [INFO] Booting worker with pid: 234
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:693} INFO - Starting the scheduler
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:698} INFO - Processing each file at most -1 times
scheduler | [2022-05-26 10:21:58,619] {executor_loader.py:106} INFO - Loaded executor: SequentialExecutor
scheduler | [2022-05-26 10:21:58,622] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 235
scheduler | [2022-05-26 10:21:58,624] {scheduler_job.py:1218} INFO - Resetting orphaned tasks for active dag runs
scheduler | [2022-05-26 10:21:58,639] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
scheduler | [2022-05-26 10:21:58 -0400] [236] [INFO] Booting worker with pid: 236
scheduler | [2022-05-26 10:21:58,709] {manager.py:399} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Starting gunicorn 20.1.0
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Listening at: http://0.0.0.0:8080 (231)
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Using worker: sync
webserver | [2022-05-26 10:21:59 -0400] [239] [INFO] Booting worker with pid: 239
webserver | [2022-05-26 10:21:59 -0400] [240] [INFO] Booting worker with pid: 240
webserver | [2022-05-26 10:21:59 -0400] [241] [INFO] Booting worker with pid: 241
webserver | [2022-05-26 10:22:00 -0400] [242] [INFO] Booting worker with pid: 242
webserver | [2022-05-26 10:22:02,638] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:02,644] {manager.py:587} ERROR - Remove Permission to Role Error: DELETE statement on table 'ab_permission_view_role' expected to delete 1 row(s); Only 0 were matched.
webserver | [2022-05-26 10:22:02,776] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py:1461 SAWarning: DELETE statement on table 'ab_permission_view' expected to delete 1 row(s); 0 were matched. Please set confirm_deleted_rows=False within the mapper configuration to prevent this warning.
webserver | [2022-05-26 10:22:02,817] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:2459 SAWarning: Identity map already had an identity for (<class 'airflow.www.fab_security.sqla.models.Permission'>, (185,), None), replacing it with newly flushed object. Are there load operations occurring inside of an event handler within the flush?
webserver | [2022-05-26 10:22:03,097] {manager.py:508} INFO - Created Permission View: menu access on Permissions
webserver | [2022-05-26 10:22:03,168] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:03,175] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver | [2022-05-26 10:22:03,188] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
standalone |
standalone | Airflow is ready
standalone | Login with username: admin password: qhbDVvxz9ARPaWQt
standalone | Airflow Standalone is for development purposes only. Do not use this in production!
When trying to open Airflow in the browser, I always get the following [screenshot missing], and the same for http://0.0.0.0:8793/.
Almost exactly the same happens when I try to run airflow webserver.
Thanks!
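One thing worth trying for the UNIQUE constraint failures above is recreating the standalone metadata database; this is a hedged suggestion, and it wipes all local Airflow state, which is usually acceptable for a fresh standalone install:
# burn down and rebuild the SQLite metadata DB, then relaunch
airflow db reset
airflow standalone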

AWS Lambda throws IllegalStateException: Missing HTTP method for Quarkus Amazon Lambda HTTP API

I am creating a Quarkus Amazon Lambda HTTP function to do some stuff, but I am getting the error below.
For routing I am using Funqy. The local start-api with the lambda works perfectly, but when I invoke the API with sam invoke (and after it has been pushed and published to AWS Lambda) I still get this error.
My goal is to create an HTTP API as a Quarkus Amazon Lambda; thanks for the help.
[Quarkus ASCII-art banner]
2022-04-01 20:12:47,190 INFO [io.quarkus] (main) json-transposition-tool-quarkus-http-aws-lambda 1.0-SNAPSHOT on JVM (powered by Quarkus 2.7.0.Final) started in 4.149s.
2022-04-01 20:12:47,192 INFO [io.quarkus] (main) Profile prod activated.
2022-04-01 20:12:47,192 INFO [io.quarkus] (main) Installed features: [amazon-lambda, cdi, funqy-http, security, smallrye-context-propagation, vertx]
2022-04-01 20:12:47,225 ERROR [qua.ama.lam.http] (main) Request Failure: java.lang.IllegalStateException: Missing HTTP method in request event
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.nettyDispatch(LambdaHttpHandler.java:176)
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.handleRequest(LambdaHttpHandler.java:63)
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.handleRequest(LambdaHttpHandler.java:43)
at io.quarkus.amazon.lambda.runtime.AmazonLambdaRecorder.handle(AmazonLambdaRecorder.java:79)
at io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler.handleRequest(QuarkusStreamHandler.java:58)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at lambdainternal.EventHandlerLoader$StreamMethodRequestHandler.handleRequest(EventHandlerLoader.java:375)
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:899)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:262)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:199)
at lambdainternal.AWSLambda.main(AWSLambda.java:193)
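For context, that IllegalStateException comes from LambdaHttpHandler when the incoming event carries no HTTP method, which is what happens if the function is invoked with a bare JSON payload instead of an API Gateway-style HTTP event. A hedged sketch of a local invocation with a minimal API Gateway HTTP API (payload v2.0) event, assuming that is the shape quarkus-amazon-lambda-http expects (the function name and path here are hypothetical):
# event.json: the minimal fields needed for the handler to see an HTTP method
cat > event.json <<'EOF'
{
  "version": "2.0",
  "routeKey": "GET /hello",
  "rawPath": "/hello",
  "headers": {},
  "requestContext": { "http": { "method": "GET", "path": "/hello" } },
  "isBase64Encoded": false
}
EOF
# invoke the function locally with that event instead of a raw payload
sam local invoke JsonTranspositionFunction --event event.json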

Unable to start Apache Airflow webserver due to dagbag /dev/null error

I have installed Apache Airflow v2.1.0 on Ubuntu running on the Windows Subsystem for Linux (WSL).
After installation, I created an admin user and also set the AIRFLOW_HOME variable in the ~/.bashrc file as below:
export AIRFLOW_HOME=~/airflow
However, when I try to start the Airflow webserver it does not work, and I get the text below:
[Airflow ASCII-art banner]
[2021-06-12 03:29:19,807] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Access Logformat:
=================================================================
[2021-06-12 03:29:23 +0530] [12464] [INFO] Starting gunicorn 20.1.0
[2021-06-12 03:29:23 +0530] [12464] [INFO] Listening at: http://0.0.0.0:8080 (12464)
[2021-06-12 03:29:23 +0530] [12464] [INFO] Using worker: sync
[2021-06-12 03:29:23 +0530] [12466] [INFO] Booting worker with pid: 12466
[2021-06-12 03:29:23 +0530] [12467] [INFO] Booting worker with pid: 12467
[2021-06-12 03:29:23 +0530] [12468] [INFO] Booting worker with pid: 12468
[2021-06-12 03:29:23 +0530] [12469] [INFO] Booting worker with pid: 12469
Can anyone please help me to get rid of this issue?
Check whether you have a user named airflow. Basically, it looks like Airflow is trying to find the path /home/airflow and either can't find it or that mount path is missing. I am not sure how Ubuntu as a Windows subsystem is configured.
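Both guesses are easy to check from the WSL shell; a short sketch using standard commands (airflow config get-value exists in Airflow 2.x):
# does a user named airflow exist, and what is its home directory?
id airflow
# is AIRFLOW_HOME from ~/.bashrc visible in the current shell?
source ~/.bashrc && echo "$AIRFLOW_HOME"
# where does Airflow itself think the DAGs folder lives?
airflow config get-value core dags_folder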

Airflow webserver not starting while using helm chart on minikube

I'm trying to run Airflow locally (to test it before deployment) using minikube and the stable/airflow helm chart, but airflow-webserver doesn't start due to a gunicorn issue.
Helm: v2.14.3
Kubernetes: v1.15.2
Minikube: v1.3.1
Helm chart image: puckel/docker-airflow
These are the steps:
minikube start
helm install --namespace "airflow" --name "airflow" stable/airflow
Logs are:
Thu Sep 12 07:29:54 UTC 2019 - waiting for Postgres... 1/20
Thu Sep 12 07:30:00 UTC 2019 - waiting for Postgres... 2/20
waiting 60s...
executing webserver...
[2019-09-12 07:31:05,745] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=1
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:06,030] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[Airflow ASCII-art banner]
[2019-09-12 07:31:06,585] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
[2019-09-12 07:31:07,676] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=21
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:07 +0000] [21] [INFO] Starting gunicorn 19.9.0
[2019-09-12 07:31:07 +0000] [21] [INFO] Listening at: http://0.0.0.0:8080 (21)
[2019-09-12 07:31:07 +0000] [21] [INFO] Using worker: sync
[2019-09-12 07:31:07 +0000] [25] [INFO] Booting worker with pid: 25
[2019-09-12 07:31:07 +0000] [26] [INFO] Booting worker with pid: 26
[2019-09-12 07:31:07 +0000] [27] [INFO] Booting worker with pid: 27
[2019-09-12 07:31:07 +0000] [28] [INFO] Booting worker with pid: 28
[2019-09-12 07:31:08,444] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,446] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,545] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,669] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:10,047] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:20,932] {{cli.py:825}} ERROR - [0 / 0] some workers seem to have died and gunicorn did not restart them as expected
[2019-09-12 07:31:22,095] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:22 +0000] [25] [INFO] Parent changed, shutting down: <Worker 25>
[2019-09-12 07:31:22 +0000] [25] [INFO] Worker exiting (pid: 25)
[2019-09-12 07:31:32 +0000] [28] [INFO] Parent changed, shutting down: <Worker 28>
[2019-09-12 07:31:32 +0000] [28] [INFO] Worker exiting (pid: 28)
[2019-09-12 07:31:33,289] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:33,324] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:35 +0000] [26] [INFO] Parent changed, shutting down: <Worker 26>
[2019-09-12 07:31:35 +0000] [26] [INFO] Worker exiting (pid: 26)
[2019-09-12 07:31:35 +0000] [27] [INFO] Parent changed, shutting down: <Worker 27>
[2019-09-12 07:31:35 +0000] [27] [INFO] Worker exiting (pid: 27)
[2019-09-12 07:33:32,017] {{cli.py:832}} ERROR - No response from gunicorn master within 120 seconds
[2019-09-12 07:33:32,018] {{cli.py:833}} ERROR - Shutting down webserver
I can run that Docker image locally with docker-compose with no issues, but no luck using helm; it fails and restarts constantly.
It turns out the issue was that the minikube configuration wasn't making the Postgres pod available; after editing the pod deployment with the IP of the Postgres instance, it worked.
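For anyone hitting the same symptom, a sketch of confirming that Postgres is reachable inside the cluster (resource names follow the chart's defaults and may differ in your release):
# is the postgres pod running, and what are the service addresses?
kubectl -n airflow get pods -o wide
kubectl -n airflow get svc
# probe the database port from the webserver pod (pod name is an example; requires nc in the image)
kubectl -n airflow exec -it airflow-web-xxxx -- nc -zv airflow-postgresql 5432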

cordapp-example on a VM does not appear to start correctly

I'm attempting to deploy cordapp-example to a Google Compute Engine VM (Ubuntu 16.04). I am using OpenJDK (I know, but I'm not able to use the Oracle JDK). I've attempted to follow the prerequisites.
However, I think at least one problem results from the advice to "Do not click while 8 additional terminal windows start up" (this isn't going to occur when ssh'ing to a remote VM).
runnodes never results in "Webserver started up in XX.X sec" or "Node for "NodeC" started up and registered in XX.XX sec" and (therefore) does not result in a process listening on :10007.
Console output:
Starting nodes in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes
Starting corda.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyB on debug port 5005
Starting corda-webserver.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyB on debug port 5006
Starting corda.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/Controller on debug port 5007
Starting corda.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyC on debug port 5008
Starting corda-webserver.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyC on debug port 5009
Starting corda.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyA on debug port 5010
Starting corda-webserver.jar in /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyA on debug port 5011
Started 7 processes
Finished starting nodes
Listening for transport dt_socket at address: 5011
Listening for transport dt_socket at address: 5009
Listening for transport dt_socket at address: 5006
Listening for transport dt_socket at address: 5005
Listening for transport dt_socket at address: 5008
Listening for transport dt_socket at address: 5010
Listening for transport dt_socket at address: 5007
Unknown command line arguments: no-local-shell is not a recognized option
Unknown command line arguments: no-local-shell is not a recognized option
Unknown command line arguments: no-local-shell is not a recognized option
[Corda ASCII-art banner, printed four times]
--- Corda Open Source 1.0.0 (31be2a4) -----------------------------------------------
Logs can be found in : /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyA/logs
Logs can be found in : /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyB/logs
Logs can be found in : /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/Controller/logs
Logs can be found in : /home/dazwilkin/cordapp-example/kotlin-source/build/nodes/PartyC/logs
Database connection url is : jdbc:h2:tcp://10.138.0.5:33219/node
Database connection url is : jdbc:h2:tcp://10.138.0.5:41313/node
Database connection url is : jdbc:h2:tcp://10.138.0.5:36079/node
Database connection url is : jdbc:h2:tcp://10.138.0.5:38015/node
Incoming connection address : localhost:10002
Incoming connection address : localhost:10008
Incoming connection address : localhost:10005
Incoming connection address : localhost:10011
Listening on port : 10002
RPC service listening on port : 10003
Providing network services : corda.notary.validating
Loaded CorDapps : kotlin-source-0.1, corda-finance-1.0.0, corda-core-1.0.0
Node for "Controller" started up and registered in 46.26 sec
Listening on port : 10008
RPC service listening on port : 10009
Listening on port : 10011
RPC service listening on port : 10012
Listening on port : 10005
RPC service listening on port : 10006
Loaded CorDapps : kotlin-source-0.1, corda-finance-1.0.0, corda-core-1.0.0
Node for "PartyB" started up and registered in 51.99 sec
Loaded CorDapps : kotlin-source-0.1, corda-finance-1.0.0, corda-core-1.0.0
Node for "PartyC" started up and registered in 52.75 sec
Loaded CorDapps : kotlin-source-0.1, corda-finance-1.0.0, corda-core-1.0.0
Node for "PartyA" started up and registered in 53.48 sec
and ss --tcp --listening, filtered and sorted, results in:
*:5005
*:5007
*:5008
*:5010
*:ssh
:::10002
:::10003
:::10005
:::10006
:::10008
:::10009
:::10011
:::10012
:::33219
:::36079
:::38015
:::41313
:::ssh
For what it's worth:
the debug ports (5006, 5009, 5011) aren't reported by ss
there's no reference by ss to 10007 being used, and I can't browse to it
this error does not look good: "no-local-shell is not a recognized option"
It's unclear to me what I can debug. runnodes is opaque and I don't see obvious errors suggesting nodes are missing.
Any pointers would be appreciated.
P.S. There do appear to be 8 java processes running, 4 of which include a flag --no-local-shell, so perhaps that's a difference with OpenJDK and perhaps a (or the) problem?
I have also experienced this issue. It's something I'm looking into, but in the meantime a simple workaround is to run the webservers manually by running the following in the nodes directory (kotlin-source/build/nodes, I think):
find . -name corda-webserver.jar -execdir sh -c 'java -jar {} &' \;
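If the webservers come up after that, the previously missing port should show as listening; a quick check with the same ss tool used above:
ss --tcp --listening | grep 10007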
As an aside, Corda is tested against the Azul build of OpenJDK in addition to the Oracle JDK. I don't believe the problem is related to OpenJDK; I think it's a timing issue in runnodes.jar.
