I'm trying to run airflow standalone after following these instructions https://airflow.apache.org/docs/apache-airflow/stable/start/local.html on "Ubuntu on Windows". I placed the AirflowHome folder inside C:/Users/my_user_name/, and that's essentially the only change I made. However, I'm getting an IntegrityError, and the documentation seems very cryptic. Could you help me out?
standalone | Starting Airflow Standalone
standalone | Checking database is initialized
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
WARNI [airflow.models.crypto] empty cryptography key - values will not be stored encrypted.
standalone | Database ready
[2022-05-26 10:21:49,812] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
[2022-05-26 10:21:49,885] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
[2022-05-26 10:21:50,076] {manager.py:508} INFO - Created Permission View: menu access on Permissions
[2022-05-26 10:21:50,127] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
triggerer | ____________ _____________
triggerer | ____ |__( )_________ __/__ /________ __
triggerer | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
triggerer | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
triggerer | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
triggerer | [2022-05-26 10:21:58,355] {triggerer_job.py:101} INFO - Starting the triggerer
scheduler | ____________ _____________
scheduler | ____ |__( )_________ __/__ /________ __
scheduler | ____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
scheduler | ___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
scheduler | _/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Starting gunicorn 20.1.0
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Listening at: http://0.0.0.0:8793 (233)
scheduler | [2022-05-26 10:21:58 -0400] [233] [INFO] Using worker: sync
scheduler | [2022-05-26 10:21:58 -0400] [234] [INFO] Booting worker with pid: 234
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:693} INFO - Starting the scheduler
scheduler | [2022-05-26 10:21:58,614] {scheduler_job.py:698} INFO - Processing each file at most -1 times
scheduler | [2022-05-26 10:21:58,619] {executor_loader.py:106} INFO - Loaded executor: SequentialExecutor
scheduler | [2022-05-26 10:21:58,622] {manager.py:156} INFO - Launched DagFileProcessorManager with pid: 235
scheduler | [2022-05-26 10:21:58,624] {scheduler_job.py:1218} INFO - Resetting orphaned tasks for active dag runs
scheduler | [2022-05-26 10:21:58,639] {settings.py:55} INFO - Configured default timezone Timezone('UTC')
scheduler | [2022-05-26 10:21:58 -0400] [236] [INFO] Booting worker with pid: 236
scheduler | [2022-05-26 10:21:58,709] {manager.py:399} WARNING - Because we cannot use more than 1 thread (parsing_processes = 2) when using sqlite. So we set parallelism to 1.
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Starting gunicorn 20.1.0
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Listening at: http://0.0.0.0:8080 (231)
webserver | [2022-05-26 10:21:59 -0400] [231] [INFO] Using worker: sync
webserver | [2022-05-26 10:21:59 -0400] [239] [INFO] Booting worker with pid: 239
webserver | [2022-05-26 10:21:59 -0400] [240] [INFO] Booting worker with pid: 240
webserver | [2022-05-26 10:21:59 -0400] [241] [INFO] Booting worker with pid: 241
webserver | [2022-05-26 10:22:00 -0400] [242] [INFO] Booting worker with pid: 242
webserver | [2022-05-26 10:22:02,638] {manager.py:585} INFO - Removed Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:02,644] {manager.py:587} ERROR - Remove Permission to Role Error: DELETE statement on table 'ab_permission_view_role' expected to delete 1 row(s); Only 0 were matched.
webserver | [2022-05-26 10:22:02,776] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/persistence.py:1461 SAWarning: DELETE statement on table 'ab_permission_view' expected to delete 1 row(s); 0 were matched. Please set confirm_deleted_rows=False within the mapper configuration to prevent this warning.
webserver | [2022-05-26 10:22:02,817] {manager.py:543} INFO - Removed Permission View: menu_access on Permissions
webserver | /home/carlos/.local/lib/python3.8/site-packages/sqlalchemy/orm/session.py:2459 SAWarning: Identity map already had an identity for (<class 'airflow.www.fab_security.sqla.models.Permission'>, (185,), None), replacing it with newly flushed object. Are there load operations occurring inside of an event handler within the flush?
webserver | [2022-05-26 10:22:03,097] {manager.py:508} INFO - Created Permission View: menu access on Permissions
webserver | [2022-05-26 10:22:03,168] {manager.py:568} INFO - Added Permission menu access on Permissions to role Admin
webserver | [2022-05-26 10:22:03,175] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
webserver | [2022-05-26 10:22:03,188] {manager.py:570} ERROR - Add Permission to Role Error: (sqlite3.IntegrityError) UNIQUE constraint failed: ab_permission_view_role.permission_view_id, ab_permission_view_role.role_id
webserver | [SQL: INSERT INTO ab_permission_view_role (permission_view_id, role_id) VALUES (?, ?)]
webserver | [parameters: (185, 1)]
webserver | (Background on this error at: http://sqlalche.me/e/14/gkpj)
standalone |
standalone | Airflow is ready
standalone | Login with username: admin password: qhbDVvxz9ARPaWQt
standalone | Airflow Standalone is for development purposes only. Do not use this in production!
When trying to open Airflow, I always get the following:
And the same for http://0.0.0.0:8793/
Almost exactly the same happens when I try to run airflow webserver.
Thanks!
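UPDATE: reading the log more closely, the UNIQUE constraint failure says the webserver tried to re-insert a permission-to-role row that the standalone init had already created. A sketch of how one could confirm that against the default SQLite metadata DB under ~/airflow (the 185 and 1 come from the log parameters above, and the sqlite3 CLI must be installed), plus the blunt dev-only fix of re-initializing the database:

# Check whether the row the failing INSERT collided with already exists
sqlite3 ~/airflow/airflow.db \
  "SELECT * FROM ab_permission_view_role WHERE permission_view_id = 185 AND role_id = 1;"

# Destructive, dev-only: drop and re-create all metadata tables
# (removes users, connections, DAG runs), then start over
airflow db reset
airflow standalone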
Related
I am creating a Quarkus Amazon Lambda HTTP application, but I am getting the error below.
For routing I am using Funqy. Starting the API locally works perfectly, but when I invoke the API with sam invoke, and also after it has been pushed and published to AWS Lambda, I still get this error.
My goal is to create an HTTP API with Quarkus and Amazon Lambda. Thanks for the help.
__ ____ __ _____ ___ __ ____ ______
--/ __ \/ / / / _ | / _ \/ //_/ / / / __/
-/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2022-04-01 20:12:47,190 INFO [io.quarkus] (main) json-transposition-tool-quarkus-http-aws-lambda 1.0-SNAPSHOT on JVM (powered by Quarkus 2.7.0.Final) started in 4.149s.
2022-04-01 20:12:47,192 INFO [io.quarkus] (main) Profile prod activated.
2022-04-01 20:12:47,192 INFO [io.quarkus] (main) Installed features: [amazon-lambda, cdi, funqy-http, security, smallrye-context-propagation, vertx]
2022-04-01 20:12:47,225 ERROR [qua.ama.lam.http] (main) Request Failure: java.lang.IllegalStateException: Missing HTTP method in request event
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.nettyDispatch(LambdaHttpHandler.java:176)
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.handleRequest(LambdaHttpHandler.java:63)
at io.quarkus.amazon.lambda.http.LambdaHttpHandler.handleRequest(LambdaHttpHandler.java:43)
at io.quarkus.amazon.lambda.runtime.AmazonLambdaRecorder.handle(AmazonLambdaRecorder.java:79)
at io.quarkus.amazon.lambda.runtime.QuarkusStreamHandler.handleRequest(QuarkusStreamHandler.java:58)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.base/java.lang.reflect.Method.invoke(Unknown Source)
at lambdainternal.EventHandlerLoader$StreamMethodRequestHandler.handleRequest(EventHandlerLoader.java:375)
at lambdainternal.EventHandlerLoader$2.call(EventHandlerLoader.java:899)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:262)
at lambdainternal.AWSLambda.startRuntime(AWSLambda.java:199)
at lambdainternal.AWSLambda.main(AWSLambda.java:193)
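One hedged pointer on the trace above: quarkus-amazon-lambda-http expects an API Gateway HTTP API (payload format 2.0) event, where the method lives under requestContext.http.method, so invoking the function with a bare JSON payload produces exactly this "Missing HTTP method in request event" failure. A sketch of a minimal valid event (MyFunction and the /hello route are placeholders for your own function and path):

# Write a synthetic payload-v2.0 event that carries an HTTP method
cat > event.json <<'EOF'
{
  "version": "2.0",
  "routeKey": "GET /hello",
  "rawPath": "/hello",
  "rawQueryString": "",
  "headers": {},
  "requestContext": {
    "http": {
      "method": "GET",
      "path": "/hello",
      "protocol": "HTTP/1.1",
      "sourceIp": "127.0.0.1",
      "userAgent": "sam-local"
    }
  },
  "isBase64Encoded": false
}
EOF

# Invoke locally with that event instead of an empty payload
sam local invoke MyFunction --event event.json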
I have installed Apache Airflow v2.1.0 on Ubuntu running on the Windows Subsystem for Linux (WSL).
After installation, I created an admin user and set the AIRFLOW_HOME variable in the ~/.bashrc file as below:
export AIRFLOW_HOME=~/airflow
However, when I try to start the Airflow webserver it does not work, and I get the output below:
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2021-06-12 03:29:19,807] {dagbag.py:487} INFO - Filling up the DagBag from /dev/null
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
Access Logformat:
=================================================================
[2021-06-12 03:29:23 +0530] [12464] [INFO] Starting gunicorn 20.1.0
[2021-06-12 03:29:23 +0530] [12464] [INFO] Listening at: http://0.0.0.0:8080 (12464)
[2021-06-12 03:29:23 +0530] [12464] [INFO] Using worker: sync
[2021-06-12 03:29:23 +0530] [12466] [INFO] Booting worker with pid: 12466
[2021-06-12 03:29:23 +0530] [12467] [INFO] Booting worker with pid: 12467
[2021-06-12 03:29:23 +0530] [12468] [INFO] Booting worker with pid: 12468
[2021-06-12 03:29:23 +0530] [12469] [INFO] Booting worker with pid: 12469
Can anyone please help me to get rid of this issue?
Check if you have a user named airflow. It looks like Airflow is trying to find the path /home/airflow and cannot, or that mount path is missing. I am not sure how Ubuntu as a Windows subsystem is configured.
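A sketch of those checks (paths are the defaults discussed above; adjust for your machine):

# Does an 'airflow' OS user and home directory exist?
id airflow 2>/dev/null || echo "no airflow user"
ls -ld /home/airflow 2>/dev/null || echo "/home/airflow does not exist"

# Confirm AIRFLOW_HOME is exported in the current shell, then let
# Airflow itself report the paths and config it resolved
source ~/.bashrc
echo "$AIRFLOW_HOME"
airflow info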
I'm trying to run Airflow locally (to test it before deployment) using minikube and the stable/airflow Helm chart, but airflow-webserver doesn't start due to a gunicorn issue.
Helm: v2.14.3
Kubernetes: v1.15.2
Minikube: v1.3.1
Helm chart image: puckel/docker-airflow
These are the steps:
minikube start
helm install --namespace "airflow" --name "airflow" stable/airflow
Logs are:
Thu Sep 12 07:29:54 UTC 2019 - waiting for Postgres... 1/20
Thu Sep 12 07:30:00 UTC 2019 - waiting for Postgres... 2/20
waiting 60s...
executing webserver...
[2019-09-12 07:31:05,745] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=1
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:06,030] {{__init__.py:51}} INFO - Using executor CeleryExecutor
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2019-09-12 07:31:06,585] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
Running the Gunicorn Server with:
Workers: 4 sync
Host: 0.0.0.0:8080
Timeout: 120
Logfiles: - -
=================================================================
[2019-09-12 07:31:07,676] {{settings.py:213}} INFO - settings.configure_orm(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=21
/usr/local/lib/python3.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: <http://initd.org/psycopg/docs/install.html#binary-install-from-pypi>.
""")
[2019-09-12 07:31:07 +0000] [21] [INFO] Starting gunicorn 19.9.0
[2019-09-12 07:31:07 +0000] [21] [INFO] Listening at: http://0.0.0.0:8080 (21)
[2019-09-12 07:31:07 +0000] [21] [INFO] Using worker: sync
[2019-09-12 07:31:07 +0000] [25] [INFO] Booting worker with pid: 25
[2019-09-12 07:31:07 +0000] [26] [INFO] Booting worker with pid: 26
[2019-09-12 07:31:07 +0000] [27] [INFO] Booting worker with pid: 27
[2019-09-12 07:31:07 +0000] [28] [INFO] Booting worker with pid: 28
[2019-09-12 07:31:08,444] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,446] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,545] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:08,669] {{__init__.py:51}} INFO - Using executor CeleryExecutor
[2019-09-12 07:31:10,047] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:20,932] {{cli.py:825}} ERROR - [0 / 0] some workers seem to have died and gunicorndid not restart them as expected
[2019-09-12 07:31:22,095] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:22 +0000] [25] [INFO] Parent changed, shutting down: <Worker 25>
[2019-09-12 07:31:22 +0000] [25] [INFO] Worker exiting (pid: 25)
[2019-09-12 07:31:32 +0000] [28] [INFO] Parent changed, shutting down: <Worker 28>
[2019-09-12 07:31:32 +0000] [28] [INFO] Worker exiting (pid: 28)
[2019-09-12 07:31:33,289] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:33,324] {{dagbag.py:90}} INFO - Filling up the DagBag from /usr/local/airflow/dags
[2019-09-12 07:31:35 +0000] [26] [INFO] Parent changed, shutting down: <Worker 26>
[2019-09-12 07:31:35 +0000] [26] [INFO] Worker exiting (pid: 26)
[2019-09-12 07:31:35 +0000] [27] [INFO] Parent changed, shutting down: <Worker 27>
[2019-09-12 07:31:35 +0000] [27] [INFO] Worker exiting (pid: 27)
[2019-09-12 07:33:32,017] {{cli.py:832}} ERROR - No response from gunicorn master within 120 seconds
[2019-09-12 07:33:32,018] {{cli.py:833}} ERROR - Shutting down webserver
I can run that Docker image locally with docker-compose with no issues, but no luck using Helm: it fails and restarts constantly.
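The usual kubectl triage for a crash-looping webserver looks roughly like this (the namespace matches the helm install command above; the pod name is illustrative, copy the real one from get pods):

# Watch pod status and restart counts in the airflow namespace
kubectl -n airflow get pods -w

# Logs from the current and the previous (crashed) webserver container
kubectl -n airflow logs airflow-web-0
kubectl -n airflow logs airflow-web-0 --previous

# Events often show probe or scheduling failures that the logs do not
kubectl -n airflow describe pod airflow-web-0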
It turned out that the minikube configuration wasn't making the Postgres pod available; after editing the pod deployment with the IP of the Postgres instance, it worked.
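A quick way to verify that Postgres is reachable from inside the cluster before editing any deployment (airflow-postgresql is the chart's usual service name and may differ in your release):

# Find the Postgres service and its cluster IP
kubectl -n airflow get svc

# Probe the port from a throwaway pod; pg_isready ships in the postgres image
kubectl -n airflow run pg-check --rm -it --restart=Never --image=postgres:11 \
  -- pg_isready -h airflow-postgresql -p 5432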
I had been using Airflow for a month without any such issues, but suddenly the webserver stopped working: it exits right after filling up the DagBag, and no errors are displayed either.
I even tried killing all the airflow and gunicorn processes; still no luck.
This is what I see when executing airflow webserver:
[uname#airflow airflow]$ airflow webserver
[2019-03-28 07:51:54,946] {settings.py:174} INFO - settings.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800, pid=4128
[2019-03-28 07:51:55,356] {__init__.py:51} INFO - Using executor LocalExecutor
[2019-03-28 07:51:55,459] {configuration.py:255} WARNING - section/key [rest_api_plugin/rest_api_plugin_http_token_header_name] not found in config
[2019-03-28 07:51:55,459] {configuration.py:255} WARNING - section/key [rest_api_plugin/rest_api_plugin_expected_http_token] not found in config
____________ _____________
____ |__( )_________ __/__ /________ __
____ /| |_ /__ ___/_ /_ __ /_ __ \_ | /| / /
___ ___ | / _ / _ __/ _ / / /_/ /_ |/ |/ /
_/_/ |_/_/ /_/ /_/ /_/ \____/____/|__/
[2019-03-28 07:51:55,705] {models.py:273} INFO - Filling up the DagBag from /home/uname/airflow/dags
[uname#airflow airflow]$
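One hedged thing to try with a webserver that dies silently like this: a stale PID file left behind by an earlier run can make a new webserver exit right after filling the DagBag without printing anything. Clearing the PID files and running the built-in debug server (single process, foreground) usually surfaces the real error:

# Remove leftover webserver PID files from a previous run
rm -f ~/airflow/airflow-webserver.pid ~/airflow/airflow-webserver-monitor.pid

# Run in the foreground with Flask's debug server, which prints
# errors that the daemonized gunicorn setup swallows
airflow webserver --debug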
I had the same error. In my case, the problem was caused by the default memory allocated to Docker, which is not enough to start all of the Airflow services. My solution was to allocate more memory to Docker, and that worked for me:
Docker Desktop > Settings > Resources > Advanced > Memory (increase the value here)
I hope this helps.
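To confirm what the daemon actually has after raising the limit, one can check from the command line:

# Total memory visible to the Docker daemon, in bytes
docker info --format '{{.MemTotal}}'

# Live per-container usage, to see which Airflow service is starving
docker stats --no-stream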
Can someone please help me figure out why this call no longer works in Apache Karaf 3.0.2? I verified that it was working in version 3.0.1. All instances are up and running, but I am unable to connect to one of my instances directly from the command line.
su - karaf -c " client -h localhost -a 8101 -u karaf -r 50 -d 2 \" instance:connect -u karaf -p karaf test1 \\\" feature:repo-list \\\" \" "
Logging in as karaf
455 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, b6:f6:d6:3f:8b:2f:ad:a4:0f:3f:3d:c3:7b:96:fd:ae] presented unverified {} key: {}
Connecting to host localhost on port 8103
Connecting to unknown server. Automatically adding to known hosts.
Storing the server key in known_hosts.
Error executing command: Authentication failed
The call is part of an automated process, so I cannot connect to a specific instance manually. Is there any specific configuration required that was not necessary in 3.0.1?
UPDATE #1:
I have added the verbose option... Does it give any hints about what to do?
client -v -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf test1 \" feature:repo-list \" "
39 [main] INFO org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
Logging in as karaf
367 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Client session created
380 [main] INFO org.apache.sshd.client.session.ClientSessionImpl - Start flagging packets as pending until key exchange is done
383 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Server version string: SSH-2.0-SSHD-CORE-0.12.0
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: server->client [aes128-ctr, hmac-sha1, none] {} {}
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: client->server [aes128-ctr, hmac-sha1, none] {} {}
444 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, 22:8b:f8:9d:bc:c6:40:d8:fe:52:aa:90:c0:f2:70:ec] presented unverified {} key: {}
457 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Dequeing pending packets
524 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
568 [sshd-SshClient[bea319b]-nio2-thread-2] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_SUCCESS
Connecting to host localhost on port 8102
Error executing command: Authentication failed
UPDATE #2:
I switched the logger to DEBUG and I found this exception:
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_SERVICE_ACCEPT
2015-01-15 11:28:48,920 | INFO | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_USERAUTH_FAILURE
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Authentications that can continue: keyboard-interactive, password, publickey
2015-01-15 11:28:48,922 | DEBUG | 5]-nio2-thread-1 | Nio2Session | 28 - org.apache.sshd.core - 0.12.0 | Caught exception, now calling handler
2015-01-15 11:28:48,922 | WARN | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Exception caught
java.lang.IllegalStateException: No SSH_AUTH_SOCK environment variable set
at org.apache.karaf.shell.ssh.KarafAgentFactory.createClient(KarafAgentFactory.java:71)
at org.apache.sshd.client.auth.UserAuthPublicKey.init(UserAuthPublicKey.java:78)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.tryNext(ClientUserAuthServiceNew.java:212)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.processUserAuth(ClientUserAuthServiceNew.java:178)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.process(ClientUserAuthServiceNew.java:131)
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:80)
at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:399)
at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
at org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:256)
at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:731)
at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:187)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:53)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:46)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.7.0_65]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_65]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_65]
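The root cause in that trace is KarafAgentFactory refusing to build a client when no SSH agent socket is present. A hedged workaround is to make sure SSH_AUTH_SOCK exists before the client starts, for example with a throwaway agent. Note that su - resets the environment, so for the automated call above the agent would have to be started inside the karaf login shell:

# Start an agent so SSH_AUTH_SOCK is set, run the original command, clean up
eval "$(ssh-agent -s)"
client -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf -p karaf test1 \" feature:repo-list \" "
ssh-agent -k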