Authentication failed using instance:connect in Apache Karaf 3.0.2

Can someone please help me understand why this call no longer works in Apache Karaf 3.0.2? I verified that it was working in version 3.0.1. All instances are up and running, but I am unable to connect to one of my instances directly from the command line.
su - karaf -c " client -h localhost -a 8101 -u karaf -r 50 -d 2 \" instance:connect -u karaf -p karaf test1 \\\" feature:repo-list \\\" \" "
Logging in as karaf
455 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, b6:f6:d6:3f:8b:2f:ad:a4:0f:3f:3d:c3:7b:96:fd:ae] presented unverified {} key: {}
Connecting to host localhost on port 8103
Connecting to unknown server. Automatically adding to known hosts.
Storing the server key in known_hosts.
Error executing command: Authentication failed
The call is part of an automated process, and I cannot connect to a specific instance directly. Is there any specific configuration required that was not necessary in 3.0.1?
UPDATE #1:
I have added the verbose option. Does it give any hints about what to do?
client -v -h localhost -a 8101 -u karaf -r 50 -d 2 " instance:connect -u karaf test1 \" feature:repo-list \" "
39 [main] INFO org.apache.sshd.common.util.SecurityUtils - BouncyCastle not registered, using the default JCE provider
Logging in as karaf
367 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Client session created
380 [main] INFO org.apache.sshd.client.session.ClientSessionImpl - Start flagging packets as pending until key exchange is done
383 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Server version string: SSH-2.0-SSHD-CORE-0.12.0
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: server->client [aes128-ctr, hmac-sha1, none] {} {}
384 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Kex: client->server [aes128-ctr, hmac-sha1, none] {} {}
444 [sshd-SshClient[bea319b]-nio2-thread-1] WARN org.apache.sshd.client.keyverifier.AcceptAllServerKeyVerifier - Server at [localhost/127.0.0.1:8101, DSA, 22:8b:f8:9d:bc:c6:40:d8:fe:52:aa:90:c0:f2:70:ec] presented unverified {} key: {}
457 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientSessionImpl - Dequeing pending packets
524 [sshd-SshClient[bea319b]-nio2-thread-1] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_FAILURE
568 [sshd-SshClient[bea319b]-nio2-thread-2] INFO org.apache.sshd.client.session.ClientUserAuthServiceNew - Received SSH_MSG_USERAUTH_SUCCESS
Connecting to host localhost on port 8102
Error executing command: Authentication failed
UPDATE #2:
I switched the logger to DEBUG and found this exception:
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_SERVICE_ACCEPT
2015-01-15 11:28:48,920 | INFO | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Received SSH_MSG_USERAUTH_FAILURE
2015-01-15 11:28:48,920 | DEBUG | 5]-nio2-thread-1 | ClientUserAuthServiceNew | 28 - org.apache.sshd.core - 0.12.0 | Authentications that can continue: keyboard-interactive, password, publickey
2015-01-15 11:28:48,922 | DEBUG | 5]-nio2-thread-1 | Nio2Session | 28 - org.apache.sshd.core - 0.12.0 | Caught exception, now calling handler
2015-01-15 11:28:48,922 | WARN | 5]-nio2-thread-1 | ClientSessionImpl | 28 - org.apache.sshd.core - 0.12.0 | Exception caught
java.lang.IllegalStateException: No SSH_AUTH_SOCK environment variable set
at org.apache.karaf.shell.ssh.KarafAgentFactory.createClient(KarafAgentFactory.java:71)
at org.apache.sshd.client.auth.UserAuthPublicKey.init(UserAuthPublicKey.java:78)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.tryNext(ClientUserAuthServiceNew.java:212)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.processUserAuth(ClientUserAuthServiceNew.java:178)
at org.apache.sshd.client.session.ClientUserAuthServiceNew.process(ClientUserAuthServiceNew.java:131)
at org.apache.sshd.client.session.ClientUserAuthService.process(ClientUserAuthService.java:80)
at org.apache.sshd.common.session.AbstractSession.doHandleMessage(AbstractSession.java:399)
at org.apache.sshd.common.session.AbstractSession.handleMessage(AbstractSession.java:295)
at org.apache.sshd.client.session.ClientSessionImpl.handleMessage(ClientSessionImpl.java:256)
at org.apache.sshd.common.session.AbstractSession.decode(AbstractSession.java:731)
at org.apache.sshd.common.session.AbstractSession.messageReceived(AbstractSession.java:277)
at org.apache.sshd.common.AbstractSessionIoHandler.messageReceived(AbstractSessionIoHandler.java:54)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:187)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:189)
at org.apache.sshd.common.io.nio2.Nio2Session$1.onCompleted(Nio2Session.java:173)
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker.invokeDirect(Invoker.java:157)[:1.7.0_65]
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.implRead(UnixAsynchronousSocketChannelImpl.java:553)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:275)[:1.7.0_65]
at sun.nio.ch.AsynchronousSocketChannelImpl.read(AsynchronousSocketChannelImpl.java:296)[:1.7.0_65]
at java.nio.channels.AsynchronousSocketChannel.read(AsynchronousSocketChannel.java:407)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2Session.startReading(Nio2Session.java:173)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:53)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2Connector$1.onCompleted(Nio2Connector.java:46)[28:org.apache.sshd.core:0.12.0]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler$1.run(Nio2CompletionHandler.java:32)
at java.security.AccessController.doPrivileged(Native Method)[:1.7.0_65]
at org.apache.sshd.common.io.nio2.Nio2CompletionHandler.completed(Nio2CompletionHandler.java:30)[28:org.apache.sshd.core:0.12.0]
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:126)[:1.7.0_65]
at sun.nio.ch.Invoker$2.run(Invoker.java:218)[:1.7.0_65]
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_65]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_65]
at java.lang.Thread.run(Thread.java:745)[:1.7.0_65]
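One thing that might be worth testing (an assumption based on the stack trace above, not a confirmed fix): the publickey authentication path in KarafAgentFactory throws when SSH_AUTH_SOCK is not set, which is typically the case in a non-interactive su session. Running the client under an ssh-agent so that the variable exists would rule this out:
su - karaf -c 'eval "$(ssh-agent -s)" >/dev/null; \
  client -h localhost -a 8101 -u karaf -r 50 -d 2 \
    " instance:connect -u karaf -p karaf test1 \" feature:repo-list \" "; \
  kill "$SSH_AGENT_PID"'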

Related

Problems using MariaDB in Docker Swarm with NFS

I have problems using MariaDB within a Docker Swarm using an NFS share. The database suddenly stops accepting new connections after fdatasync() failed. This happens randomly, after a few hours or after a few days. If I remove the service and start it again, everything is running fine. The service does not seem to repair itself, but I think this error should not even occur, even if the service should heal itself. I run the database as the persistence layer for the Nextcloud app.
This is my docker-compose file:
version: '3.3'
services:
  nextcloud_db:
    image: mariadb:10.7.4
    #container_name: nextcloud-db
    command:
      - "--transaction-isolation=READ-COMMITTED"
      - "--log-bin=ROW"
      - "--innodb_read_only_compressed=OFF"
      - "--character-set-server=utf8mb4"
      - "--collation-server=utf8mb4_unicode_ci"
      #- "--innodb-rollback-on-timeout=ON" # Tested this but did not help
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=false
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=myrootpassword
      - MYSQL_PASSWORD=mymysqlpassword
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_INITDB_SKIP_TZINFO=1
    networks:
      - nextcloud
  ### other services for running nextcloud ###
volumes:
  db:
    driver_opts:
      type: "nfs"
      o: "addr=<storage-server-ip>,nolock,soft,rw"
      device: ":/mnt/storage/nextcloud/db"
networks:
  traefik-public:
    external: true
  nextcloud:
    driver: overlay
    # driver_opts:
    #   encrypted: "true"
These are the logs from the moment the db died:
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 19:51:17 4671 [ERROR] [FATAL] InnoDB: fdatasync() returned 5
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 220629 19:51:17 [ERROR] mysqld got signal 6 ;
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | This could be because you hit a bug. It is also possible that this binary
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | or one of the libraries it was linked against is corrupt, improperly built,
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | or misconfigured. This error can also be caused by malfunctioning hardware.
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 |
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | To report this bug, see https://mariadb.com/kb/en/reporting-bugs
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 |
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | We will try our best to scrape up some info that will hopefully help
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | diagnose the problem, but since we have already crashed,
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | something is definitely wrong and this may fail.
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 |
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | Server version: 10.7.4-MariaDB-1:10.7.4+maria~focal-log
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | key_buffer_size=134217728
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | read_buffer_size=131072
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | max_used_connections=10
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | max_threads=153
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | thread_count=11
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | It is possible that mysqld could use up to
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467995 K bytes of memory
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | Hope that's ok; if not, decrease some variables in the equation.
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 |
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | Thread pointer: 0x55d81db99108
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | Attempting backtrace. You can use the following information to find out
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | where mysqld died. If you see no messages after this, something went
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | terribly wrong...
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | stack_bottom = 0x7fcf10137d98 thread_stack 0x49000
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | mariadbd(my_print_stacktrace+0x32)[0x55d81b24de52]
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | mariadbd(handle_fatal_signal+0x485)[0x55d81ad282b5]
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 21:49:49 4673 [Warning] Aborted connection 4673 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 21:49:49 4672 [Warning] Aborted connection 4672 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 21:49:49 4674 [Warning] Aborted connection 4674 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 22:16:02 4676 [Warning] Aborted connection 4676 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 22:18:13 4678 [Warning] Aborted connection 4678 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-06-29 22:24:46 4679 [Warning] Aborted connection 4679 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
nc_nextcloud_db.1.1mfx9xkwd1sd#v220210169548138574 | 2022-07-01 21:49:02 7148 [Warning] Aborted connection 7148 to db: 'nextcloud' user: 'nextcloud' host: '10.0.7.189' (Got an error reading communication packets)
I found no other logs related to the issue.
Does anyone have a clue what's going on here?
Maybe the NFS share is unavailable for a few seconds and so the database has problems reading/writing? Is it possible to make the MariaDB service heal itself after this error occurs? There are no other problems as long as the database service is running: I can upload and delete files etc., so it is not a permissions issue on the NFS share.
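For the self-healing part, one direction to try (a sketch, not a verified fix; it assumes the service name nc_nextcloud_db from the logs and that healthcheck.sh ships with the official mariadb image) is to give the task a health check so Swarm replaces it when mysqld stops answering instead of leaving the crashed process in place:
docker service update \
  --health-cmd 'healthcheck.sh --connect --innodb_initialized' \
  --health-interval 30s \
  --health-timeout 10s \
  --health-retries 3 \
  nc_nextcloud_db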
Further MariaDB metrics:
https://jpst.it/2TX-F
Host system info:
Docker node VM with Ubuntu:
Ubuntu 20.04.4 LTS
2 vCPUs
8 GB RAM
160 GB SSD System-Storage (Raid 10)
MySQL does not support mounting NFS to initialize data
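If the NFS share has to stay in the picture, note that the volume is mounted with the soft option, which lets transient NFS timeouts surface to mysqld as I/O errors (fdatasync() returned 5, i.e. EIO), whereas hard makes the client retry. A manual mount for testing the share outside Docker might look like this (a sketch; the export path is taken from the compose file):
sudo mkdir -p /mnt/nextcloud-db-test
sudo mount -t nfs -o nolock,hard,rw \
  <storage-server-ip>:/mnt/storage/nextcloud/db /mnt/nextcloud-db-test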

Stanford coreNLP Dedicated server not running

I am trying to run a dedicated CoreNLP server on a CentOS 7 server hosted on Linode.com. I have followed all the instructions from https://stanfordnlp.github.io/CoreNLP/corenlp-server.html#dedicated-server and then read and consequently updated my scripts and user permissions using this post: Dedicated CoreNLP Server Control Issues. Unfortunately I still can't connect to the service through any means: every time it says 'connection refused', whether I try to connect through localhost, load the IP in my browser, or connect through the Android app I am attempting to develop.
Prior to starting the CoreNLP service I disabled all firewalls using iptables scripts and by disabling firewalld. I am running as user nlp on port 80. I tried an ICMP ping of the server from another machine, which worked fine.
I changed access permissions in the /opt/corenlp directory, although I'm not sure that made a difference. Here is my terminal output. I used systemctl, although the sudo service command produced the same thing.
sudo systemctl start corenlp
[nlp@aqghost /]$ sudo systemctl status corenlp -l
● corenlp.service - (null)
Loaded: loaded (/etc/rc.d/init.d/corenlp; bad; vendor preset: disabled)
Active: active (exited) since Thu 2021-06-17 13:53:59 BST; 3s ago
Docs: man:systemd-sysv-generator(8)
Process: 9413 ExecStart=/etc/rc.d/init.d/corenlp start (code=exited, status=0/SUCCESS)
Jun 17 13:53:59 aqghost systemd[1]: Starting (null)...
Jun 17 13:53:59 aqghost corenlp[9413]: CoreNLP server is already running!
Jun 17 13:53:59 aqghost corenlp[9413]: If you are sure this is a mistake, delete the file:
Jun 17 13:53:59 aqghost corenlp[9413]: /opt/corenlp/corenlp.shutdown
Jun 17 13:53:59 aqghost systemd[1]: Started (null).
[nlp@aqghost /]$ sudo systemctl start corenlp
[nlp@aqghost /]$ sudo systemctl status
● aqghost
State: running
Jobs: 0 queued
Failed: 0 units
Since: Thu 2021-06-17 10:35:33 BST; 3h 18min ago
CGroup: /
├─1 /usr/lib/systemd/systemd --switched-root --system --deserializ
├─user.slice
│ ├─user-0.slice
│ │ └─session-25.scope
│ │ ├─9263 sshd: root@pts/1
│ │ └─9267 -bash
│ └─user-1000.slice
│ └─session-21.scope
│ ├─8745 sshd: nlp [priv]
│ ├─8751 sshd: nlp@pts/0
│ ├─8752 -bash
│ ├─9426 sudo systemctl status
│ ├─9428 systemctl status
│ └─9429 less
└─system.slice
├─tuned.service
│ └─799 /usr/bin/python2 -Es /usr/sbin/tuned -l -P
├─rsyslog.service
│ └─797 /usr/sbin/rsyslogd -n
├─sshd.service
│ └─796 /usr/sbin/sshd -D
├─crond.service
│ └─577 /usr/sbin/crond -n
├─systemd-logind.service
│ └─573 /usr/lib/systemd/systemd-logind
├─NetworkManager.service
│ └─572 /usr/sbin/NetworkManager --no-daemon
├─dbus.service
│ └─564 /usr/bin/dbus-daemon --system --address=systemd: --nofor
├─polkit.service
│ └─563 /usr/lib/polkit-1/polkitd --no-debug
├─irqbalance.service
│ └─559 /usr/sbin/irqbalance --foreground
├─auditd.service
│ └─512 /sbin/auditd
├─systemd-udevd.service
│ └─430 /usr/lib/systemd/systemd-udevd
├─system-serial\x2dgetty.slice
│ └─serial-getty@ttyS0.service
│ └─583 /sbin/agetty --keep-baud 115200,38400,9600 ttyS0 vt220
├─system-getty.slice
│ └─getty@tty1.service
│ └─584 /sbin/agetty --noclear tty1 linux
└─systemd-journald.service
└─399 /usr/lib/systemd/systemd-journald
[nlp@aqghost /]$ sudo ps aux | grep java
nlp 9431 0.0 0.0 112812 964 pts/0 S+ 13:54 0:00 grep --color=auto java
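Based on the output above, no java process is running, yet the init script still finds /opt/corenlp/corenlp.shutdown and therefore refuses to start a new server. A stale shutdown file left over from an earlier crash would explain this (an assumption); removing it and retrying is a quick test:
sudo rm -f /opt/corenlp/corenlp.shutdown
sudo systemctl start corenlp
sudo ss -tlnp | grep ':80'    # check whether anything is now listening on port 80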
(Note: I seemed to make the server work ONCE, for about 20 seconds. I'm not sure how or why, but I could connect via the app I'm developing and via localhost on the same server. This is also what showed in my stderr.log and stdout.log. (Ignore the fact that there are four outputs here; I'm still learning JSON formatting and I think I sent the same thing.))
sudo cat /opt/corenlp/stdout.log
the quick brown fox jumped over the lazy dog
{"inputFormat":"text","data":"I am going to the shops.","annotators":"tokenize, ssplit, pos, lemma, ner, parse, dcoref","outputFormat":"json"}
{"inputFormat":"text","data":"I am going to the shops.","annotators":"tokenize, ssplit, pos, lemma, ner, parse, dcoref","outputFormat":"json"}
{"inputFormat":"text","data":"I am going to the shops.","annotators":"tokenize, ssplit, pos, lemma, ner, parse, dcoref","outputFormat":"json"}
{"inputFormat":"text","data":"I am going to the shops.","annotators":"tokenize, ssplit, pos, lemma, ner, parse, dcoref","outputFormat":"json"}
$ sudo cat /opt/corenlp/stderr.log
[main] INFO CoreNLP - --- StanfordCoreNLPServer#main() called ---
[main] INFO CoreNLP - Server default properties:
(Note: unspecified annotator properties are English defaults)
inputFormat = text
outputFormat = json
prettyPrint = false
[main] INFO CoreNLP - Threads: 4
[main] INFO CoreNLP - Starting server...
[main] INFO CoreNLP - StanfordCoreNLPServer listening at /0.0.0.0:80
[pool-1-thread-1] INFO CoreNLP - [/127.0.0.1:60614] API call w/annotators tokenize,ssplit,pos
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-1] INFO edu.stanford.nlp.tagger.maxent.MaxentTagger - Loading POS tagger from edu/stanford/nlp/models/pos-tagger/english-left3words-distsim.tagger ... done [1.0 sec].
[pool-1-thread-1] INFO CoreNLP - [/128.243.2.12:48192] API call w/annotators <unknown>
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Searching for resource: StanfordCoreNLP.properties ... found.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator tokenize
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ssplit
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator pos
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator lemma
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator ner
[pool-1-thread-1] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz ... done [1.2 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.muc.7class.distsim.crf.ser.gz ... done [0.8 sec].
[pool-1-thread-2] INFO CoreNLP - [/128.243.2.12:48196] API call w/annotators <unknown>
[pool-1-thread-1] INFO edu.stanford.nlp.ie.AbstractSequenceClassifier - Loading classifier from edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz ... done [1.1 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.time.JollyDayHolidays - Initializing JollyDayHoliday for SUTime from classpath edu/stanford/nlp/models/sutime/jollyday/Holidays_sutime.xml as sutime.binder.1.
[pool-1-thread-1] INFO edu.stanford.nlp.time.TimeExpressionExtractorImpl - Using following SUTime rules: edu/stanford/nlp/models/sutime/defs.sutime.txt,edu/stanford/nlp/models/sutime/english.sutime.txt,edu/stanford/nlp/models/sutime/english.holidays.sutime.txt
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 580705 unique entries out of 581864 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_caseless.tab, 0 TokensRegex patterns.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 4867 unique entries out of 4867 from edu/stanford/nlp/models/kbp/english/gazetteers/regexner_cased.tab, 0 TokensRegex patterns.
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.TokensRegexNERAnnotator - ner.fine.regexner: Read 585572 unique entries from 2 files
[pool-1-thread-3] INFO CoreNLP - [/128.243.2.12:48228] API call w/annotators <unknown>
[pool-1-thread-4] INFO CoreNLP - [/128.243.2.12:48230] API call w/annotators <unknown>
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.NERCombinerAnnotator - numeric classifiers: true; SUTime: true [no docDate]; fine grained: true
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator depparse
[pool-1-thread-1] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Loading depparse model: edu/stanford/nlp/models/parser/nndep/english_UD.gz ... Time elapsed: 67.3 sec
[pool-1-thread-1] INFO edu.stanford.nlp.parser.nndep.Classifier - PreComputed 20000 vectors, elapsed Time: 1.244 sec
[pool-1-thread-1] INFO edu.stanford.nlp.parser.nndep.DependencyParser - Initializing dependency parser ... done [68.6 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator coref
[pool-1-thread-1] INFO edu.stanford.nlp.coref.statistical.SimpleLinearClassifier - Loading coref model edu/stanford/nlp/models/coref/statistical/ranking_model.ser.gz ... done [1.5 sec].
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.CorefMentionAnnotator - Using mention detector type: dependency
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.StanfordCoreNLP - Adding annotator kbp
[pool-1-thread-1] INFO edu.stanford.nlp.pipeline.KBPAnnotator - Loading KBP classifier from: edu/stanford/nlp/models/kbp/english/tac-re-lr.ser.gz
Killed
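The trailing Killed line suggests the JVM may have been terminated by the kernel OOM killer while loading the KBP model (an assumption; the init script below requests -mx20g, which a small VM cannot back). Checking dmesg confirms or rules this out, and a smaller heap is easy to try:
sudo dmesg -T | grep -iE 'killed process|out of memory' | tail
# If the OOM killer is the culprit, lower the heap in the init script,
# e.g. replace -mx20g with -mx4g, sized to the Linode's actual RAM.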
Here is my service script modified according to the linked post:
#!/bin/bash
#
# A script to start/stop the CoreNLP server on port 80, made
# in particular for the configuration running at corenlp.run.
# This script should be placed into:
#
#   /etc/init.d/corenlp
#
# To run it at startup, link to the script using:
#
#   ln -s /etc/init.d/conenlp /etc/rc.2/S75corenlp
#
# Usage:
#
#   service corenlp [start|stop]
#   ./corenlp [start|stop]
#
#
# Set this to the username you would like to use to run the server.
# Make sure that this user can authbind to port 80!
#
SERVER_USER="nlp"
CORENLP_DIR="/opt/corenlp"

do_start()
{
  if [ -e "$CORENLP_DIR/corenlp.shutdown" ]; then
    echo "CoreNLP server is already running!"
    echo "If you are sure this is a mistake, delete the file:"
    echo "$CORENLP_DIR/corenlp.shutdown"
  else
    export CLASSPATH=""
    for JAR in `find "$CORENLP_DIR" -name "*.jar"`; do
      CLASSPATH="$CLASSPATH:$JAR"
    done
    nohup su nlp -c "sudo /usr/bin/authbind --deep java -Djava.net.preferIPv4Stack=true -Djava.io.tmpdir=/opt/corenlp -cp "$CLASSPATH" -XX:+UseG1GC -XX:MaxGCPauseMillis=500 -XX:+UseStringDeduplication -XX:SoftRefLRUPolicyMSPerMB=100 -mx20g edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 80" \
      > "$CORENLP_DIR/stdout.log" 2> "$CORENLP_DIR/stderr.log" &
    echo "CoreNLP server started."
  fi
}

do_stop()
{
  if [ ! -e "$CORENLP_DIR/corenlp.shutdown" ]; then
    echo "CoreNLP server is not running"
  else
    KEY=`cat "$CORENLP_DIR/corenlp.shutdown"`
    curl "localhost/shutdown?key=$KEY"
    echo "CoreNLP server stopped"
  fi
}

do_restart()
{
  do_stop
  sleep 3
  do_start
}

case $1 in
  start)   do_start
           ;;
  stop)    do_stop
           ;;
  restart) do_restart
           ;;
esac
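Since the script's comment requires that the nlp user can authbind to port 80, it may also be worth double-checking the authbind permission file (a sketch using authbind's standard byport layout; adjust if your install differs):
sudo touch /etc/authbind/byport/80
sudo chown nlp /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80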
Sorry for all the info. I am new to Linux, to running a server, and to init scripts, so please go easy on me.

Upload Stem Cell Error - Keystone connection timed out

I am getting a connection timeout error while uploading the stemcell to the BOSH director. I am using BOSH CLI v2. The following are my error logs.
> bosh -e sdp-bosh-env upload-stemcell https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12 --fix
Using environment '10.82.73.8' as client 'admin'
Task 13
Task 13 | 05:02:40 | Update stemcell: Downloading remote stemcell (00:00:51)
Task 13 | 05:03:31 | Update stemcell: Extracting stemcell archive (00:00:03)
Task 13 | 05:03:34 | Update stemcell: Verifying stemcell manifest (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Checking if this stemcell already exists (00:00:00)
Task 13 | 05:03:35 | Update stemcell: Uploading stemcell bosh-openstack-kvm-ubuntu-trusty-go_agent/3541.12 to the cloud (00:10:41)
L Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 | 05:14:16 | Error: CPI error 'Bosh::Clouds::CloudError' with message 'Unable to connect to the OpenStack Keystone API http://10.81.102.5:5000/v2.0/tokens
Connection timed out - connect(2) for 10.81.102.5:5000 (Errno::ETIMEDOUT)' in 'create_stemcell' CPI method
Task 13 Started Sat Apr 7 05:02:40 UTC 2018
Task 13 Finished Sat Apr 7 05:14:16 UTC 2018
Task 13 Duration 00:11:36
Task 13 error
Uploading remote stemcell 'https://bosh.io/d/stemcells/bosh-openstack-kvm-ubuntu-trusty-go_agent?v=3541.12':
Expected task '13' to succeed but state is 'error'
Exit code 1
Check OpenStack's security group for the BOSH Director machine.
The SG should contain ALLOW IPv4 to 0.0.0.0/0; if it doesn't, add at least an egress TCP rule to 10.81.102.5 on port 5000.
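If the egress rule is missing, something like the following could add it (a sketch; the security group name is a placeholder):
openstack security group rule create \
  --egress --protocol tcp --dst-port 5000 \
  --remote-ip 10.81.102.5/32 \
  <bosh-director-security-group>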
Check the connection using SSH:
bbl ssh --director
nc -tvn 10.81.102.5 5000
If that doesn't help, check the network/firewall configuration.
https://bosh.io/docs/uploading-stemcells/
https://github.com/cloudfoundry/bosh-bootloader/blob/master/terraform/openstack/templates/resources.tf

Sonatype nexus: unable to use http proxy after update from 2.2 to 2.10.0-02

We use Sonatype Nexus on Windows Server 2008 R2. To access external repositories we use a corporate HTTP proxy server, so we entered the values "proxy host", "proxy port", "username" and "password" under Nexus GUI -> Server. After migrating from Nexus 2.2 to 2.10.0-02, the Nexus server is no longer able to access external repositories.
If I now go to Nexus GUI -> Repositories -> Central -> Browse Remote -> Refresh, the remote repository is not visible. The wrapper.log contains the following entries (original data has been replaced by <proxyhost>:<proxyport> and <proxyuser>):
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl - <proxyhost>:<proxyport> proxy authentication setup for remote storage with username '<proxyuser>'
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl - http proxy setup with host '<proxyhost>'
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.plugins.rrb.MavenRepositoryReader - remotePath=
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.plugins.rrb.MavenRepositoryReader - Requesting: GET http://repo1.maven.org/maven2/?delimiter=/ HTTP/1.1
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl$ManagedClientConnectionManager - Connection request: [route: {}->http://<proxyhost>:<proxyport>->http://repo1.maven.org:80][total kept alive: 1; route allocated: 1 of 20; total allocated: 2 of 200]
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl$ManagedClientConnectionManager - Connection leased: [id: 25][route: {}->http://<proxyhost>:<proxyport>->http://repo1.maven.org:80][total kept alive: 0; route allocated: 1 of 20; total allocated: 2 of 200]
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.plugins.rrb.MavenRepositoryReader - Status code: 407
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl$ManagedClientConnectionManager - Connection [id: 25][route: {}->http://<proxyhost>:<proxyport>->http://repo1.maven.org:80] can be kept alive for 30.0 seconds
| 2014-11-25 08:55:25 DEBUG [qtp949677682-69] - org.sonatype.nexus.apachehttpclient.Hc4ProviderImpl$ManagedClientConnectionManager - Connection released: [id: 25][route: {}->http://<proxyhost>:<proxyport>->http://repo1.maven.org:80][total kept alive: 1; route allocated: 1 of 20; total allocated: 2 of 200]
| 2014-11-25 08:55:25 TRACE [qtp949677682-69] - org.sonatype.nexus.plugins.rrb.MavenRepositoryReader - <HEAD><TITLE>Proxy Authorization Required</TITLE></HEAD>
| <BODY BGCOLOR="white" FGCOLOR="black"><H1>Proxy Authorization Required</H1><HR>
| <FONT FACE="Helvetica,Arial"><B>
| Description: Authorization is required for access to this proxy</B></FONT>
| <HR>
| <!-- default "Proxy Authorization Required" response (407) -->
| </BODY>
Wireshark capture looks like this:
43 1.803445000 <nexus> <proxyhost> HTTP 278 GET http://repo1.maven.org/maven2/?delimiter=/ HTTP/1.1
51 1.814045000 <nexus> <proxyhost> HTTP 278 GET http://repo1.maven.org/maven2/?delimiter=/ HTTP/1.1
55 1.819731000 <proxyhost> <nexus> HTTP 1014 HTTP/1.1 407 Proxy Authorization Required (text/html)
None of the GET requests contain any proxy authentication headers.
Why does Nexus not repeat the GET request with credentials after the HTTP 407?
Does anyone have a similar issue?
It sounds like the proxy server might be configured to use NTLM authentication. Try entering the "NT LAN Manager Domain" in the proxy authentication settings as well.
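To confirm whether the proxy actually demands NTLM, one can inspect the Proxy-Authenticate header it sends with the 407 (a sketch; host and port are placeholders):
curl -sv -x http://<proxyhost>:<proxyport> http://repo1.maven.org/maven2/ 2>&1 \
  | grep -i 'Proxy-Authenticate'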

nova ERROR: [Errno 111] Connection refused

I'm using CentOS 6.5 x86_64 to set up OpenStack Havana, and all services were working well. But after rebooting the operating system, I found that the nova service no longer works properly; the following error is triggered:
nova flavor-list
ERROR: [Errno 111] Connection refused
Reviewing the log files in /var/log/nova gives the following error:
2014-03-24 12:24:04.293 6275 INFO nova.osapi_compute.wsgi.server [-] (6275) wsgi starting up
2014-03-24 12:24:04.297 6267 CRITICAL nova [-] [Errno 98] Address already in use
2014-03-24 12:24:04.412 6275 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-03-24 12:24:04.412 6274 INFO nova.openstack.common.service [-] Parent process has died unexpectedly, exiting
2014-03-24 12:24:04.412 6275 INFO nova.wsgi [-] Stopping WSGI server.
2014-03-24 12:24:04.412 6274 INFO nova.wsgi [-] Stopping WSGI server.
The state of my OpenStack server
nova-manage service list
Binary Host Zone Status State Updated_At
nova-cert controller internal enabled :-) 2014-03-24 14:28:03
nova-consoleauth controller internal enabled :-) 2014-03-24 14:28:01
nova-scheduler controller internal enabled :-) 2014-03-24 14:28:00
nova-conductor controller internal enabled :-) 2014-03-24 14:27:59
nova-compute controller nova enabled :-) 2014-03-24 14:28:06
nova-network controller internal enabled :-) 2014-03-24 14:27:58
keystone service-list
+----------------------------------+----------+----------+---------------------------+
| id | name | type | description |
+----------------------------------+----------+----------+---------------------------+
| 7ce108d652ee48d7897127045a371795 | cinder | volume | Cinder Volume Service |
| 9452b875328f4763b7766eb533bd75c4 | cinderv2 | volumev2 | Cinder Volume Service V2 |
| e9607d1a308140298f8364fd2a0e62a8 | glance | image | Glance Image Service |
| b7ac07f69e2e41f684d6470c69db4781 | keystone | identity | Keystone Identity Service |
| cbdfa73329094d7d94c7464b9bf0ef7d | nova | compute | Nova Compute service |
+----------------------------------+----------+----------+---------------------------+
ps -ef | grep "nova-api"
nova 2522 1 0 11:22 ? 00:00:00 /usr/bin/python /usr/bin/nova-api-metadata --logfile /var/log/nova/metadata-api.log
root 11909 6217 0 15:11 pts/1 00:00:01 gedit nova-api.log
root 12644 3832 0 15:31 pts/0 00:00:00 grep nova-api
netstat -napo | grep 877
tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN 2522/python off (0.00/0/0)
Any pointers would be extremely helpful.
Thanks
Firstly, I strongly recommend that you look for or ask for an answer on ask.openstack.org.
From what you described, it may be caused by having enabled both the nova-api-metadata and the nova-api service at the same time.
From the default configuration we know that ['ec2', 'osapi_compute', 'metadata'] are enabled, see https://github.com/openstack/nova/blob/stable/havana/nova/service.py#L55
So each of these services is started one by one when the nova-api service is called, see https://github.com/openstack/nova/blob/stable/havana/nova/cmd/api.py#L45
Since the nova-api-metadata service is already running, port 8775 is in use, so one of the services launched by nova-api dies; and since this exception is not caught, the other two die as well, which is what you see in the log.
If what I've assumed is right, please disable the nova-api-metadata service and use the nova-api service only, which means 'chkconfig openstack-nova-api-metadata off; chkconfig openstack-nova-api on'. I'm not sure about the exact service names on your system, but they should be something like that; correct me if I'm wrong.
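A sketch of that suggestion, with a quick verification step (exact unit names may differ on CentOS 6.5):
chkconfig openstack-nova-api-metadata off
service openstack-nova-api-metadata stop
chkconfig openstack-nova-api on
service openstack-nova-api start
netstat -napo | grep 8775    # the listener should now belong to nova-api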
Connection refused is a commonly encountered error. One case is Keystone refusing the connection for the nova service.
Make sure the SERVICE_PASSWORD for nova and quantum is the same when creating the Keystone services. Go to the quantum and nova config files and verify that the SERVICE_PASSWORD values are the same.
Njoy!!
