Errors when running the Soccer Simulation 2D server

I use Fedora and just installed the Soccer Simulation 2D server (rcssserver-15.0.0) along with another piece of software beside it. Then I downloaded a base code. To run it,
I type ./start.sh in the base code directory. The first time there was no problem, but when I tried to run the same base code a second time I got this error:
[reza#localhost WrightEagleBASE-3.0.0]$ ./start.sh
cd Release; make -j3 all
make[1]: Entering directory `/home/reza/Soccer/WrightEagleBASE-3.0.0/Release'
make[1]: Nothing to be done for `all'.
make[1]: Leaving directory `/home/reza/Soccer/WrightEagleBASE-3.0.0/Release'
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Goalie: 1
WrightEagleBASE: Connect Server Error ...
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 2
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 3
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 4
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 5
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 6
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 7
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 8
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 9
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 10
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Player: 11
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
WrightEagleBASE: Connect Server Error ...
>>>>>>>>>>>>>>>>>>>>>> WrightEagleBASE Coach
WrightEagleBASE: Connect Server Error ...

You have to change the team name the second time. By default it's WrightEagleBASE, and when you run it a second time the same name is used again. You can pass this argument to the binary:
./WrightEagleBASE -team_name SOMENAME
to add the players of the second team.
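A minimal sketch of the workaround (the binary name and the -team_name flag come from the answer above; the numeric-suffix scheme is just an illustration):

```shell
# derive a second, unique team name instead of the default "WrightEagleBASE"
TEAM="WrightEagleBASE"
NEWNAME="${TEAM}2"
echo "starting second team as $NEWNAME"
# then start the players with the non-default name (run from the Release dir):
# ./WrightEagleBASE -team_name "$NEWNAME"
```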

Related

MPI error while loading shared libraries: libpgftnrtl.so in another machine

I built MPI on one of my Linux (Ubuntu) machines (SOL), and it works fine on that machine. Only SOL has all the compiler libraries.
./mpiexec -n 8 -f machinefile /usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/cpi
Process 0 of 8 is on SOL
Process 6 of 8 is on SOL
Process 1 of 8 is on SOL
Process 7 of 8 is on SOL
Process 3 of 8 is on SOL
Process 2 of 8 is on SOL
Process 4 of 8 is on SOL
Process 5 of 8 is on SOL
pi is approximately 3.1415926544231247, Error is 0.0000000008333316
wall clock time = 0.001447
The machine file:
SOL: 8
However, when I tried running MPI across two machines with this machine file:
SOL: 4
Corona:4
Corona cannot find the library it needs:
stilocal#SOL:/usr/apps/lib/mpich-install/bin$ ./mpiexec -n 8 -f machinefile /usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/cpi
/usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/.libs/lt-cpi: error while loading shared libraries: libpgftnrtl.so: cannot open shared object file: No such file or directory
/usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/.libs/lt-cpi: error while loading shared libraries: libpgftnrtl.so: cannot open shared object file: No such file or directory
/usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/.libs/lt-cpi: error while loading shared libraries: libpgftnrtl.so: cannot open shared object file: No such file or directory
/usr/apps/lib/mpich-3.3.2-pgf_gcc/examples/.libs/lt-cpi: error while loading shared libraries: libpgftnrtl.so: cannot open shared object file: No such file or directory
^C[mpiexec#SOL] Sending Ctrl-C to processes as requested
[mpiexec#SOL] Press Ctrl-C again to force abort
[mpiexec#SOL] HYDU_sock_write (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/utils/sock/sock.c:256): write error (Bad file descriptor)
[mpiexec#SOL] HYD_pmcd_pmiserv_send_signal (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/pm/pmiserv/pmiserv_cb.c:178): unable to write data to proxy
[mpiexec#SOL] ui_cmd_cb (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/pm/pmiserv/pmiserv_pmci.c:77): unable to send signal downstream
[mpiexec#SOL] HYDT_dmxu_poll_wait_for_event (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/tools/demux/demux_poll.c:77): callback returned error status
[mpiexec#SOL] HYD_pmci_wait_for_completion (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/pm/pmiserv/pmiserv_pmci.c:196): error waiting for event
[mpiexec#SOL] main (/usr/apps/lib/mpich-3.3.2/src/pm/hydra/ui/mpich/mpiexec.c:336): process manager error waiting for completion
How can I fix this issue?
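One common fix, sketched below, is to make libpgftnrtl.so (the PGI Fortran runtime) available on Corona and point the dynamic linker at it. The PGI install path and target directory here are assumptions; adjust them to your system:

```shell
# copy the PGI Fortran runtime from SOL to Corona (source path is an assumption):
#   scp /opt/pgi/linux86-64/lib/libpgftnrtl.so Corona:/usr/local/pgi-libs/
# then, on Corona, make the dynamic linker find it -- per session:
export LD_LIBRARY_PATH=/usr/local/pgi-libs:$LD_LIBRARY_PATH
FIRST_DIR=$(echo "$LD_LIBRARY_PATH" | cut -d: -f1)
echo "linker will search $FIRST_DIR first"
# or system-wide (as root): echo /usr/local/pgi-libs > /etc/ld.so.conf.d/pgi.conf && ldconfig
```

Note that mpiexec may not forward LD_LIBRARY_PATH to remote processes, so exporting it in Corona's ~/.bashrc or registering the directory with ldconfig tends to be more reliable for remote launches.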

Can't delete flannel plugin

I'm trying to switch the network plug-in from flannel to something else, just for educational purposes.
Flannel had been installed via:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
So to remove it I'm trying:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
As a result I've got:
k8s#k8s-master:~$ kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterroles.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterrolebindings.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": serviceaccounts "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": configmaps "kube-flannel-cfg" not found
error when stopping "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": timed out waiting for the condition
It's strange because several hours earlier I performed the same operations with Weave and it worked fine.
I got similar errors in the output:
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": podsecuritypolicies.policy "psp.flannel.unprivileged" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterroles.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": clusterrolebindings.rbac.authorization.k8s.io "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": serviceaccounts "flannel" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": configmaps "kube-flannel-cfg" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-amd64" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-arm64" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-arm" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-ppc64le" not found
Error from server (NotFound): error when deleting "https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml": daemonsets.apps "kube-flannel-ds-s390x" not found
The solution was to change the sequence of steps: first reset the Kubernetes environment, then redeploy flannel over the broken one, and only then delete it.
kubeadm reset
systemctl daemon-reload && systemctl restart kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: you may also need to manually unload the flannel driver, which is the vxlan kernel module:
rmmod vxlan
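After deleting the manifest you may also want to clear leftover CNI state on each node. The sketch below simulates that cleanup in a temporary directory; on a real node the config lives in /etc/cni/net.d (the usual default, verify on your cluster), and the flannel.1/cni0 interfaces are removed with `ip link delete`:

```shell
# simulate removing the leftover flannel CNI config in a sandbox directory
CNI_DIR=$(mktemp -d)                    # stands in for /etc/cni/net.d
touch "$CNI_DIR/10-flannel.conflist"    # the file flannel's DaemonSet writes
rm -f "$CNI_DIR/10-flannel.conflist"
LEFT=$(ls "$CNI_DIR" | wc -l)
echo "$LEFT files left"
# on a real node, additionally: ip link delete flannel.1; ip link delete cni0
```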

Cloudify manager bootstrapping - REST service failed

I followed the steps in http://docs.getcloudify.org/4.1.0/installation/bootstrapping/#option-2-bootstrapping-a-cloudify-manager to bootstrap the cloudify manager using option 2, and getting the following error repeatedly:
Workflow failed: Task failed 'fabric_plugin.tasks.run_script' -> restservice
error: http: //127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
The command is able to install and verify a lot of things like RabbitMQ, PostgreSQL, etc., but always fails at the REST service. Creating and configuring the REST service succeeds, but verification fails. It looks like the service never starts.
2017-08-22 04:23:19.700 CFY <manager> [rest_service_cyd4of.start] Task started 'fabric_plugin.tasks.run_script'
2017-08-22 04:23:20.506 LOG <manager> [rest_service_cyd4of.start] INFO: Starting Cloudify REST Service...
2017-08-22 04:23:21.011 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is running...
2017-08-22 04:23:21.403 LOG <manager> [rest_service_cyd4of.start] INFO: Verifying Rest service is working as expected...
2017-08-22 04:23:21.575 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 3 seconds...
2017-08-22 04:23:24.691 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 6 seconds...
2017-08-22 04:23:30.815 LOG <manager> [rest_service_cyd4of.start] WARNING: <urlopen error [Errno 111] Connection refused>, Retrying in 12 seconds...
[10.0.2.15] out: restservice error: http: //127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>
[10.0.2.15] out: Traceback (most recent call last):
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 71, in <module>
[10.0.2.15] out: verify_restservice(restservice_url)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3", line 34, in verify_restservice
[10.0.2.15] out: utils.verify_service_http(SERVICE_NAME, url, headers=headers)
[10.0.2.15] out: File "/tmp/cloudify-ctx/scripts/utils.py", line 1734, in verify_service_http
[10.0.2.15] out: ctx.abort_operation('{0} error: {1}: {2}'.format(service_name, url, e))
[10.0.2.15] out: File "/tmp/cloudify-ctx/cloudify.py", line 233, in abort_operation
[10.0.2.15] out: subprocess.check_call(cmd)
[10.0.2.15] out: File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
[10.0.2.15] out: raise CalledProcessError(retcode, cmd)
[10.0.2.15] out: subprocess.CalledProcessError: Command '['ctx', 'abort_operation', 'restservice error: http: //127.0.0.1:8100: <urlopen error [Errno 111] Connection refused>']' returned non-zero exit status 1
[10.0.2.15] out:
Fatal error: run() received nonzero return code 1 while executing!
Requested: source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3
Executed: /bin/bash -l -c "cd /tmp/cloudify-ctx/work && source /tmp/cloudify-ctx/scripts/env-tmp4BXh2m-start.py-VHYZP1K3 && /tmp/cloudify-ctx/scripts/tmp4BXh2m-start.py-VHYZP1K3"
I am using CentOS 7.
Any suggestions on how to address or debug the issue would be appreciated.
Can you please try the same bootstrap option using these instructions and let me know if it works for you?
Do you have the python-virtualenv package installed? If you do, try uninstalling it.
The version of virtualenv in CentOS repositories is too old and causes problems with the REST service installation. Cloudify will install its own version of virtualenv while bootstrapping, but only if one is not already present in the system.
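A quick way to check is sketched below (the python-virtualenv package name comes from the answer above; `command -v` just reports what is on the PATH):

```shell
# see whether a system-wide virtualenv would shadow the one Cloudify installs
if command -v virtualenv >/dev/null 2>&1; then
  echo "system virtualenv found at: $(command -v virtualenv)"
else
  echo "no system virtualenv on PATH"
fi
# on CentOS 7 the distro package would be removed with:
#   sudo yum remove -y python-virtualenv
CHECK_DONE=yes
```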

Logging file appender is not able to create file

I'm getting this error message when Logback tries to create the log file:
-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[file-appender] -
Failed to create parent directories for [/var/log/living/commty/logFile.log]
-ERROR in ch.qos.logback.core.rolling.RollingFileAppender[file-appender] -
openFile(/var/log/living/commty/logFile.log,true) call failed.
java.io.FileNotFoundException: /var/log/living/commty/logFile.log (Permission denied)
Nevertheless:
ls /var/log/living/commty/ -lha
total 8.0K
drw-r----- 2 wildfly wildfly 4.0K May 23 09:29 . <<<<<<<<<<<<
drw-r----- 6 wildfly wildfly 4.0K May 23 09:29 ..
As you can see, /var/log/living/commty is owned by wildfly:wildfly with mode rw-r----- (read-write for the owner, read for the group).
Any ideas?
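One thing worth noting: a directory needs the execute (search) bit, not just read, for the files inside it to be opened, and the drw-r----- in the listing has no x bit at all. A sandboxed sketch of the effect (running as root bypasses these checks, so the first cat may succeed there):

```shell
# demonstrate that opening a file requires +x on its parent directory
SANDBOX=$(mktemp -d)
mkdir "$SANDBOX/commty" && echo hello > "$SANDBOX/commty/logFile.log"
chmod 600 "$SANDBOX/commty"             # rw------- : like the listing, no x bit
cat "$SANDBOX/commty/logFile.log" 2>/dev/null || echo "denied without +x"
chmod 700 "$SANDBOX/commty"             # rwx------ : add the search bit
OUT=$(cat "$SANDBOX/commty/logFile.log")
echo "with +x the file reads: $OUT"
```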

Supervisord - NGINX stop OSError

I ran into an error when trying to stop NGINX using supervisord.
To start NGINX without error from supervisord I had to prepend sudo to the nginx command in supervisord.conf:
[supervisord]
[program:nginx]
command=sudo nginx -c %(ENV_PWD)s/configs/nginx.conf
When I run this:
$ supervisord -n
2017-02-09 12:26:06,371 INFO RPC interface 'supervisor' initialized
2017-02-09 12:26:06,372 INFO RPC interface 'supervisor' initialized
2017-02-09 12:26:06,372 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2017-02-09 12:26:06,373 INFO supervisord started with pid 22152
2017-02-09 12:26:07,379 INFO spawned: 'nginx' with pid 22155
2017-02-09 12:26:08,384 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
^C # SIGINT: Should stop all processes
2017-02-09 13:59:08,550 WARN received SIGINT indicating exit request
2017-02-09 13:59:08,551 CRIT unknown problem killing nginx (22155):Traceback (most recent call last):
File "/Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/process.py", line 432, in kill
options.kill(pid, sig)
File "/Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/options.py", line 1239, in kill
os.kill(pid, signal)
OSError: [Errno 1] Operation not permitted
Same when using supervisorctl to stop the process:
$ supervisorctl stop nginx
FAILED: unknown problem killing nginx (22321):Traceback (most recent call last):
File "/Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/process.py", line 432, in kill
options.kill(pid, sig)
File "/Users/ocervell/.virtualenvs/ndc-v3.3/lib/python2.7/site-packages/supervisor/options.py", line 1239, in kill
os.kill(pid, signal)
OSError: [Errno 1] Operation not permitted
Is there a workaround for this?
If a process created by supervisord creates its own child processes, supervisord cannot kill them.
...
The pidproxy program is put into your configuration’s $BINDIR when supervisor is installed (it is a “console script”).[1]
So what you have to do is change your supervisord configuration like this:
[program:nginx]
command=/path/to/pidproxy /path/to/nginx-pidfile sudo nginx -c %(ENV_PWD)s/configs/nginx.conf
This may not work either, since the nginx process is created via sudo, but it's worth trying first.
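For context on the underlying cause: an unprivileged supervisord may not signal a root-owned process, and sudo made nginx run as root, which is exactly the EPERM ("Operation not permitted") in the traceback. `kill -0` sends no signal and performs only that permission check, as this sketch shows; another workaround, therefore, is to run supervisord itself as root so it owns the nginx process it spawns and drop sudo from the program command.

```shell
# kill -0 sends no signal; it only checks whether we are allowed to signal the pid
kill -0 $$ && CAN=yes || CAN=no         # $$ is our own shell: always permitted
echo "can signal own process: $CAN"
# the same check against a root-owned pid fails with EPERM for a normal user,
# which is what supervisord hits when it tries to stop the sudo-started nginx
```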
