ERROR Async loop died! org.zeromq

I'm new to Ubuntu and Storm, and I need to solve this problem:
[ERROR] Async loop died!
org.zeromq.ZMQException: Address already in use(0x62)
at org.zeromq.ZMQ$Socket.bind(Native Method)
It appeared in the worker log file because the supervisor still hadn't started. By searching, I found someone who wrote that this happens when the ephemeral port range is messed up on the machines. I tried increasing /proc/sys/net/ipv4/ip_local_port_range to 1024 65000, but it's not working.

This issue is related to ports that are already in use on your system. A port can only be used by a single application. Using lsof -i you can check which applications are using which ports; you should spot the conflicting port number. Either terminate that application or change the configuration of Zookeeper or Storm to use a different port.
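For example (a sketch; by default Storm's supervisor.slots.ports setting uses ports 6700-6703, so substitute whatever your cluster is actually configured with):

lsof -i          # list open network connections and the processes that own them
lsof -i :6703    # check which process holds one specific port

If a stale worker still holds one of the slot ports, kill that process or move the ports in storm.yaml.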

Related

How to solve "Connection refused" error in MPJ Express?

I run my MPJ program on 5 PCs that all have the same name (DESKTOP-J49PIF5) but different IP addresses. It ran successfully in one laboratory. But when I tried to run the same program with the same configuration in a new laboratory (a different place), I got a "Connection refused" error.
More info that may help.
The same problem happened with my Apache Spark program, but I could solve it by adding --conf "spark.driver.host=<<master_ip>>" to the configuration. Someone said that the program cannot find the driver host, so we have to add that extra line to the configuration. Note that in the previous laboratory I didn't add that line, and both my MPJ and Spark programs worked.
Now, my question is: why do I get a "Connection refused" error in my MPJ program? If the problem is the same as with Apache Spark, how can I configure MPJ accordingly? Perhaps by adding the master IP, as with Apache Spark? But I don't know how to do that.
...
This error is repeated for all 5 PCs.
After struggling for a few days, I finally found the answer. The problem was the hostnames. Each PC has a different IP address and I can ping them all. But for cluster computing, the nodes use hostnames rather than IP addresses to contact each other, so we have to give every PC a unique hostname. I changed all the hostnames and the program runs fine.
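A sketch of what that can look like (the names and addresses below are hypothetical; use your machines' real IPs): give each PC a unique name, for example with hostnamectl set-hostname node01 on Linux or the system rename dialog on Windows, and keep a consistent mapping in the hosts file of every node:

192.168.1.101   node01
192.168.1.102   node02
192.168.1.103   node03
192.168.1.104   node04
192.168.1.105   node05

Then list those names, rather than raw IP addresses, in the machines file you pass to the MPJ Express runtime.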

nginx worker process always run only 1

I have the following configuration with
worker_processes 4;
but I noticed that it always hits only 1 worker.
I am testing on a local CentOS VM. I do curl HTTP calls to a specific port; I put 1000 curl requests in a file and ran them from multiple terminal windows.
But I see that all of them hit only 1 worker. Is there a way to get more than 1 worker handling requests? Can someone please share their knowledge on this?
https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/
In the epoll-and-accept the load balancing algorithm differs: Linux seems to choose the last added process, a LIFO-like behavior. The process added to the waiting queue most recently will get the new connection. This behavior causes the busiest process, the one that only just went back to event loop, to receive the majority of the new connections. Therefore, the busiest worker is likely to get most of the load.
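That same article also covers the usual way out: the reuseport flag on the listen directive (available since nginx 1.9.1), which gives every worker its own listening socket via SO_REUSEPORT so the kernel distributes new connections across workers. A minimal sketch, assuming the server listens on port 8080 (a placeholder):

worker_processes 4;

events {
    worker_connections 1024;
}

http {
    server {
        # reuseport creates one listening socket per worker (SO_REUSEPORT);
        # the kernel then balances incoming connections across all 4 workers
        listen 8080 reuseport;

        location / {
            return 200 "hello\n";
        }
    }
}

Even then, a stream of short sequential curl requests from one VM can often be drained by a single worker, so expect some imbalance under light load.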

Disallow MPI to run on the headnode

I have an RPi cluster that MPI runs perfectly on. One issue I am having is that MPI is using the master node as a compute node. How do I configure MPI so it only runs on the compute nodes? I tried removing the head node's IP address from the host file that I use with mpirun, but I get back:
HYDU_sock_connect (./utils/sock.c:171): unable to get host address for mastern (2)
main (./pm/pmiserv/pmip.c:209): unable to connect to server mastern at port 42525
Thanks in advance.
Even if you don't start up an MPI rank on the node that you launch from, I believe most basic launchers will still start up some sort of daemon process on that node. If that's a problem, you might need to launch from a different node.
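A sketch of that setup with MPICH's Hydra launcher, which the HYDU_/pmip messages suggest you are using (node names and slot counts below are hypothetical):

# machinefile: compute nodes only; the head node is deliberately absent
node01:4
node02:4
node03:4

mpiexec -f machinefile -n 12 ./my_program

Note also what your error actually says: the workers cannot resolve the name mastern. Every compute node still has to be able to resolve the launch host's name (e.g. via an entry in /etc/hosts), even if no ranks run there.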

How to cleanup sockets after mono-process crashes?

I am creating a chat server in Mono that should be able to have many sockets open. Before deciding on the architecture, I am doing a load test with Mono. Just as a test, I created a small Mono server and a Mono client that opens 100,000 sockets/connections, and it works pretty well.
I tried to hit the limit, and at some point the process crashes (of course).
But what worries me is that if I try to restart the process, it immediately gives "Unhandled Exception: System.Net.Sockets.SocketException: Too many open files".
So I guess that somehow the file descriptors (sockets) are kept open even after my process ends. Even several hours later it still gives this error; the only way I can deal with it is to reboot my computer. We can't run into this kind of problem in production without knowing how to handle it.
My question:
Is there anything in Mono that keeps running globally regardless of which Mono application is started, a kind of service I can restart without rebooting my computer?
Or is this not a Mono problem but a Unix problem that we would run into even if we had programmed it in Java/C++?
I checked the following, but no mono processes alive, no sockets open and no files:
localhost:~ root# ps -ax | grep mono
1536 ttys002 0:00.00 grep mono
-
localhost:~ root# lsof | grep mono
(nothing)
-
localhost:~ root# netstat -a
Active Internet connections (including servers)
(no unusual ports are open)
For development I run under OSX 10.7.5. For production we can decide which platform to use.
This sounds like you need to set (or unset) the Linger option on the socket (using Socket.SetSocketOption). Depending on the actual API you're using, there might be better alternatives (TcpClient has a LingerState property, for instance).
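Under the hood, Mono's LingerState/Socket.SetSocketOption maps onto the BSD socket option SO_LINGER. As a minimal sketch in C of the hard-close variant (linger on with a zero timeout, so close() resets the connection instead of leaving it in TIME_WAIT), roughly what LingerOption(true, 0) requests:

#include <string.h>
#include <sys/socket.h>

/* Enable SO_LINGER with a zero timeout on an open socket fd:
   close() will then send an RST and release the port immediately
   instead of parking the connection in TIME_WAIT. */
int set_hard_close(int fd)
{
    struct linger lg;
    memset(&lg, 0, sizeof lg);
    lg.l_onoff  = 1;   /* linger enabled */
    lg.l_linger = 0;   /* zero timeout: reset on close */
    return setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
}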

Remote process communication

How can I do inter-process communication between two remote processes on Unix in C/C++? Currently, popen works for two processes on the same host. The product needs to be able to call a remote process and send/receive data.
As you mentioned popen, you may not realize it already allows you to use ssh to execute a process remotely and treat it exactly the same as a locally spawned one.
popen("ssh user@remotehost /usr/bin/cal", "r")
And a pre-emptive link for further questions on ssh:
https://serverfault.com/questions/117007/ssh-key-questions
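A minimal, self-contained sketch of that idea in C (user and remotehost are placeholders for a real account and host with ssh key authentication already set up):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Run /usr/bin/cal on the remote host over ssh and
       read its standard output locally, line by line. */
    FILE *fp = popen("ssh user@remotehost /usr/bin/cal", "r");
    if (fp == NULL) {
        perror("popen");
        return EXIT_FAILURE;
    }

    char line[256];
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);   /* forward the remote output */

    if (pclose(fp) == -1)      /* waits for ssh to exit */
        perror("pclose");
    return 0;
}

Writing to the remote process works the same way with mode "w"; popen is unidirectional, so for bidirectional traffic you would have to set up the pipes and exec ssh yourself.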
Why wouldn't you just open up a wildcard (%) on the IP so that they could access the host remotely,
192.168.1.% something like that... :D
