How to clean up sockets after a Mono process crashes? - unix

I am creating a chat server in Mono that should be able to have many sockets open. Before deciding on the architecture, I am doing a load test with Mono. Just as a test, I created a small Mono server and Mono client that open 100,000 sockets/connections, and it works pretty well.
I tried to hit the limit, and at some point the process crashes (of course).
But what worries me is that if I try to restart the process, it immediately gives "Unhandled Exception: System.Net.Sockets.SocketException: Too many open files".
So I guess that somehow the file descriptors (sockets) are kept open even after my process ends. Even several hours later it still gives this error; the only way I can deal with it is to reboot my computer. We cannot afford to run into this kind of problem in production without knowing how to handle it.
My question:
Is there anything in Mono that keeps running globally regardless of which Mono application is started, a kind of service I can restart without rebooting my computer?
Or is this not a Mono problem but a Unix problem, one we would run into even if we wrote it in Java/C++?
I checked the following, but there are no Mono processes alive, no sockets open and no files:
localhost:~ root# ps -ax | grep mono
1536 ttys002 0:00.00 grep mono
-
localhost:~ root# lsof | grep mono
(nothing)
-
localhost:~ root# netstat -a
Active Internet connections (including servers)
(no unusual ports are open)
For development I run under OS X 10.7.5. For production we can decide which platform to use.

This sounds like you need to set (or unset) the Linger option on the socket (using Socket.SetSocketOption). Depending on the actual API you're using there might be better alternatives (TcpClient has a LingerState property for instance).
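The equivalent knob exists in every socket API, so here is a minimal sketch of setting linger in Python rather than Mono/C# (the socket itself and the values are illustrative; in Mono you would pass a LingerOption to Socket.SetSocketOption instead):

```python
import socket
import struct

# SO_LINGER takes a (l_onoff, l_linger) pair. Setting l_onoff=1 with
# l_linger=0 makes close() reset the connection and discard the socket
# immediately, instead of leaving it lingering in the kernel after the
# process goes away.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

# Read the option back to confirm what the kernel stored.
onoff, linger = struct.unpack("ii", s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8))
print(onoff, linger)  # -> 1 0
s.close()
```

Whether an abortive close is what you want for a production chat server is a separate question; it trades a clean shutdown for guaranteed, immediate release of the descriptor.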

Related

Can I configure Celery Flower to run after I close my Unix shell?

I have inherited a corporate server & application that consists of several python scripts, html files, and Unix services from an IT employee that recently left my company. He left absolutely no documentation, so I'm struggling to support this application for my work group--I am not an IT professional (though I can read/write python, html, and a few other languages). I'm extremely unfamiliar with servers in general and Unix specifically.
From what I can tell from digging around, our application uses the following:
nginx
circus / gunicorn
rabbitmq-server
celery
celery flower
I've finally got most of these services running, but I'm struggling with Celery Flower. I've been able to launch Flower from my PuTTY SSH connection with the command:
/miniconda3/envs/python2/bin/flower start
but it appears to stop whenever I disconnect (server:5555 no longer shows the monitor web page). Is it possible to configure it to run in the background so I don't have to keep my SSH connection open 24/7? I saw in the Flower documentation that there is a persistence mode, but I'm not sure what it does.
Thanks for any suggestions!
Tom,
I assume you are using a Linux platform. If this is the case I suggest you use screen (or even tmux) to run Flower. It will keep the application running in the background as well as offer the additional benefit of allowing you to connect back to the process if you need to inspect output, stop the process, etc.
To start the application use screen -S Flower -d -m /miniconda3/envs/python2/bin/flower start.
To see if the process is still running, use screen -ls, which will list the sessions like:
There is a screen on:
17256.Flower (02/09/16 08:01:16) (Detached)
1 Socket in /var/run/screen/S-hooligan.
To connect back to it, use screen -r Flower.
If you have connected back to the screen session, disconnect with ^a ^d, assuming the escape character has not been changed from the default. To see a full list of key bindings, look at the man page; it's pretty straightforward.
You might also consider adding this command to the system crontab with an @reboot directive so that it starts when the system boots.
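A sketch of such a crontab entry (add it with crontab -e as the user that should own the process; the path is the one from the question):

```
@reboot screen -S Flower -d -m /miniconda3/envs/python2/bin/flower start
```

Note that @reboot only fires when the machine boots, so you still need to start the session by hand the first time.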

What is the status of long-running remote R sessions in ESS/Emacs?

I routinely run R remotely and have had great success with RStudio Server for doing so. However, Emacs/ESS is still preferable in many cases, particularly since I often work on multiple projects simultaneously. What is the state of the art for running ESS/R in Emacs when the expectation is that the connection will be broken? To be more concrete, I'd love to run a tmux session in Emacs so that I can connect to a long-running R session running in tmux (or screen). What is the status of ESS/Emacs support for such a scenario? This seems to be changing over time and I haven't found the "definitive" approach (perhaps there isn't one).
I do that all the time, at both home and work.
Key components:
Start emacs in daemon mode: emacs --daemon &. Now emacs is long-running and persistent as it is disconnected from the front-end.
Connect using emacsclient -nw in text mode using tmux (or in my case, the byobu wrapper around tmux). As tmux persists, I can connect, disconnect, reconnect,... at will while having several tabs, split panes, ... from byobu/tmux.
When nearby -- on home desktop connecting to home server, or at work with several servers -- connect via emacsclient -c. Now I have the standard X11 goodness, plotting etc pp. That is my default 'working' mode.
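Condensed into commands, the components above are (all taken from the steps just described; the tmux session name is arbitrary):

```
emacs --daemon &      # long-running Emacs, detached from any front-end
tmux new -s work      # persistent terminal session (or byobu)
emacsclient -nw       # text-mode client, run inside the tmux session
emacsclient -c        # X11 frame, for when you are nearby
```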
But because each emacs session has an R session (or actually several, particularly at work) I can actually get to them as I can ssh into the tmux/byobu session too.
Another nice feature is tramp-mode allowing you to edit a remote file (possibly used by a remote R session) in a local Emacs buffer as tramp wraps around ssh and scp making the remote file appear local.
Last but not least mosh is very nice on the (Ubuntu) laptop as it automagically resumes sessions when I am back on the local network at home or work. In my case mosh from Debian/Ubuntu on server and client; may also work for you OS X folks.
In short, it works like a dream, but may require the extra step of "disconnecting" emacs from the particular tmux shell in which you launched it. Daemon mode is key. Some of these sessions run on for weeks.
I started working like this maybe half a decade ago. Possibly longer. But using ESS to connect to remote Emacs session is much older -- I think the ESS manual already had entries for it when I first saw it in the late 1990s.
But I find this easier as it gives me "the whole emacs" including whatever other buffers and sessions I may need.
Edit: And just to be plain, I also use RStudio (Server) at home and work, but generally spend more time in Emacs for all the usual reasons.
More Edits: In follow-up to #kjhealy I added that I am also a fan of both tramp-mode (edit remote files locally in Emacs thanks to the magic that are ssh and scp) as well as mosh (sessions that magically resume when I get to work or back home).

ZEO deadlocks on uWSGI in master mode

Good day!
I am migrating to uWSGI deployment. The project is half on ZOPE3 and uses ZODB with ZEO for multiple access. If I start the uwsgi daemon like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file
Everything runs OK. It's the Single Process mode. No blocks or locks.
When I start the app like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file -p 3
Everything runs. It's the Preforking mode. We get good results, but some requests block. I suspect that the app blocks one request when a new instance starts. I have 2-3 locked requests; all the others work fine.
But when I start like this:
uwsgi_python27 --http :9090 --wsgi-file /path/to/file --master
The app launches, but no requests are served. When I curl localhost:9090/some_page, it never loads anything. No CPU or disk usage. It just locks.
Does someone know of any specific ZEO behavior that could result in this? If I use just FileStorage it runs normally without any deadlocks.
Any details about the master mode of uWSGI behavior would also be appreciated.
OK, so I managed to launch the damn thing. I suspect that ZEO's RPC works badly with Linux forking, so you need to start the app only in the forked process, not before forking.
See lazy or lazy-apps configuration options for uwsgi.
ref: http://uwsgi-docs.readthedocs.org/en/latest/ThingsToKnow.html
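Concretely, taking the command from the question, the fix would look like this (--lazy-apps loads the application after each worker forks, so the ZEO client connection is created per worker rather than inherited from the master):

```
uwsgi_python27 --http :9090 --wsgi-file /path/to/file --master --lazy-apps
```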

Amazon EC2 / RStudio : Is there are a way to run a job without maintaining a connection?

I have a long-running job that I'd like to run using EC2 + RStudio. I set up the EC2 instance, then set up RStudio as a page in my web browser. I need to physically move the laptop that I use to set up the connection and run the web browser throughout the day; my job then gets terminated in RStudio, but the instance is still running on the EC2 dashboard.
Is there a way to keep a job running without maintaining an active connection?
Does it have to be started / controlled via RStudio?
If you make your task a "normal" R script, executed via Rscript or littler, then you can run it from the shell ... and get to
use old-school tools like nohup, batch or at to control running in the background
use tools like screen, tmux or byobu to maintain one or multiple sessions in which you launch the jobs, and connect / disconnect / reconnect at leisure.
RStudio Server works in similar ways but AFAICT limits you to a single session per user / machine -- which makes perfect sense for interactive work but is limiting if you have a need for multiple sessions.
FWIW, I like byobu with tmux a lot for this.
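For example (the script name is hypothetical):

```
# fire-and-forget: survives logout, output captured in a log file
nohup Rscript analysis.R > analysis.log 2>&1 &

# or inside a reconnectable session
tmux new -s job        # start a session, run 'Rscript analysis.R' inside it
tmux attach -t job     # reattach later from a fresh SSH connection
```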
My original concern that it needed to maintain a live connection was incorrect. It turns out the error came from running out of memory; it just coincided with being disconnected from the internet.
An instance is started from the AWS dashboard and stopped or terminated from there as well. As long as it is still running, it can be accessed from an RStudio tab by copying the public DNS into the address bar and logging in again.

Get process occupying a port in Solaris 10 (alternative for pfiles)

I am currently using pfiles to find the process occupying a certain port on Solaris 10, but it causes a problem when run in parallel: pfiles can't be run in parallel for the same pid, and the second invocation returns with the error message:
pfiles: process is traced
Is there any alternative to pfiles for finding the process occupying a port on Solaris? Or any information on OS APIs to get port/process information on Solaris would also help.
A workaround would be to use some locking mechanism to avoid running them at the same time.
Alternatively, you might install lsof from a freeware repository and see if it supports concurrency (I think it does).
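With lsof installed, finding the owner of a port is a one-liner (the port number here is just an example):

```
lsof -i TCP:8080   # lists the PID and command bound to TCP port 8080
```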
I just tested pfiles on Solaris 11 Express and it doesn't seem to exhibit this issue.
