Is it bad to always run nginx-debug? - nginx

I'm reading the debugging section of the NGINX docs, and it says that to turn on debugging you have to compile or start nginx a certain way and then change a config option. I don't understand why this is a two-step process, and I'm inferring that it means, "you don't want to run nginx in debug mode for long, even if you're not logging debug messages, because it's bad".
Since the config option (error_log) already sets the logging level, couldn't I just always compile/run in debug mode and change the config when I want to see the debug-level logs? What are the downsides to this? Will nginx run slower if I compile/start it in debug mode even if I'm not logging debug messages?

First off, to run nginx in debug mode you need to run the nginx-debug binary, not the normal nginx, as described in the nginx docs. If you don't do that, it won't matter if you set the error_log level to debug, as it won't work.
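With the official packages this looks something like the following (service names and the log path are illustrative and vary by distro):
service nginx stop
service nginx-debug start
and then, in nginx.conf:
error_log /var/log/nginx/error.log debug;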
As for WHY it is a two-step process, I can't tell you exactly why that design decision was made.
Debug mode spits out a lot of logs, fd info and much more, so yes, it can slow down your system, since it has to write all those logs. On a dev server that is fine; on a production server with hundreds or thousands of requests, you can see how the disk I/O generated by that logging can cause the server to slow down and other services to get stuck waiting on free disk I/O. Disk space can run out quickly, too.
Also, what would be the reason to always run in debug mode? Is there anything special you are looking for in those logs? I guess I'm trying to figure out why you would want it.
And it's maybe worth mentioning that if you do want to run debug in production, at least use the debug_connection directive and log debug output only for certain IPs.
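For example (the addresses are illustrative), in the events block of nginx.conf:
events {
    # write debug-level messages only for connections from these addresses;
    # everything else keeps logging at the configured error_log level
    debug_connection 192.168.1.1;
    debug_connection 192.168.10.0/24;
}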

Related

Shiny server disconnecting app after adding SSL and Apache?

I have a shiny app deployed on a virtual machine with the free version of shiny server. It works without any issues locally, and when accessed through the localhost (same intranet).
However, after adding Apache and an SSL certificate so that the app can be accessed from anywhere, some disconnection issues have appeared.
The app disconnects when it needs to do a longer calculation (~1 minute). However, before disconnecting it shows the result of that calculation (in this case a plot made with plotly).
I get these errors:
Firefox can’t establish a connection to the server at https://*****/websocket
Connection closed. Info: {"type":"close","code":4704,
"reason":"Protocol error handling message: Error: Discard position id too big","wasClean":true}
The log file at /var/log/shiny-service/.log does not show any errors.
This is the last line: "Please specify in ggplotly() or plot_ly()", a message that doesn't cause any error.
I have already tried everything I could find, such as:
Apache Configuration:
KeepAlive On
MaxKeepAliveRequests 0
Shiny Server Configuration:
app_init_timeout 300;
app_idle_timeout 300;
I have no idea what else to try to solve this, and any help would be highly appreciated.
Edit
This is how the app looks after it disconnects: the plot has been generated, but after a minute the app still disconnects automatically.
I believe you can remedy the problem by increasing the values of app_init_timeout and app_idle_timeout. Please see this SO thread and these related docs. Also see this (regarding the Apache MaxKeepAliveRequests option).
I would find the best values by trial and error, but based on other users' comments they may need to be as high as 1800. I imagine the reason it works on your LAN is that the latency there is much lower.
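As a sketch, in /etc/shiny-server/shiny-server.conf (1800 is only a starting point for the trial and error mentioned above, not a recommendation; remember to restart shiny-server afterwards):
# raise both timeouts for the affected app's location/server scope
app_init_timeout 1800;
app_idle_timeout 1800;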
Having said all that, if the time taken is that high, then there's probably something in the app that needs to be recoded in some way, or that relies on a data set that is too large. You can probably test this by reducing the size of the data sent to plot.ly, or the scope of the calculations being computed, and seeing if the problem goes away.

Does collectgarbage in Lua also include memory consumed by lua_shared_dict?

I am trying to debug an OOM error that I get in my application using openresty. I see that adding collectgarbage("collect") followed by collectgarbage("count") in my Lua scripts might be one way to debug the issue. But does collectgarbage("count") count the memory consumed by lua_shared_dict as well?
If not, what would be an alternative way to check the memory consumed by lua_shared_dict(s)?
Suggestions for debugging OOMs in an openresty app are also appreciated.
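For reference, this is roughly what I am adding (a minimal sketch; the log level is arbitrary):
-- force a full GC cycle, then report the Lua VM heap size;
-- collectgarbage("count") returns kilobytes used by the Lua VM
collectgarbage("collect")
ngx.log(ngx.ERR, "Lua VM memory: ", collectgarbage("count"), " KB")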

Disable internet access when calling java -jar

I'm testing six distinct .jar files that all need to handle the possibility of no online access.
Unfortunately, I am on a network disk, so disabling the network connection or pulling the ethernet cable does not work unless I move all the files to /tmp or /scratch and change my $HOME environment variable, all of which I'd rather not do as it ends up being a lot of work.
Is there a way to invoke java -jar and disable the process from accessing the internet? I have not found any such flag in the man-pages. Is there perhaps a UNIX-way of doing this, as in:
disallowinternetaccess java -jar Foo.jar
Tell your Java program to access the network through a proxy. To cover all internet access, this would be a SOCKS5 proxy.
java -DsocksProxyHost=socks.example.com MyMain
I believe that if no proxy is running, you should get an appropriate exception in your program. If you need full control of what is happening, you can look into (and possibly modify) http://jsocks.sourceforge.net/
See http://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html for details.
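A variation on this (my own assumption, not from the linked docs): point the proxy settings at a local port where nothing is listening, so every outbound connection fails fast with an exception:
java -DsocksProxyHost=127.0.0.1 -DsocksProxyPort=1 -jar Foo.jar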
Note: You can do this without any native Unix stuff, so this question fits perfectly fine on SO.
You just need to turn on the SecurityManager: -Djava.security.manager=default
See details: https://stackoverflow.com/a/4645781/814304
With this solution you can even control which resources you want to expose and which to hide.
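As a minimal sketch (the policy file name is illustrative): run with the security manager and a policy that grants no SocketPermission, so any network call throws an AccessControlException.
java -Djava.security.manager -Djava.security.policy==no-net.policy -jar Foo.jar
where no-net.policy contains, for example:
grant {
    // allow file access, but grant no java.net.SocketPermission
    permission java.io.FilePermission "<<ALL FILES>>", "read,write";
};
The double equals sign in -Djava.security.policy== makes this policy file replace, rather than add to, the system policy.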

Best way to collect logs from a remote server

I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script, residing on the local server, to fire the commands on the target systems. I then redirect the output of the expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I have implemented fedorqui's answer:
Created a (shell) script that runs the required commands on the target servers.
Deployed this script to all servers.
Executed this script via expect, from my local (central) server.
Finally collected the logs individually from each server after successful completion, and processed them (a sketch of this loop is below).
The solution has been working fine without a glitch till now.
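For illustration, the central loop looks roughly like this (the hostnames, the expect script name, and the paths are made up):
# run the deployed script on each target, then pull its log back locally
for host in server1 server2 server3; do
    /usr/local/bin/expect run_remote.exp "$host"
    scp "$host:/var/tmp/output.log" "logs/$host.log"
done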

php5-fpm crashes

I have a webserver (nginx) running on Debian, and php5-fpm randomly seems to crash: it replies with 504 Bad Gateway if I call PHP files.
When it is in a crashed state and I run sudo /etc/init.d/php5-fpm status, it says that it is running, but it still gives 504 Bad Gateway until I do sudo /etc/init.d/php5-fpm restart.
I'm thinking that it may have to do with one of my PHP files, which is in an infinite loop until a certain event occurs (a change in the MySQL database) or until it gets timed out. I don't know if that is generally a good idea, or if I should make the loop quit itself before a timeout occurs.
Thanks in advance!
First, look at the nginx error.log for the actual error. I don't think PHP crashed; rather, your loop is using all available php-fpm processes, so there is none free to serve your next request from nginx. That should produce a timeout error in the logs (nginx will wait for some time for an available php-fpm process).
Regarding your second question: you should not use infinite loops for this. And if you do, insert a sleep() call inside the loop; otherwise you will overload your CPU with that loop, and also the database with queries.
Also, I guess it is enough to have one PHP process in that loop waiting for the event. In that case, use some type of semaphore (a file, or a flag in the db) to let other processes know that one is already waiting for that event; otherwise you will always eat up all available PHP processes.
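As a sketch of a bounded loop with sleeping (the function name and the limits here are illustrative, not from your code):
<?php
// give up before php-fpm's own timeout kills the request
$deadline = time() + 25;
while (time() < $deadline) {
    if (event_has_occurred()) {   // hypothetical check against the MySQL table
        break;
    }
    sleep(1);                     // avoid busy-waiting on the CPU and the DB
}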
