I have a Shiny app deployed on a virtual machine with the free version of Shiny Server. It works without any issues locally and when accessed through localhost (same intranet).
However, after adding Apache and an SSL certificate so the app can be accessed from anywhere, some disconnection issues have appeared.
The app disconnects when it needs to do a longer calculation (about a minute long). However, before disconnecting it shows the result of that calculation (in this case a plot with plotly).
I get these errors:
Firefox can’t establish a connection to the server at https://*****/websocket
Connection closed. Info: {"type":"close","code":4704,"reason":"Protocol error handling message: Error: Discard position id too big","wasClean":true}
The log file at /var/log/shiny-service/.log does not show any errors.
The last line is "Please specify in ggplotly() or plot_ly().", a message that doesn't cause any error.
I have already tried everything I could find, such as:
Apache Configuration:
keepAlive On
MaxKeepAliveRequests 0
Shiny Server Configuration:
app_init_timeout 300;
app_idle_timeout 300;
I have no idea what else to try to solve this, and any help would be highly appreciated.
Edit
This is how the app looks after it disconnects: the plot has been generated after about a minute, but the app still disconnects automatically.
I believe you can remedy the problem by increasing the values of app_init_timeout and app_idle_timeout. Please see this SO thread and these related docs. Also see this (regarding the Apache MaxKeepAliveRequests option).
I would find the best values by trial and error, but based on other users' comments they may need to be as high as 1800. I imagine the reason it works on your LAN is that the latency there would be much lower.
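To illustrate, and purely as a starting point for that trial and error (1800 is an example value, not a recommendation), the adjusted Shiny Server settings might look like this:
app_init_timeout 1800;
app_idle_timeout 1800;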
Having said all that, if the time taken is that high, there is probably something in the app that needs to be recoded in some way, or the app relies on a data set that is too large. You can probably test this by reducing the size of the data sent to plot.ly, or the scope of the calculations being computed, and then seeing whether the problem goes away.
Related
I use CheckPoint VPN to log in to my place of work's servers to work remotely. The VPN has been working (mostly) fine all year, and I haven't changed any of the settings, but this morning when I tried to log in it gave me the "Arg_NullReferenceException." I can't seem to find anything on this particular error on Google.
I have tried restarting my computer, because it's not the first issue I've had with CheckPoint VPN (though it is the first time I've seen that error message), and a restart usually resolves whatever issue I'm having. I've also tried creating a new connection with the same settings, but I'm getting the same error with that one, too.
I'm not entirely sure what other information I would need to provide. I'm also not sure if it's a problem on my end, or on the company servers. I have already emailed tech support, but I thought I should be thorough.
This is a known issue. I have been jumping through hoops trying to get the Capsule client to work. Raise a ticket with TAC if you have support. If not, you can download the E86 Endpoint Connect client and run it. That has been my workaround for this issue.
They just issued an update to Capsule via the Microsoft Store. It seems one of the recent Windows security updates broke the L2TP protocol within Windows.
I'm reading the debugging section of the NGINX documentation, and it says that to turn on debugging you have to compile or start nginx a certain way and then change a config option. I don't understand why this is a two-step process, and I'm inferring that it means, "you don't want to run nginx in debug mode for long, even if you're not logging debug messages, because it's bad".
Since the config option (error_log) already sets the logging level, couldn't I just always compile/run in debug mode and change the config when I want to see the debug level logs? What are the downsides to this? Will nginx run slower if I compile/start it in debug mode even if I'm not logging debug messages?
First off, to run nginx in debug mode you need to run the nginx-debug binary, not the normal nginx, as described in the nginx docs. If you don't do that, it won't matter if you set the error_log level to debug, as it won't work.
As for WHY it is a two-step process, I can't tell you exactly why that decision was made.
Debug spits out a lot of logs, fd info and much more, so yes, it can slow down your system, since it has to write all those logs. On a dev server that is fine; on a production server with hundreds or thousands of requests, you can see how the disk I/O generated by that logging can cause the server to slow down and other services to get stuck waiting on free disk I/O. Disk space can run out quickly too.
Also, what would be the reason to always run in debug mode? Is there anything special you are looking for in those logs? I guess I'm trying to figure out why you would want it.
And it's maybe worth mentioning that if you do want to run debug in production, at least use the debug_connection directive and log only certain IPs.
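For example, a minimal sketch of that setup, assuming the debug build (or the nginx-debug binary) is already in use; the addresses are placeholders for whichever clients you actually want to trace:
error_log /var/log/nginx/error.log info;
events {
    worker_connections 1024;
    debug_connection 203.0.113.10;
    debug_connection 192.168.1.0/24;
}
Connections from those addresses get debug-level logging, while everything else stays at the level set by error_log, which keeps the log volume and disk I/O manageable in production.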
After we deployed the new version of our ASP.NET C# app with a MySQL DB, we started having issues with the connections.
Yesterday I got the "Too many connections" error. I'm watching the open connections with
SHOW FULL PROCESSLIST, and they keep increasing during the day.
Is there a good way to figure out where our bug could be? Like checking the last query that a sleeping connection made?
Make sure your app is closing its connections to the DB properly. If your app is not closing connections, you will get the above error.
Connection pooling usually solves this problem. In your case, connections seem to stay open much too long, which means that in some branches of your software there is no definitive finally that closes the connection after it has been used. It's especially useful to diagnose problematic connection usage at a central point, because that central point can keep an eye on the number of open connections at any given time and maybe alert somebody to make a dump to analyze later.
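A minimal sketch of that pattern in C#, assuming the MySql.Data (Connector/NET) client and a placeholder connection string; the using blocks act as that definitive finally, so the connection is closed and handed back to the pool even if the query throws:
using System;
using MySql.Data.MySqlClient;   // assumes the MySql.Data (Connector/NET) package

class ConnectionExample
{
    // placeholder connection string; use your real settings here
    const string ConnectionString = "Server=localhost;Database=mydb;Uid=appuser;Pwd=secret;";

    static long CountOrders()
    {
        // both objects are disposed when the block exits, even if the query throws,
        // so the underlying connection goes back to the pool immediately
        using (var connection = new MySqlConnection(ConnectionString))
        using (var command = new MySqlCommand("SELECT COUNT(*) FROM orders", connection))
        {
            connection.Open();
            return Convert.ToInt64(command.ExecuteScalar());
        }
    }
}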
You could also increase the number of connections allowed on your MySQL instance. The setting in your my.cnf is "max_connections".
Lastly, if you want, you can try to decrease the "wait_timeout" or "interactive_timeout" properties of your instance. These settings regulate automatic closing of connections after certain amounts of time.
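For illustration, a sketch of where those settings live in my.cnf; the numbers are only examples and would need tuning for your workload:
[mysqld]
max_connections = 300
wait_timeout = 300
interactive_timeout = 300
Keep in mind that a wait_timeout shorter than your connection pool's idle lifetime can make the server drop connections the pool still considers live, so fixing the leak in the application is the real cure and the timeouts are just a safety net.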
I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. It uses a simple Flash app that calls a .NET .ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently a user has hit a problem where one or two uploads go up fine, but then the rest fail. This happens for him and a client of his. After some debugging, he thinks he has found the problem, but it seems weird, so I was wondering if anyone else has had this problem?
Both he and his client are on fibre-optic broadband connections, so they have really fast upload speeds. When it was tested on a slower broadband connection, all the files were uploaded with no problem. According to one of his developer friends, they had apparently come across this before and had to put a slight delay in the upload script to make it work.
Does this sound possible? Has anyone else hit this problem? Is there a known workaround to prevent the uploads from failing?
I have not struck this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I will do my best to answer this.
There are two possible causes for this particular symptom, both generally outside of your network control (I would have thought).
1) Packet loss
Of course, where some links receive a very high volume of traffic, they can choose to just drop a lot of data (e.g. anything over that link's maximum set size), but TCP/IP should be controlling that and expecting that sort of loss from time to time, so this seems less likely.
2) The receiving server
There may be an HTTP bottleneck into that server, or the receiving server's CPU, RAM, etc. may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact that they do, and that you have a specific, repeatable case, gives you something concrete to work from.
The next step, if you really need to understand what is going on, might be to use a packet sniffer (like Wireshark) to work out at a packet level exactly what is happening.
Socket programming can also work directly with the TCP/IP sockets, so you would be operating at the lower network layers and seeing the responses, timeouts, etc.
Also if you control the receiving server, then you can do the same from that end, or at least review the error logs to see what is getting thrown up as a problem.
A really basic method could be to run a pathping to the receiving server, if that is possible; that might highlight slow nodes on the way to the server, or packet loss between your local machine and the end server.
The upshot? Put a slow-down (delay) function in the upload code, and that should at least make the code work.
Get in touch if you need any analysis of the Wireshark output.
I have run into a similar problem with an MVC2 website using Flash uploader and Firefox. The servers were load balanced with a Big-IP load balancer. What we found out in debugging this is that Flash, in Firefox, did not send the session ID on continuation requests and the load balancer would send continuation requests off to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it would upload fine. If it required a second chunk, it failed. Because of this the upload would fail after an undetermined number of files being uploaded.
To fix it, I wrote a Silverlight uploader.
I've got a number of ASP.Net websites (.Net v3.5) running on a server with a SQL 2000 database backend. For several months, I've been receiving seemingly random InvalidOperationExceptions with the message "Internal connection fatal error". Sometimes there's a few days in between, while other times there are multiple errors per day.
The exception is not limited to one site in particular, though they share business and data access assemblies. The error seems to always be thrown from SqlClient.TdsParser.Run(). It sometimes is thrown from old-school direct SqlCommand.Execute() calls, while other times it is thrown from Linq2Sql code.
I've been assured by the network guys that there are no errors or packets lost on their end. Has anyone else experienced this? Could it be a driver problem? We have been unable as of yet to pinpoint a specific trigger for this exception.
We're running IIS 6 on Windows Server 2003.
After a few months of ignoring this issue, it started to reach a critical mass as traffic gradually increased. Under heavy load, including some crawlers, things got crazy and these errors poured in nonstop.
Through trial and error, we eventually tracked down a handful of SqlCommand or LINQ queries whose SqlConnection wasn't closed immediately after use. Instead, through some sloppy programming originating from a misunderstanding of LINQ connections, the DataContext objects were disposed (and connections closed) only at the end of a request rather than immediately.
Once we refactored these methods to immediately close the connection with a C# "using" block (freeing up that pool for the next request), we received no more errors. While we still don't know the underlying reason that a connection pool would get so mixed up, we were able to cease all errors of this type. This problem was resolved in conjunction with another similar error I posted, found here: Why is my SqlCommand returning a string when it should be an int?
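For anyone hitting the same thing, here is a minimal sketch of the "using" pattern described above, with hypothetical names; the same idea applies to a LINQ to SQL DataContext (wrap it in a using block instead of letting it live until the end of the request):
using System.Data.SqlClient;

class OrderRepository
{
    // placeholder connection string
    const string ConnectionString = "Data Source=dbserver;Initial Catalog=Shop;Integrated Security=True;";

    static int CountOrders()
    {
        // the connection is opened, used, and returned to the pool inside this method,
        // instead of staying open until the request ends
        using (var connection = new SqlConnection(ConnectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}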
Sounds like the database connection is getting dropped or timing out.
We recently had similar issues moving to IIS 6 from IIS 5 connecting to SQL 2000. Our issue was solved by increasing the number of ephemeral ports available.
Look at the usage of the ephemeral ports by the IIS server. The default maximum number of ports available is normally 4000. You might want to consider increasing this if the sites on your server are particularly busy or your application is making a lot of database calls.
You can monitor these first to see if you are going over the max limit.
Search the Microsoft Knowledge Base for "MaxUserPort" and "TcpTimedWaitDelay" and make the necessary registry changes. Make sure you back up the registry or snapshot the server before making the changes. You will need to reboot for the changes to take effect.
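For reference, both values live under the same registry key; the change might look something like this (65534 and 30 are the values commonly suggested in those KB articles; treat them as examples, not recommendations):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
MaxUserPort (DWORD, decimal) = 65534
TcpTimedWaitDelay (DWORD, decimal seconds) = 30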
You should double-check that your database and recordset connections are being closed after use. Not closing them will use up this port range unnecessarily.
Check the efficiency of your stored procedures anyway, as they might be taking longer than they need to.
"If you rapidly open and close 4000 sockets in less than four minutes, you will reach the default maximum setting for client anonymous ports, and new socket connection attempts fail until the existing set of TIME_WAIT sockets times out." - from http://support.microsoft.com/kb/328476
Check your server's LOG folder (\program files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG or similar) for files named SqlDump*.mdmp and SqlDump*.txt. If you do find any you'll have to take it to Product Support.
I was creating a new EF Core project and was trying to create the database on an external Linux server instead of a Windows server or a local one. After hours of searching I found out that I was using MySQL instead of Microsoft SQL Server.
I found it weird that everyone was using 1433 instead of the usual 3306. So to fix my 'Internal connection fatal error' I had to set up a Docker instance of SQL Server bound to its default port of 1433.
It literally was that simple. In the Docker repo, look for "microsoft-mssql-server" and run the image as neatly described in its description. Everything works now and I am able to push my database from my EF Core project to the external server.
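For anyone in the same situation, a rough sketch of what the EF Core side can look like once SQL Server is listening on 1433; the server name, database, credentials, and entity are placeholders, and UseSqlServer comes from the Microsoft.EntityFrameworkCore.SqlServer package:
using Microsoft.EntityFrameworkCore;

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
    {
        // ",1433" is SQL Server's default port; 3306 is MySQL's, which would need
        // a different EF Core provider (e.g. Pomelo.EntityFrameworkCore.MySql)
        options.UseSqlServer(
            "Server=my-linux-host,1433;Database=ShopDb;User Id=sa;Password=<your-password>;TrustServerCertificate=True;");
    }
}

public class Order
{
    public int Id { get; set; }
}
From there, the usual migrations (dotnet ef database update, or context.Database.Migrate() at startup) create the schema on the external server.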