I'm currently using Sapper and I've just noticed some strange network behaviour: as soon as I reload my page, every file is requested twice.
Is that normal?
Thank you and have a nice day
I think it might be because there is a service worker that intercepts the request.
You can find a good explanation about this here: Service worker sends two requests
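To illustrate: even a trivial pass-through fetch handler like the sketch below (TypeScript; the real Sapper worker also precaches assets, so this is not its actual code) produces the doubled entries. DevTools lists both the page's request that the worker intercepts and the worker's own fetch to the network, even though only one request goes over the wire.

```typescript
/// <reference lib="webworker" />
declare const self: ServiceWorkerGlobalScope;

// Minimal pass-through fetch handler (sketch only).
// The page's request is intercepted here and the worker issues its own fetch,
// so DevTools shows two entries per file even though it is fetched once.
self.addEventListener('fetch', (event) => {
  event.respondWith(fetch(event.request));
});
```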
Hey, so I have three GCE instances set up which all run the same code. They're cloned from the same snapshot, so I'm pretty positive they're exactly the same.
For some reason, only one of these GCE instances is able to receive connections from external sources; the other two can't. I keep getting a "connection timed out" error in Firefox.
These instances all have the same network tags, so they should share the same firewall rules. (If you're hitting this problem too, make sure the right firewall rules are set under Networking in the Google Cloud Console before reading on.)
Since they're running the same code and have the same ports open, I have no idea what the problem could be, or how to figure out what it might be.
I was wondering what the best way to debug this might be. I believe they were working earlier, but now they're not.
Rebooting the instance seemed to fix this. It's not an adequate solution, however; I'll update my answer over the coming weeks if it happens again.
I'm having a lot of trouble with an EC2 instance and I can't figure out what's going on. We're using it as a web server and it seems to work fine for single connection stuff - loading a simple page, RDP connection, ping etc. But as soon as a single client computer has more than one connection active with the server (a good example is if I try to browse the web site while I'm also logged into the server via RDP) the whole connection becomes incredibly unstable.
The biggest, most annoying consequence of this is that the ASP.NET site we're running consistently fails to load some pages, since those pages use more than one connection. This wasn't a problem until a few days ago, when we were forced to migrate to different hardware because ours was apparently being retired by Amazon. Ever since then it's been flaky like this. Is it possible that there's a kink in Amazon's network, and that it could be resolved by stopping and starting the instance (and thus getting a different server)?
It turns out the problem was an underlying issue on Amazon's end. They investigated the issue and found a problem that they're correcting. I hope I haven't wasted too much StackOverflow brainpower with this dead-end of a question!
I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. It uses a simple Flash app that calls a .NET ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently a user has hit a problem where one or two uploads go up fine, but then the rest fail. It happens for him and for a client of his. After some debugging he thinks he's found the cause, but it seems odd, so I was wondering if anyone else has seen this.
Both he and his client are on fibre-optic broadband connections, so they have very fast upload speeds. When it was tested on a slower broadband connection, all the files uploaded without a problem. According to one of his developer friends, they had come across this before and had to put a slight delay in the upload script to make it work.
Does this sound plausible? Has anyone else hit this problem? Is there a known workaround to stop the uploads failing?
I haven't struck this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I'll do my best to answer.
There are two likely causes for this particular symptom, both generally outside of your control (I would have thought).
1) Packet loss
A faster connection pushes data out more aggressively, which can overrun a congested intermediate link or the receiving end. Where some links carry a very high volume of traffic they can choose to simply drop data (e.g. anything over the link's configured maximum), but TCP should be handling that, and it expects that kind of loss from time to time, so this seems less likely.
2) The receiving server
There may be an HTTP bottleneck getting into that server, or the server itself (CPU, RAM, etc.) may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact that they do, and that you have a specific, repeatable trigger (fast upload connections), means it's worth working from what you can actually observe.
If you really need to understand how it's all working, the next step might be to run a packet sniffer (such as Wireshark) to work out at the packet level exactly what is happening.
You can also program directly against TCP/IP sockets, which puts you at the lower network layers where you can watch the responses, timeouts, and so on yourself.
If you control the receiving server, you can do the same from that end, or at least review its error logs to see what is being reported as a problem.
A really basic check is to run pathping against the receiving server, if that's possible; it can highlight slow hops on the way to the server, or packet loss between your local machine and the far end.
The upshot? Put a slow-down function in the upload code, and that should at least make it work; a rough sketch of the idea is below.
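I can't speak to the Flash side, but purely to show the shape of the idea, here is a TypeScript sketch of a sequential upload loop with a short pause between files. The endpoint and the delay value are placeholders, not anything from your actual Umbraco setup.

```typescript
// Sketch only: upload files one at a time with a short pause between them.
// The handler URL and the 500 ms delay are placeholders; tune the delay to taste.
const UPLOAD_URL = '/umbraco/upload.ashx'; // hypothetical endpoint
const DELAY_MS = 500;

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function uploadQueue(files: File[]): Promise<void> {
  for (const file of files) {
    const body = new FormData();
    body.append('file', file, file.name);

    const response = await fetch(UPLOAD_URL, { method: 'POST', body });
    if (!response.ok) {
      throw new Error(`Upload of ${file.name} failed: HTTP ${response.status}`);
    }

    // Give the connection a moment to settle before starting the next file.
    await sleep(DELAY_MS);
  }
}
```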
Get in touch if you need any help analysing the Wireshark output.
I ran into a similar problem with an MVC2 website using a Flash uploader and Firefox. The servers were load-balanced behind a BIG-IP load balancer. What we found while debugging was that Flash, in Firefox, did not send the session ID on continuation requests, so the load balancer would route those requests to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it would upload fine; if it needed a second chunk, it failed. Because of this, the upload would fail after an unpredictable number of files.
To fix it, I wrote a Silverlight uploader.
I'm experiencing long page load times on my local development environment; page load times of up to 25 minutes. As far as I can tell, the browser is not waiting for the server to respond, but for some HTTP request that is being kept alive by something. If I run a PHP script that times out, the response is immediate, so I know it's not my scripts that are taking forever (and Firebug confirms this). I suspect my local site is trying to connect to a third party that won't respond. It doesn't matter which browser I use; the behaviour is the same.
My questions are:
How can I tell what is taking a long time? Firebug tells me nothing.
What should I look for in the HTTP headers that tells whatever is making the request to wait forever?
How can I get the site to fail early and feed back to me what is causing the delay? (A rough sketch of what I mean follows below.)
Can I watch the requests in real time?
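To make the third question concrete: my stack is PHP, but the behaviour I'm after looks roughly like this sketch (TypeScript, with a made-up third-party URL): an outbound call that gives up after a few seconds and tells me which dependency was slow, instead of hanging.

```typescript
// Sketch of the "fail early" behaviour I want (my real code is PHP; the URL
// below is made up). The request is aborted after a short timeout and the
// error names the slow dependency instead of hanging indefinitely.
async function fetchWithTimeout(url: string, timeoutMs = 3000): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { signal: controller.signal });
  } catch (err) {
    throw new Error(`Request to ${url} failed or exceeded ${timeoutMs} ms: ${err}`);
  } finally {
    clearTimeout(timer);
  }
}

// Example: a third-party call that would otherwise hang the page.
fetchWithTimeout('https://thirdparty.example.com/api/widget')
  .then((res) => console.log('third party responded:', res.status))
  .catch((err) => console.error(err.message));
```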
Any help is much appreciated.
You could install a proxy-level debugging tool, like Fiddler, and watch all of your HTTP traffic go by.
Otherwise, you're deep into network-diagnostics territory.
What do Firebug or the Chrome developer tools have to say? Look in the "Net" or "Network" sections respectively; they should be able to break down exactly what is or is not loading, and when.
You can also get down in the weeds using Speed Tracer if you want to see low level stuff like JS parsing or DOM events.
Even on big-time sites such as Google, I sometimes make a request and the browser just sits there. The hourglass will turn indefinitely until I click again, after which I get a response instantly. So, the response or request is simply getting lost on the internet.
As a developer of ASP.NET web applications, is there any way for me to mitigate this problem, so that users of the sites I develop do not experience this issue? If there is, it seems like Google would do it. Still, I'm hopeful there is a solution.
Edit: I can verify, for our web applications, that every request actually reaching the server is served in a few seconds even in the absolute worst case (e.g. a complex report). I have an email notification sent out if a server ever takes more than 4 seconds to process a request, or if it fails to process a request, and have not received that email in 30 days.
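The check itself is nothing clever; our real implementation is in ASP.NET, but the idea is roughly the following sketch (TypeScript/Express, with sendAlertEmail standing in for whatever notification mechanism you use).

```typescript
import express from 'express';

// Sketch of the slow-request alert described above. "sendAlertEmail" is a
// stand-in for a real mail/alerting hook; the threshold matches the 4 s limit.
const SLOW_THRESHOLD_MS = 4000;

function sendAlertEmail(message: string): void {
  // Placeholder: wire this up to your own notification system.
  console.error('ALERT:', message);
}

const app = express();

app.use((req, res, next) => {
  const start = Date.now();
  // When the response finishes, check how long it took and whether it failed.
  res.on('finish', () => {
    const elapsed = Date.now() - start;
    if (elapsed > SLOW_THRESHOLD_MS || res.statusCode >= 500) {
      sendAlertEmail(`${req.method} ${req.originalUrl} took ${elapsed} ms (status ${res.statusCode})`);
    }
  });
  next();
});

app.get('/report', (_req, res) => res.send('ok'));

app.listen(3000);
```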
It's possible that a request made from the client took a particular path that happened not to work at that particular moment. These things are unavoidable; they're simply a result of the internet, which is built on unreliable components and over which TCP can only provide a certain kind of guarantee.
Like someone else said - make sure when a request hits your server, you'll be ready to reply. Everything else is out of your hands.
They get lost because the internet is a big place and sometimes packets get dropped or servers get overloaded. To give your users the best experience make sure you have plenty of hardware, robust software, and a very good network connection.
You cannot control the pipe from the client all the way to your server. There could be network connectivity issues anywhere along the pipeline, including from your PC to your ISP's router which is a likely place to look first.
The bottom line is if you are having issues bringing Google.com up in your browser then you are guaranteed to have the same issue with your own web application at least as often.
That's not to say an ASP.NET application can't produce the same sort of downtime experience entirely on its own... "Test often" and "code defensively" are the key phrases to keep in mind.
Let's not forget browser bugs. Browsers aren't exactly perfect applications themselves...
This problem isn't only ASP-related; it falls under the whole concept of keeping your apps up, informally called the "five nines", or 99.999% availability.
The Wikipedia article is here.
If you look up the five nines you'll find tons of useful information, which you can apply as needed to your apps.