My server receives two GET requests when I use Ctrl+Shift+R in the browser, but when I use Ctrl+R, my server only receives one GET request. Why does my server receive two requests when Shift is held? (I believe using the Shift key clears cookies or something.)
Google Chrome sends multiple requests to fetch a page, and that is, apparently, not a bug but a feature. And we as developers just have to deal with it.
As far as I could dig out in five minutes, Chrome does that just to make surfing faster: if one connection gets lost, the second will take over.
I guess that if the website is well developed, its functionality won't break because of this, since multiple requests are nothing new.
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers and sends each time back to the server. The server inserts the finish time in the db, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery AJAX calls, so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection is bad, my first version of this page would put up a message that the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a class of notSent to the tag of the finish time. The finish place and all of the finish places that would normally come from the server are greyed out, indicating the data is no longer valid (until it can communicate with the server).
When the browser finds itself back online, a simple jQuery each loop over each notSent element re-sends the AJAX requests, and if they all complete it processes the returned finish place information and displays it as up to date.
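A minimal sketch of that re-send loop, assuming a hypothetical save_time.php endpoint, illustrative data-* attributes holding the queued values, and a made-up greyedOut class (none of these names come from the actual app):

```javascript
// Re-send queued finish times once the browser reports it is back online.
window.addEventListener('online', function () {
  $('.notSent').each(function () {
    var $row = $(this);
    $.ajax({
      url: 'save_time.php',                    // hypothetical endpoint
      type: 'POST',
      dataType: 'json',
      data: {
        racerId: $row.data('racer-id'),        // illustrative data-* attributes
        finishTime: $row.data('finish-time')
      }
    }).done(function (response) {
      // The server returns the recalculated finish place; show it and
      // mark the row as synced again.
      $row.text(response.finishPlace).removeClass('notSent greyedOut');
    }).fail(function () {
      // Leave the notSent class in place so the next 'online' event retries it.
    });
  });
});
```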
It also disables all external links on the page when the browser is offline. This keeps the user from losing the data entry page by accident by clicking a link that would just give them a "page not found" error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers give the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box)
AppCache manifest, for controlling the cache of the page so that a reload while offline would get the cached version. After much reading I came to the conclusion that this could work on a static page, but not on mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service Workers, which seem to be the likely future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new. (A minimal caching sketch follows below.)
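For reference, a bare-bones service worker that serves the data entry page from a cache when the network is unreachable might look roughly like this (a sketch only; the cache name and file paths are made up). The page would register it once with navigator.serviceWorker.register('/sw.js').

```javascript
// sw.js - cache the app shell at install time, then fall back to the cache
// whenever the network is unreachable (e.g. the user reloads while offline).
var CACHE_NAME = 'race-timer-v1';  // made-up cache name

self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open(CACHE_NAME).then(function (cache) {
      return cache.addAll(['/', '/timer.html', '/timer.js']);  // illustrative paths
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request).catch(function () {
      return caches.match(event.request);  // offline: serve the cached copy
    })
  );
});
```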
Now I am stuck, leaning towards preventing browser reloads and deferring learning Service Workers until there is more support and there are better examples for dynamic content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a Service Worker when you regain connectivity. If Service Workers are not available in your browser, it can sync the next time your user opens the browser.
You have a similar example of deferred requests explained in the Service Worker Cookbook.
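A rough outline of that pattern, assuming the unsent finish times are queued locally in PouchDB and using a made-up sync tag and helper name:

```javascript
// In the page: ask the browser to fire a one-off background sync
// once connectivity returns.
navigator.serviceWorker.ready.then(function (registration) {
  return registration.sync.register('sync-finish-times');  // made-up tag
});

// In the service worker: Background Sync fires this event when the network
// is back (or at the next browser start if the tab was closed).
self.addEventListener('sync', function (event) {
  if (event.tag === 'sync-finish-times') {
    event.waitUntil(
      // Placeholder helper: push the locally queued PouchDB docs to the
      // server, e.g. with new PouchDB('finish-times').replicate.to(remoteUrl)
      pushQueuedFinishTimes()
    );
  }
});
```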
I'm using Burp suite to see the requests my computer sends out when I go to www.google.com, and noticed that there were a lot of different requests sent. Why is this the case? Shouldn't it just be one GET request to Google's server, and then done? Instead it's sending maybe 10 GET requests and a handful of POST requests.
There's one GET request for the page itself (and more for every image, CSS, and JavaScript file), and then there can be many other AJAX GET/POST requests made afterward for things like updating the suggestions as you type, sending location information, or doing things with the cookies on your computer. Pretty much any time new information is displayed without reloading the page, there's an AJAX request going on. AJAX is also used to defer expensive requests until after the initial page has loaded, so the page appears faster. There are many uses.
Here's a tutorial for how AJAX works if you would like to do it yourself: AJAX Tutorial
Note: AJAX is a method of sending requests, it's not its own programming language. It stands for "Asynchronous JavaScript and XML."
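For illustration, a single AJAX call from a page that is already loaded looks roughly like this (the endpoint is made up, not Google's real one); a proxy like Burp would show it as an extra GET even though the page never reloads:

```javascript
// Fire an asynchronous request in the background, then update part of
// the DOM with the response instead of reloading the whole page.
fetch('/suggest?q=weather')  // made-up endpoint
  .then(function (response) { return response.json(); })
  .then(function (suggestions) {
    document.getElementById('results').textContent = suggestions.join(', ');
  });
```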
While it is hard to give a 100% answer to your question (I cannot tell which requests your computer sends to Google), one possibility is that after the first GET request Google sends back a bunch of HTML/CSS/JavaScript. That JavaScript is then executed on your computer (client side) and might trigger further requests to Google's servers. However, this is just one possibility.
Cheers,
Christian
Normally every element of a page is requested with a separate GET (CSS, images, scripts).
So you'll hardly ever find a site that is loaded by one single GET request.
I'm trying to implement some push technology on an app of mine. I intend to use node.js for that but I don't think it is relevant for my question. What I will do is basically long-polling to the server, and as I understand the event driven way nodejs works, I don't have to care much about the server side of the stuff.
My only worry is on the client side: after how much time will the browser stop waiting for the answer? It is a programming question because I need to release a response before that timeout is reached, so that the long poll can be restarted.
Side question: when the browser stops waiting, what answer does it give to the request?
I have done stuff like this before, and the answer to the question is quite simple: it depends on the browser, and how the user has configured it.
In FF there is a setting somewhere in about:config that controls this (I forget what the setting is, exactly). IE's default timeout is controlled in the registry, and is documented here. I never found the answer for Chrome or Opera - I don't think it's controllable. Opera seems to give up after no data has been received for around 20 seconds, but it also seems to vary somewhat - no idea why.
I concluded that the best thing to do here is to design your architecture so the page is reloaded periodically, or, if you're using AJAX, to periodically cancel the request and start a new one (I found that 1 minute works well). Also, keep pushing little bits of data to the browser every few seconds, as this will prevent Opera from giving up. You can simply push a javascript:void(0); event to keep the connection alive without actually doing anything on the client side.
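A minimal client-side sketch of that restart cycle, assuming a hypothetical /poll endpoint on the node.js side, the one-minute cap suggested above, and a placeholder handler for whatever the server sends back:

```javascript
// Long-poll the server, but never let a single request sit open for more
// than 60 seconds; when it completes or times out, start the next one.
function longPoll() {
  $.ajax({
    url: '/poll',        // hypothetical long-polling endpoint
    timeout: 60000,      // cancel after 1 minute, as suggested above
    success: function (data) {
      handleServerEvent(data);  // placeholder for your own handler
    },
    complete: function () {
      longPoll();        // reconnect immediately, success or failure
    }
  });
}
longPoll();
```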
To answer your side question: Nothing. The browser simply closes the TCP connection and no further data is exchanged. How the server handles this is no longer the concern of the browser.
It is quite easy to update the interface by sending a jQuery AJAX request and updating the page with the new content. But I need something more specific.
I want to send the response to clients without their having requested it, and update the content whenever the server finds something new. There should be no need to send an AJAX request every time; when the server has new data, it sends a response to every client.
Is there any way to do this using HTTP or some specific functionality inside the browser?
WebSockets, Comet, HTTP long polling.
This is called server push (you can also find it under the name Comet). Do a search using these keywords and you will find a bunch of examples, tools, and so on. No special protocol is required for it.
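As a concrete example of the push option, a minimal WebSocket client might look roughly like this (the URL and message format are made up):

```javascript
// Open a persistent connection; the server can now push data at any time
// without the page having to ask for it first.
var socket = new WebSocket('wss://example.com/updates');  // made-up URL

socket.onmessage = function (event) {
  var update = JSON.parse(event.data);  // assumes the server sends JSON
  document.getElementById('content').textContent = update.text;
};

socket.onclose = function () {
  // Connections drop; a real page would reconnect, ideally with a backoff.
};
```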
Aaah! You are trying to break the principles of the web :) You see, if the web were pure MVC (model-view-controller), the 'server' could actually send messages to the client(s) and ask them to update. The issue is that the server could be load balanced, and the same request could be sent to different servers. Now if you were to send a message back to the client, you'd have to know who is connected to the server. Let's say the site is quite popular and you have about 100,000 people connecting to it every day. You'd actually have to store the IPs of each of them to know where on the internet they are located and to be able to "push" them a message.
Caveats:
What if they are no longer browsing your website? You see, currently there is no way to log out automatically when you close your browser. The server needs to check after a fixed timeout whether you have logged out (or you send a new nonce with every response to prevent the server from having to do that check).
What about a system restart/crash etc? You'd lose all the IPs that you were keeping track of and you are back to square one - you have people connected to you but until you receive new requests you can't really "send" them data when they may be expecting it as per your model.
Let's take the example of Facebook's news feed or the "Most recent" link near the top right: sometimes while you are browsing your wall you see that the number next to "Most recent" has gone up, or a new feed item has come to the top of your wall. It's the client sending periodic requests to the server to find out what was updated, rather than the other way round.
You see, it keeps things simple and RESTful. You may feel it's inefficient for the client to "poll" the server to pull the data and you'd prefer push, but the design of the server gets simplified :)
I suggest AJAX polling is the best way to go: you are distributing computation to the clients and keeping it simple (KISS principle :). A small polling sketch follows.
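A minimal version of that polling approach, with a made-up /updates endpoint and a 10-second interval chosen purely for illustration:

```javascript
// Ask the server every 10 seconds whether anything changed since the last
// item we saw, and update the page only when there is something new.
var lastSeenId = 0;

setInterval(function () {
  $.getJSON('/updates', { since: lastSeenId }, function (items) {  // made-up endpoint
    items.forEach(function (item) {
      $('#feed').prepend($('<div>').text(item.text));  // newest on top
      lastSeenId = Math.max(lastSeenId, item.id);
    });
  });
}, 10000);
```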
Of course you can get around it, the question is, is it worth it?
Hope this helps :)
RFC 6202 might be a good read.
I'm encountering a situation where it takes a long time for ASP.NET to generate the reply with the web page (more than 2 hours). It is due to the code-behind running for a long time (a very long, slow loop).
The browser (both IE and Firefox) stops waiting for the reply (after about an hour) and gives the generic "cannot display webpage" error (similar to what you would see if you tried to navigate to a non-existent server).
At the same time the ASP.NET app keeps going (I can see it in the debugger) and eventually completes.
Why does this happen? Are there any settings in web.config to influence this? I'm hoping there's a timeout setting that I'm missing that's causing this.
Maybe a setting in IE or Firefox? But I think they wait as long as the server keeps the connection alive.
I'm experiencing this even when I launch app in debug mode (with compilation debug="true") on my local machine from VS (so it's not running on IIS, but on ASP.NET Dev Server).
I know it's bad that it takes so long to generate the page, but it doesn't matter at this stage. Speeding it up would take a lot of extra work and the delay doesn't really matter. This is used internally.
I realize I can redesign around this issue by moving the logic to a background process and getting notified via AJAX when it's done, or by pulling it out into a desktop app or service or whatever. Something along those lines will be done eventually, but that's not what I'm asking about right now.
Sounds like you're using IE and it is timing out while waiting for a response from the server.
You can find a Microsoft Knowledge Base article explaining how to adjust this limit:
http://support.microsoft.com/kb/181050
CAUSE
By design, Internet Explorer imposes a time-out limit for the server to return data. The time-out limit is five minutes for versions 4.0 and 4.01 and is 60 minutes for versions 5.x, 6, and 7. As a result, Internet Explorer does not wait endlessly for the server to come back with data when the server has a problem.
RESOLUTION
In general, if a page does not return within a few minutes, many users perceive that a problem has occurred and stop the process. Therefore, design your server processes to return data within 5 minutes so that users do not have to wait for an extensive period of time.
The entire paradigm of the Web is of request/response. Not request, wait two hours, response!
If the work takes so long to do, then have the page request trigger the work, and then not wait for it. Put the long-running code into a Windows service, and have the service listen to an MSMQ queue (or use WCF with an MSMQ endpoint). Have the page send requests for work to this queue. The service will read a request, maybe start up a new thread to process it, then write a response to another queue, file, or whatever.
The same page, or a different "progress" page, can poll the response queue or file for responses and update the user, assuming the user still cares after two hours.
For something that takes this long, I would figure out a way to kick it off via AJAX and then periodically check on its status. The background process should update some status variable on a regular basis and store its data in the cache or session when complete. When it completes and the browser detects this (via AJAX), have the browser do a real postback (or a GET, by changing location.href), pick up the saved data, and generate the page.
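Client-side, that check could be as simple as the following sketch, where /start-job, /job-status, and /job-result are stand-ins for whatever handlers you expose:

```javascript
// Kick off the long-running job, then poll its status every 30 seconds.
// When the server reports it is done, navigate so the page can pick up
// the saved result from cache/session and render normally.
$.post('/start-job');  // stand-in for the kickoff handler

var poller = setInterval(function () {
  $.getJSON('/job-status', function (status) {  // stand-in status handler
    if (status.complete) {
      clearInterval(poller);
      window.location.href = '/job-result';  // real GET to pick up the saved data
    }
  });
}, 30000);
```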
I have a process that can take a few minutes, so I spin off a separate thread and send the result via FTP. If an error occurs in the process, I send myself an error message including the stack trace. You may want to consider sending the results via email or to some place other than the browser, and using a thread as well.