I am testing a comet app that uses the forever-frame technique. The problem I am having in Firefox is that when an update command is issued from Firefox (an AJAX post updates a DB, which triggers a DB listener to raise an event that prints script tags into the iframes of listening clients) and there are multiple script prints, only one or a few of them get processed, never all of them. Yet I can see that they are all present in the iframe.
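To make the setup concrete, what the server prints into the hidden iframe looks roughly like this (simplified; parent.comet.dispatch stands in for the real handler the page exposes):

    <!-- forever-frame document: the server keeps this response open
         and flushes one script block per event -->
    <script>parent.comet.dispatch({id: 1, text: "first update"});</script>
    <script>parent.comet.dispatch({id: 2, text: "second update"});</script>
    <script>parent.comet.dispatch({id: 3, text: "third update"});</script>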
Chrome and even IE6 do not suffer from this.
But here is the real puzzler: if the update is triggered from another browser, Firefox will work, even though it is the exact same content that has been printed into the iframe.
So to sum up: if Firefox issues the AJAX query causing the update, it does not process all of the script tags.
If another browser issues the AJAX query, the Firefox browser will process all of the tags as it should.
Any ideas?
Hope I was clear enough.
Thanks
I ran into the same issue implementing our comet solution. It appeared that Firefox would only execute one script at a time. In the end I went with two iframes: one for the long polling/server push, and a second one for commands being sent to the server.
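A rough sketch of that layout (all names and URLs here are illustrative, not from our actual app):

    <!-- stays open forever; the server streams <script> blocks into it -->
    <iframe id="pushFrame" src="/comet/stream" style="display:none"></iframe>

    <!-- reused for outgoing commands so they never touch the push channel -->
    <iframe id="commandFrame" src="about:blank" style="display:none"></iframe>

    <script>
      // Issue a command by pointing the second iframe at the command URL,
      // leaving the long-lived push iframe untouched.
      function sendCommand(params) {
        document.getElementById('commandFrame').src = '/comet/command?' + params;
      }
    </script>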
We are having an issue in a Blazor WebAssembly app that uses PayPal buttons to let users add funds to the system's wallet.
The issue is random: sometimes the popup closes immediately after opening and does not even allow users to log in to PayPal to continue with the checkout process.
This is the js file: https://github.com/efonsecab/FairPlayTube/blob/main/src/FairPlayTubeSln/FairPlayTube.Client/wwwroot/js/paypal/paypal.js
This is the Blazor component which calls the js to render the buttons
https://github.com/efonsecab/FairPlayTube/tree/main/src/FairPlayTubeSln/FairPlayTube.Client/CustomComponents/Paypal
It happens constantly in dev, but much less often in prod.
This is the error shown in the console:
No ack for postMessage wn() in https://www.sandbox.paypal.com in 10000ms
The thing is that sometimes everything works successfully on the first try, but other times it shows this error.
Any ideas how to solve it?
If it closes immediately after opening, the createOrder function's invocation of actions.order.create is failing, likely due to an invalid request. Log everything, look at the console errors/output, and find out what you are doing wrong.
The object you are passing to actions.order.create can first be saved to a variable, logged with console.log(JSON.stringify(somevar, null, 2)), and then returned with return actions.order.create(somevar).
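A sketch of what that looks like inside the button setup (the purchase_units body and the container selector are only example values):

    paypal.Buttons({
      createOrder: function (data, actions) {
        // Build the request in a variable so it can be inspected first.
        var somevar = {
          purchase_units: [
            { amount: { currency_code: "USD", value: "10.00" } }
          ]
        };
        console.log(JSON.stringify(somevar, null, 2)); // dump the exact payload
        return actions.order.create(somevar);
      }
    }).render("#paypal-button-container");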
After several hours searching and trying numerous suggestions, I finally found the answer with this post: https://stackoverflow.com/a/66473740
For me, this solution has worked 100% of the time so far: I disabled JavaScript debugging.
I'm hosting a website that serves global regions, and recently a weird issue came up.
I have already checked other posts on the Internet, including the much-discussed Stack Overflow question Chrome net::ERR_HTTP2_PROTOCOL_ERROR 200 after a reconnect, but none of the answers helped.
The website is built on a legacy ASP.NET Web Forms "website" (not "web application") project.
There's an important function that performs several processes once a user clicks a button on the website.
Let's say there are 100 lines of code in that function, and I've added some flags to log which steps have been hit and processed.
The weird situation is:
Only users in China are facing the issue (the website is not hosted in China).
Some users are using Firefox, and it returns the error below; in English it is "Secure Connection Failed".
Having checked several posts, including the Firefox documentation, there should be an error code on screen such as ssl_error_no_cypher_overlap, but there is nothing.
[screenshot: Firefox error page]
Other users are using Chrome-based browsers, which return:
[screenshot: Chrome error page]
Additionally, I checked the process logs for these user reports, and most of them do not finish all the code; in other words, out of the 100 lines of code, some runs just stopped at line 50.
The website has TLS 1.2 enabled, and the HTTP/2 protocol (h2) is in use when I check via the Chrome Network tab.
I'm wondering whether it is possible that the client browser shuts down the connection for some reason and that this ends with the result I see (execution stopped in the middle of the code flow). In my opinion, once a request is posted to the server, the process should finish the entire flow no matter what the client does.
Any ideas or thoughts will be appreciated!
I was just dealing with that exact situation.
From what I read in various posts on the HTTP2_PROTOCOL_ERROR, I think what happens is that the response is started, but a code problem prevents the server from completing it. The incomplete response gives the protocol error in Chrome, and, because it's over TLS, Firefox sees it as a security error. (I'd share links, but I've already closed all those windows - sorry.)
Somehow my code was preventing the server from completing the response without causing an exception.
I was able to track down the offending code by commenting out the body of every code-behind procedure on the page and then bringing them back one at a time.
Good luck to you!
I can't give you a concrete example, but in my case, there was no problem on the application side.
Has your in-house infrastructure engineer recently added any settings?
For example, have you added WAF settings? You may want to check.
I have a PHP/MySQL/JS-jQuery based web site that records finish times for racers, then sends each time back to the server. The server inserts the finish time in the DB, calculates the finish place based on a handicapping formula, stores that, and sends the finish place back to the web page, where it is updated on the screen.
It uses jQuery AJAX calls, so the page doesn't get reloaded at all.
Everything works fine if the data connection is good.
If the data connection was bad, my first version of this page would put up a message saying the connection was bad.
Now I am trying to make it a bit smarter, so I have started with the HTML5 feature that tells the browser whether it is online or offline (I realize this may not be the best way yet, but it works for concept testing).
When a new finish time is recorded (or updated) and we are offline, the JS just adds a notSent class to the finish time's tag. The finish place, and all of the finish places that would normally come from the server, are greyed out, indicating the data is no longer valid (until the page can communicate with the server again).
When the browser finds itself back online, a simple jQuery each loop over the notSent elements starts re-sending the AJAX requests, and if they all complete, it processes the returned finish place information and displays it as up to date (roughly as sketched below).
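The resend loop looks something like this (the /finish endpoint, the data attributes, and the #place- ids are placeholders for whatever the page actually uses):

    // When connectivity returns, replay every entry still marked notSent.
    window.addEventListener('online', function () {
      $('.notSent').each(function () {
        var $row = $(this);
        $.post('/finish', { racerId: $row.data('racer'), time: $row.text() })
          .done(function (result) {
            // The server recalculated the finish place: un-grey the entry.
            $row.removeClass('notSent');
            $('#place-' + $row.data('racer')).text(result.place);
          });
      });
    });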
The page also disables all external links while the browser is offline. This keeps the user from accidentally losing the data entry page by clicking a link that would give them a page-not-found error.
So my last issue is the browser's reload and close buttons: if the user clicks these while offline, they will lose the data entry screen and are out of luck until the connection comes back.
Can I disable these functions as well? A quick Stack Overflow search indicates it can be done, but most answers give the old "you really shouldn't, and if you think you need to, you should rethink your design" warning.
So, rethinking my design, I started learning about:
HTML5 local storage (decided I don't need it, since my data is already stored in an input box).
The AppCache manifest, for controlling the caching of the page so that if it is reloaded offline the browser would get the cached version. After much reading I came to the conclusion that this could work for a static page, but not for mine, where the data is updated all the time. Then I found that most browsers are deprecating it anyway.
Service workers, which seem to be the likely future for controlling offline caching, but not all browsers support them, they are pretty cumbersome to learn, and they are still very new.
Now I am stuck, leaning towards preventing browser reloads and deferring learning service workers until there is more support and there are better examples for dynamic-content pages like mine.
Bottom line: am I missing something here? Is there an easy solution?
I think the best option is to use PouchDB to sync between the client and server, and to use Background Sync to wake a service worker when you regain connectivity. If service workers are not supported in the browser, it can sync the next time the user opens the browser.
You have a similar example of deferred requests explained in the Service Worker Cookbook.
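The core of the Background Sync part is small; a minimal sketch, assuming an 'outbox' tag and a replayQueuedRequests helper that drains whatever local queue (PouchDB, IndexedDB) you keep:

    // Page: after queuing the request locally, ask for a one-off sync.
    navigator.serviceWorker.ready.then(function (reg) {
      return reg.sync.register('outbox');
    });

    // Service worker: replay the queue when connectivity returns,
    // even if the page has since been closed.
    self.addEventListener('sync', function (event) {
      if (event.tag === 'outbox') {
        event.waitUntil(replayQueuedRequests());
      }
    });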
We have a bug with IE8 (under XP) and IE9 (under Windows 7) making several GWT RPC calls. Eventually, one of the calls fires, but the server responds with a reset (RST), and the application keeps waiting for the result until a 12002 HTTP error is received after some minutes (12002 seems to mean timeout).
Some keys:
We cannot reproduce the bug in other browsers.
We have fired hundreds of the same RPC calls using prototype.js in IE8, and it never fails! So we can only reproduce it inside GWT code.
It is a random thing: sometimes it happens three seconds after the first call, other times three minutes after.
On a client with Windows Server 2003, the HTTP error is 12030 instead of 12002, and it happens immediately.
Any idea?
This is not a GWT problem. This is an AJAX problem.
In addition, I've seen it can happen in FF too, but IE with nested callbacks really aggravated it.
The link below really helped, but it did not solve the problem 100%.
Why does IE issue random XHR 408/12152 responses using jQuery post?
It suggests that the problem will be solved if you close the HTTP connection for each request on the servlet.
The problem disappeared when we moved to Server 2008/Tomcat.
With Server 2003, IIS was full of errors.
Also, this link is useful.
I'm trying to implement some push technology in an app of mine. I intend to use Node.js for that, but I don't think it is relevant to my question. What I will do is basically long-polling to the server, and as I understand the event-driven way Node.js works, I don't have to worry much about the server side of things.
My only worry is on the client side: after how much time will the browser stop waiting for the answer? It is a programming question because I need to release a response before that time has elapsed, so that the long poll is reloaded.
Side question: when the browser stops waiting, what answer does it give to the request?
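For reference, the server-side shape I have in mind is roughly this (onNextEvent is a stand-in for whatever event source I end up using, and the 25-second cutoff is an arbitrary safety margin):

    var http = require('http');

    http.createServer(function (req, res) {
      var done = false;
      function reply(payload) {
        if (done) return;        // guard against answering the poll twice
        done = true;
        res.end(JSON.stringify(payload));
      }

      // Release an empty answer before any browser limit can hit,
      // so the client immediately re-polls.
      setTimeout(function () { reply({ messages: [] }); }, 25000);

      // On a real event, answer early instead.
      onNextEvent(function (msg) { reply({ messages: [msg] }); });
    }).listen(8080);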
I have done stuff like this before, and the answer to the question is quite simple: it depends on the browser, and how the user has configured it.
In FF there is a setting somewhere in about:config that controls this (I forget what the setting is, exactly). IE's default timeout is controlled in the registry, and is documented here. I never found the answer for Chrome or Opera - I don't think it's controllable. Opera seems to give up after no data has been received for around 20 seconds, but it also seems to vary somewhat - no idea why.
I concluded that the best thing to do here is to design your architecture so the page is reloaded periodically or, if you're using AJAX, so the request is periodically cancelled and restarted (I found that 1 minute works well); a sketch of that pattern follows below. Also, keep pushing little bits of data to the browser every few seconds, as this will prevent Opera from giving up. You can simply push a javascript:void(0); event to keep the connection alive without actually doing anything on the client side.
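In jQuery terms, the restart pattern looks something like this (the /poll URL and the handleData handler are placeholders):

    function poll() {
      var xhr = $.ajax({
        url: '/poll',                                   // long-poll endpoint
        success: function (data) { handleData(data); }, // page-specific handler
        // On success, error, or abort, wait a moment and reopen the poll.
        complete: function () { setTimeout(poll, 1000); }
      });
      // Cancel after 1 minute so no browser timeout fires first.
      setTimeout(function () { xhr.abort(); }, 60000);
    }
    poll();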
To answer your side question: Nothing. The browser simply closes the TCP connection and no further data is exchanged. How the server handles this is no longer the concern of the browser.