Where can I find the default timeout settings for all browsers?

I'm looking for some kind of documentation that specifies how much time each browser (IE6/IE7/FF2/FF3, etc.) will wait on a request before it just gives up and times out.
I haven't had any luck finding this.
Any pointers?

I managed to find network.http.connect.timeout for much older versions of Mozilla:
This preference was one of several added to allow low-level tweaking of the HTTP networking code. After a portion of the same code was significantly rewritten in 2001, the preference ceased to have any effect (as noted in all.js as early as September 2001).
Currently, the timeout is determined by the system-level connection establishment timeout. Adding a way to configure this value is considered low-priority.
It would seem that network.http.connect.timeout hasn't done anything for some time.
I also saw references to network.http.request.timeout, so I did a Google search. The results include many links to people recommending that others add it to about:config, apparently in the mistaken belief that it actually does something; the same search turns up this about:config entries article:
Pref removed (unused). Previously: HTTP-specific network timeout. Default value is 120.
The same page includes additional information about network.http.connect.timeout:
Pref removed (unused). Previously: determines how long to wait for a response until registering a timeout. Default value is 30.
Disclaimer: The information on the MozillaZine Knowledge Base may be incorrect, incomplete or out-of-date.

After the last Firefox update we had the same session timeout issue, and the following setting helped to resolve it.
We can control it with the network.http.response.timeout parameter.
Open Firefox, type 'about:config' in the address bar and press Enter.
Click on the "I'll be careful, I promise!" button.
Type 'timeout' in the search box and the network.http.response.timeout parameter will be displayed.
Double-click on the network.http.response.timeout parameter and enter the time value, in seconds, after which a request should be allowed to time out.
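If you prefer to set this outside the UI, the same preference can go in a user.js file in your Firefox profile directory. A minimal sketch; the 300-second value is only an example, not a recommendation:

// user.js in the Firefox profile directory
// Hypothetical example: give responses up to 300 seconds before timing out
user_pref("network.http.response.timeout", 300);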

Firstly, I don't think there is just one solution to your problem...
As you know, each browser is vastly different.
But let's see if we can get any closer to the answer you need...
I think IE might be easy...
Check this link
http://support.microsoft.com/kb/181050
For Firefox try this:
Open Firefox, and in the address bar, type "about:config" (without quotes). From there, scroll down to network.http.keep-alive and make sure it is set to "true". If it is not, double-click it and it will go from false to true. Now go one below that, to network.http.keep-alive.timeout, and change that number by double-clicking it. If you put in, say, 500 there, you should be good. Let us know if this helps at all.
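The same settings can be expressed as a user.js sketch (using the example value of 500 seconds from above, not a recommendation):

// user.js in the Firefox profile directory
user_pref("network.http.keep-alive", true);          // usually true by default
user_pref("network.http.keep-alive.timeout", 500);   // seconds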

For Google Chrome (Tested on ver. 62)
I was trying to keep a socket connection alive from Google Chrome's fetch API to a remote Express server, and found that the request headers have to match Node.js's native net.Socket connection settings.
I set the headers object on my client-side script with the following options:
/* ----- */
const head = new Headers();
head.append("Connection", "keep-alive");
head.append("Keep-Alive", `timeout=${1 * 60 * 5}`); // in seconds, not milliseconds
/* apply more definitions to the header */
fetch(url, {
    method: 'OPTIONS',
    credentials: "include",
    body: JSON.stringify(data),
    mode: 'cors',
    headers: head, // could be an object literal too
    cache: 'default'
})
.then(response => {
    // ....
})
.catch(err => { /* ... */ });
And on my Express server I set up my router as follows:
router.head('absolute or regex', (request, response, next) => {
    request.setTimeout(1000 * 60 * 5, () => { // five minutes, in milliseconds
        console.info("socket timed out");
    });
    console.info("Proceeding down the middleware chain link...\n\n");
    next();
});
/* Keep the socket alive by enabling it on the server, with an optional
   delay between the last data packet received and the first keepalive probe
*/
server.on('connection', (socket) => socket.setKeepAlive(true, 10));
WARNING
Please use common sense and make sure the users you're keeping the socket connection open for are validated and serialized. This works for Firefox as well, but it's really vulnerable if you keep the TCP connection open for longer than 5 minutes.
I'm not sure how some of the lesser-known browsers operate, but I'll append the Microsoft browser details to this answer as well.

Related

Got the "My test app isn't responding right now. Try again soon." error message even after a clean start from the Google Assistant Simulator

I am still quite new to this topic, so sorry if I didn't provide enough information.
For the first time, I copied everything from https://developers.google.com/actions/dialogflow/first-app to learn about it, which worked great.
After that, I tried to create my own one, and at the end I got the message "My test app isn't responding right now. Try again soon." from https://console.actions.google.com/project/[[PROJECT-ID]]/simulator/.
Therefore, I tried to delete everything and make a completely new start, including all the projects on https://console.actions.google.com/ and https://console.dialogflow.com.
I then copied the exact same thing from https://developers.google.com/actions/dialogflow/first-app again, but this time I still got "My test app isn't responding right now. Try again soon." from https://console.actions.google.com/project/[[PROJECT-ID]]/simulator/.
I looked at the Firebase log: no errors there.
I tried the web demo from the integration tab: everything worked as expected (which means the server-side code and the connection have no problem), and Firebase also logged the request.
I tried a different browser (Chrome -> Firefox): still not working.
Here is the response code from the Google Assistant Simulator (it's basically nothing):
{
    "audioResponse": "//NExAARqQ...",
    "conversationToken": "GidzaW11bG...",
    "response": "My test app isn't responding right now. Try again soon.",
    "visualResponse": {
        "visualElements": []
    }
}
And here is the debug message (yes, there's nothing in there, so I'm stuck):
{
    "agentToAssistantDebug": {},
    "assistantToAgentDebug": {}
}
Any help would be appreciated. Thanks!
In the Actions Console:
Go to Develop -> Invocation
Set a display name (e.g. Hello World) and click Save
Go to Test and type "Talk to Hello World"
That fixed the issue for me.
Make sure your Actions on Google project has a name.
I spent almost 2 days scratching my head on this. Just go to the Activity controls of the relevant Google account (the account you are using for the simulator) and turn on all those switches (you may leave out the YouTube-related ones).
And... voila, it works!
Usually, these are turned off for non-personal accounts.
I faced the same issue when I tried to change the language of the app to a locale.
Try the following:
Check that the welcome intent and fallback intents have responses and training phrases
Check that all contexts are mapped
Disable and re-enable testing
At least in my case, I had added 'Suggestions' to an ending scene.
The error shows up in the log on the right side of the 'Test' page.
The fix is to remove 'Suggestions' from the ending scene.
I had the exact same issue, and after struggling for hours I found the stupid error on my side: in my Dialogflow agent settings, I had accidentally turned on the V2 API, so my Firebase function kept complaining about a null intent. Hope this helps.

Google reCAPTCHA response success: false, no error codes

UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of a reCAPTCHA response, I have for a while now been seeing cases where the reCAPTCHA response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
The validation has timed out and is no longer valid.
The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
    string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
    string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];
    using (var client = new HttpClient())
    {
        var parameters = new Dictionary<string, string>
        {
            { "secret", secret },
            { "response", captchaResponse },
            { "remoteip", endUserIp },
        };
        var content = new FormUrlEncodedContent(parameters);
        var response = client.PostAsync(apiUrl, content).Result;
        var responseContent = response.Content.ReadAsStringAsync().Result;
        GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);
        if (googleCaptchaResponse.Success)
        {
            _dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
            return true;
        }
        else
        {
            // Actual code omitted
            // Try to determine the cause of failure:
            // - Look at the googleCaptchaResponse.ErrorCodes array (this has been empty in all 28 cases of "success: false")
            // - Measure the time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
            // - Check the reCAPTCHA response against a local database of previously used responses to detect double submissions
            return false;
        }
    }
}
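For illustration, here is a minimal Node.js sketch of the same verification flow with the failure analysis written out (this is my sketch, not the poster's omitted code; it assumes Node 18+ for the global fetch, and usedResponses is a hypothetical in-memory stand-in for the database of previously seen responses):

// siteverify check with the two failure scenarios from above made explicit.
const usedResponses = new Set(); // hypothetical stand-in for a database table

async function isVerifiedUser(captchaResponse, endUserIp, secret) {
    const params = new URLSearchParams({
        secret: secret,
        response: captchaResponse,
        remoteip: endUserIp,
    });
    const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
        method: 'POST',
        body: params, // sent as application/x-www-form-urlencoded
    });
    const body = await res.json();
    if (body.success) {
        usedResponses.add(captchaResponse);
        return true;
    }
    // Scenario 1: how old is the challenge? (challenge_ts is ISO 8601 UTC;
    // it may be absent on failure, in which case ageMinutes is NaN)
    const ageMinutes = (Date.now() - Date.parse(body.challenge_ts)) / 60000;
    // Scenario 2: was this response value already used once?
    const isDuplicate = usedResponses.has(captchaResponse);
    console.log('error-codes:', body['error-codes'], 'age:', ageMinutes, 'duplicate:', isDuplicate);
    return false;
}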
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate problem if your captcha is validated twice.
Save logs in a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret=' . $secret . '&response=' . $_POST['g-recaptcha-response']);
file_put_contents('logfile', $verifyResponse . PHP_EOL, FILE_APPEND);
Now read the content of the logfile created above and check whether any captcha was verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal, as it increases requests to your server and will blow away any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove: if the user's not moving the mouse, it's very likely they aren't on the page. See the sketch after this list.
3. Handle it server-side by catching this scenario. I actually prefer this method the most, as it's the most fluid and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then all they have to do is verify and resubmit.
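A minimal sketch of option 2 (my illustration, not code from the answer; the 25-minute threshold and the alert-based warning are placeholders you would replace with your own values and dialog):

// Warn and refresh after the page has sat idle too long, so the
// reCAPTCHA token does not silently expire. Mouse movement resets the timer.
const IDLE_LIMIT_MS = 25 * 60 * 1000; // hypothetical threshold
let idleTimer;

function resetIdleTimer() {
    clearTimeout(idleTimer);
    idleTimer = setTimeout(() => {
        alert('This page has been idle too long and will refresh so the CAPTCHA stays valid.');
        location.reload();
    }, IDLE_LIMIT_MS);
}

document.addEventListener('mousemove', resetIdleTimer);
resetIdleTimer();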
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
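For example, a minimal sketch of that disable-on-click approach (my illustration; the form id is hypothetical):

// Disable the submit button as soon as the form is submitted,
// so a second click cannot send a duplicate reCAPTCHA response.
document.querySelector('#myForm').addEventListener('submit', (event) => {
    event.target.querySelector('button[type="submit"]').disabled = true;
});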

Can't figure how phone number reveal works

I am pretty new to web scraping, and recently I have been trying to automatically scrape phone numbers from pages like this. I am not supposed to use Selenium/headless browser libraries, and I am trying to find a way to actually request the phone number using, say, a web service or any other possible solution that could give me the phone number directly, without having to go through the actual button press with Selenium.
I totally understand that it may not even be possible to automatically reveal the phone number in one shot, as it is meant not to be accessible to a nosy newbie web scraper like me; but I would still like to raise the question to get a detailed answer from an expert's point of view.
If I inspect the "Reveal" button's DOM element, it shows some tags which I have never seen before. I have two main questions which I believe could be helpful for newbies like me.
1) Given a set of unknown tags/attributes (i.e. data-q and data-reveal in the button below), how can one find out which scripts on the page are actually using them?
2) I googled the button element's tags, data-q and data-reveal; the only relevant result I could find was this, which for some reason I can't access, even if I use a proxy.
Any clue, particularly on the first question, is much appreciated.
Regards,
Below is the href-button code
Reveal
OK, there are several steps to go through before you finally get a solution.
1st step: open your own browser and go to your target page (https://www.gumtree.com/p/vans/2015-ford-transit-custom-2.2tdci-290-l1-h1/1190345514).
2nd step: (assuming you are using Chrome as your favorite browser) press Ctrl+Shift+I to open the console, and then select the 'Network' tab in the console.
3rd step: press the 'Reveal' button on that page, watch the console carefully, and catch the HTTP request which is sent immediately when you press the 'Reveal' button. You can see the request contains a long string of digits in its Query String Parameters; it is actually a timestamp.
4th step: you can also see a part named 'Request Headers' in that HTTP request; copy the values of referer, user-agent and x-gumtree-token.
5th step: construct your own request (I am a fan of Python, so I am going to show you my example code in Python):
import time
import requests
import json

headers = {
    'referer': 'please enter the value you just copied from that specific request',
    'user-agent': 'please enter the value you just copied from that specific request',
    'x-gumtree-token': 'please enter the value you just copied from that specific request'
}

# The "_=" query parameter is a cache-busting timestamp
# (milliseconds since the epoch is the usual convention for this parameter)
url = 'https://www.gumtree.com/ajax/account/seller/reveal/number/1190345514?_='
url += str(int(time.time() * 1000))

response = requests.get(url=url, headers=headers)
response_result = json.loads(response.content)
phone_number = response_result['data']

Receiving unexpected server calls

In Adobe Analytics I am trying to implement link tracking for all links found on a page using this:
$(document).on('click', 'a', function() {
    s.tl(this, 'e', 'external', null, 'navigate');
    return false;
});
I tried to test it using a page like this.
The extra calls are likely coming from how you have Adobe Analytics configured. There are a handful of config variables that will cause extra requests depending on how you set them (on their own and/or in relation to each other).
Here is a listing of Adobe Analytics variables for reference. These are the ones for you to look at:
s.trackDownloadLinks - If this is enabled, any standard links with href value ending in value(s) specified in s.linkDownloadFileTypes will trigger a request on click. Generally, this is to enable automatic tracking for links that prompt a visitor to download something (e.g. a pdf file).
s.trackExternalLinks - If this is enabled, any standard links with href NOT matched in s.linkInternalFilters OR matched with s.linkExternalFilters will trigger a request on click. Generally, this is to enable automatic tracking for links you count as visitor navigating off your site(s).
s.linkInternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkExternalFilters. Generally, this should include values that represent links you do NOT want to count as navigating off your site(s).
s.linkExternalFilters - If you have either of the above enabled, clicking on links may trigger a request, depending on values here vs. what you enabled above vs. what you have in s.linkInternalFilters. Generally, you should never set this. It's intended for edge-use-cases for people who know what they are doing and have a complex site eco-system and definitions of what counts as internal vs. external.
s.trackInlineStats - This is for clickmap/heatmap tracking. This may or may not trigger an extra request, depending on how a lot of different stars align.
In addition to these, you may already have some plugins or other custom code that triggers click tracking. For example, there are linkHandler, exitLinkTracker, and downloadLinkTracker plugins that you may have included in your code that may play a part in extra requests being triggered.
Finally, more recent versions of Adobe Analytics code may trigger multiple requests depending on how much data you are trying to send in the request (whereas older versions just truncated the request, which resulted in data loss).
In any case, the long story short here is if you are looking to roll your own custom link tracking, you should make sure the above variables/plugins are removed or otherwise disabled.
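As a minimal sketch of what that looks like in your AppMeasurement/s_code configuration (the variable names are Adobe's; whether each one should actually be off depends on your own reporting requirements):

// Turn off the automatic link tracking that can fire extra requests
// alongside custom s.tl() calls.
s.trackDownloadLinks = false; // no automatic download-link tracking
s.trackExternalLinks = false; // no automatic exit-link tracking
s.trackInlineStats = false;   // no clickmap/heatmap tracking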
But on the note of rolling your own custom link tracking... I'm getting a sense of déjà vu here, like I already made a comment about this relatively recently in another post, over this exact same code... but generally speaking, this is not a good idea:
$(document).on('click', 'a', function() {
    s.tl(this, 'e', 'external', null, 'navigate');
    return false;
});
You are wholesale implementing exit-link tracking on every single link of your page, and you are giving them all the same generic "external" label. The native exit-link reports are pretty limited and useless to begin with, so ideally you should also pop an eVar or similar with the exit URL.
But more importantly, unless literally every single link on your pages navigates the visitor off-site, this is not going to be useful to you in reports in general, and it's even going to ruin a lot of your reports.
I can't believe (or accept) that you really want to count every link on your pages as an exit link.
I assume s.tl makes an ajax call.
The click would then normally follow the link's href; if the link is allowed to be followed immediately, the ajax call is interrupted, which seems to be what you see.
You may want to change it to:
$(document).on('click', 'a', function(e) {
    e.preventDefault();
    s.tl(this, 'e', 'external', null, 'navigate');
});
I found this article when looking to see what s.tl is https://marketing.adobe.com/developer/forum/general-topic-forum/difference-between-s-t-and-s-tl-function

ShellExecute fails for local html or file URLs

Our company is migrating our help systems over to HTML5 format under Flare. We've also added topic-based access to the help systems, using Flare CSHIDs on the URI command line to access a topic directly, such as index.html#CSHID=GettingStarted to launch the GettingStarted.html help page.
Our apps are written in C++ and leverage the Win32 ShellExecute() function to spawn the default application associated with HTTP to display the help system. We've noticed that ShellExecute() works fine when no hashtag is specified, such as
ShellExecute(NULL, _T("open"), _T("c:\\Help\\index.html"), NULL, NULL, SW_SHOWNORMAL);
This call launches the default browser associated with viewing HTML pages; in this case the file:/// protocol handler kicks in, the browser launches, and you see file:///c:/Help/index.html in the address bar.
However, once you add the # information for the topic, ShellExecute() fails to open the page:
ShellExecute(NULL,_T("open"),_T("c:\\Help\\index.html#cshid=GettingStarted"),NULL,NULL,SW_SHOWNORMAL);
If the browser opens at all, you'll be directed to file:///c:/Help/index.html without the #cshid=GettingStarted topic identification.
Note that this is only a problem when the file protocol handler is engaged through ShellExecute(); if the help system lives out on the web and the http or https protocol handler is engaged, everything works great.
For our customers, some of whom are on a private LAN, we cannot always rely on Internet access, so our help systems must ship with the application.
After some back-and-forth with Microsoft's MSDN team, they reviewed the source code of the ShellExecute() call, and it was determined that yes, when processing file:/// based URLs, ShellExecute() will strip off the # and any data it finds after it before launching the default browser and passing in the HTML page to open. Microsoft's stance is that this is done deliberately, to prevent injections into the function.
The solution was to beef up the ShellExecute() call by searching the URL for a # and if one was found, then we would manually launch the default browser with the URL. Here's the pseudocode
void WebDrive_ShellExecute(LPCTSTR szURL)
{
    if (_tcschr(szURL, _T('#')))
    {
        //
        // Get the default browser from the registry, then launch it.
        //
        ::RegGetStr(HKCR, _T("HTTP\\Shell\\Open\\Command"), szBrowser);
        ::CreateProcess(NULL, szBrowser + _T(" ") + szURL, NULL, NULL, FALSE, 0, NULL, NULL, &sui, &pi);
    }
    else
        ShellExecute(NULL, _T("open"), szURL, NULL, NULL, SW_SHOWNORMAL);
}
Granted there's a bit more to the c++ code, but this general design worked for us.
I tried WebDrive's solution and it didn't really work on Windows 10.
The HTTP\Shell\Open\Command default value is set to the Internet Explorer path, regardless of my default browser setting. However, for Internet Explorer that solution DOES work.
The process to fetch the default browser path on Windows 10 is a bit different (How to determine the Windows default browser (at the top of the start menu)), but even then the solution is not guaranteed to work, depending on the browser. E.g. for me it didn't work with Edge.
To get it to work with Edge I had to add "file:///" to the URL, but that also makes the URL work with ShellExecute(). So, at least on Windows 10, all I needed to do was this:
ShellExecute(NULL,_T("open"),_T("file:///c:/Help/Default.html#cshid=1648"),NULL,NULL,NULL);
UPDATE:
The above stopped working months ago. What I eventually did was go through temporary file, as described here: https://forums.madcapsoftware.com/viewtopic.php?f=9&t=28376#p130613
Use FindExecutable() to get the default browser, and pass the full help file path, with its queries (?) and fragments (#), as the lpParameters parameter to ShellExecute(). They won't get stripped off there.
Then handle the case where the default browser is a Store app (most likely Microsoft Edge).
Pseudo C code:
if (FindExecutable(_T("c:\\Help\\index.html"), NULL, szBrowser))
{
    if (szBrowser == _T("C:\\WINDOWS\\system32\\LaunchWinApp.exe"))
    {
        // The default browser is a Windows Store app
        szBrowser = _T("shell:AppsFolder\\Microsoft.MicrosoftEdge_8wekyb3d8bbwe!MicrosoftEdge");
    }
}
else
{
    szBrowser = szURL;
    szURL = NULL;
}
ShellExecute(NULL, NULL, szBrowser, szURL, NULL, SW_SHOWNORMAL);
I solved the problem in a Qt application without using any method other than ShellExecute:
QString currentpath = QDir::currentPath();
QString url = "/help//html/index.html#current";
QString full_url = "file:///" + currentpath + url;
QByteArray full_url_arr = full_url.toLocal8Bit();
LPCSTR lp = LPCSTR(full_url_arr.constData());
// The narrow LPCSTR requires the ANSI entry point; calling ShellExecuteA
// explicitly also keeps this compiling in a Unicode build.
ShellExecuteA(NULL, "open", lp, NULL, NULL, SW_SHOWNORMAL);
