Weird History.js Security error - history.js

I get this error in my page.
SecurityError: The operation is insecure
At first, I thought it was just a Same-Origin Policy problem. To test this, I commented out all the code that deals with History.js.
But I still get that error on that page.
It seems that I get this error simply by including jquery.history.js in the page.
Any ideas why this is the case?

I had the same problem - apparently I was missing 'www.' from the URL I was pushing. It really needs to be an exact match - otherwise it won't work. If you want to support both www.example.com and example.com, you need to write a little helper method for that.
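If it helps, here is a minimal sketch of such a helper, assuming you simply want every pushed URL to use the exact host the page was loaded from (the function name and example URL are illustrative, not part of History.js):

// Normalize a URL to the current page's scheme and host so that a
// www.example.com vs example.com mismatch can't trigger SecurityError.
function toCurrentHost(url) {
    var u = new URL(url, window.location.href); // also resolves relative URLs
    u.protocol = window.location.protocol;
    u.host = window.location.host;              // exact host (and port) match
    return u.pathname + u.search + u.hash;      // hand back a relative URL
}

History.pushState(null, document.title, toCurrentHost('http://example.com/some/page'));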

This is likely due to pushing state with a URL on a different domain. Check that you are not calling pushState or replaceState with a URL whose origin differs from the page's.
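A hedged sketch of a guard along those lines (the wrapper name is made up; it simply refuses cross-origin URLs instead of letting pushState throw):

// Only push same-origin URLs; cross-origin ones throw SecurityError.
function safePushState(state, title, url) {
    var target = new URL(url, window.location.href);
    if (target.origin !== window.location.origin) {
        console.warn('Refusing cross-origin pushState:', target.href);
        return false;
    }
    history.pushState(state, title, url);
    return true;
}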

HTTP to HTTPS issues

I have a question; I am a bit confused and don't really understand why this is happening.
I have a website which works well over HTTP. When I force a redirect to HTTPS, something breaks: even after I replace all the URLs in my code, only GET requests work. Does anybody have any idea why this is happening?
I also have an admin part of the website. I can log in to the admin, but I can't make any requests from it. When I try to POST or DELETE I receive a 401 error, even though I am logged in and have set the token correctly...
So the bottom line is:
On HTTPS, the website works and shows all the resources from the database, and I can log in to the admin, but I cannot POST or DELETE.
On HTTP, everything works.
I am in huge need of advice or ideas.
Thanks.
In my experience you cannot serve mixed content, so my first suggestion is to load all your scripts/dependencies without a hard-coded scheme, i.e. change script src="http://blahblah" to script src="//blahblah", so that you stick consistently to one serving source. That's the first thing I'd check (also look at the console logs; they often give hints as to what failed).
Secondly, I am unsure how the server handles traffic from non-HTTPS clients. Possibly there's a rule in .htaccess or some other form of redirection trying to force the call via HTTPS, so the plain HTTP request fails? These are all debugging steps; you need to troubleshoot and play process of elimination. First, though, I'd make sure everything is served from // or https://. When on HTTP I would look at the console logs for clues, but even more so I would force a redirect to use HTTPS exclusively (as most sites do now).
Check for mixed content issues first, though; this problem can have a multitude of solutions depending on what is causing it.
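To illustrate the first suggestion, a hedged before/after (the CDN URL is a placeholder):

<!-- Blocked as mixed content when the page itself is served over HTTPS: -->
<script src="http://cdn.example.com/app.js"></script>

<!-- Protocol-relative: follows whatever scheme the page was loaded with: -->
<script src="//cdn.example.com/app.js"></script>

<!-- Or simply hard-code HTTPS, the safer default nowadays: -->
<script src="https://cdn.example.com/app.js"></script>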

404 error on my homepage although I can see my site

I am at my wits end with the following problem:
My site www.sebastianthalhammer.com is available under that URL without any problems.
However, Google Search Console as well as other external third-party test tools return a 404 error.
Status report from Uptrends
It is just the main page that's affected. All the other subpages and blog content aren't affected.
I have been in contact with the server staff, but everything seems alright to them. As mentioned, the site can be reached. The site runs on WordPress, latest version.
I have no real clue where to start, as this error seems to be quite a tricky one. Does anyone here have an idea what's going on?
Sebastian
The 4xx class of status code is intended for cases in which the client seems to have erred. Except when responding to a HEAD request, the server SHOULD include a representation containing an explanation of the error situation, and whether it is a temporary or permanent condition. These status codes are applicable to any request method. User agents SHOULD display any included representation to the user.
This leaves me with two possible explanations:
Explanation 1: it's a server error.
the server wrongly returns a 404 status code
the browser thinks the response body contains details about the error and displays it - for the end user this is the actual page
Explanation 2: it's done on purpose to defeat crawlers and page watchers.
the server returns 404 on purpose - non-browser user agents won't process the result as they interpret it as error
browsers are unaffected, the end user doesn't care as long as the page is being displayed
The second one would indeed be kind of clever if you don't want your page to be indexed.
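If you want to see the page the way a crawler does, a quick hedged check from the browser console (using the URL from the question):

// A browser happily renders a 404 response body as a normal page,
// but res.status exposes the code that crawlers and Search Console see.
fetch('https://www.sebastianthalhammer.com/').then(function (res) {
    console.log('HTTP status:', res.status); // 404 would confirm explanation 1 or 2
});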
Thanks to your feedback I could think about the problem in a different way.
Ultimately, in the unholy depths of a certain plugin, I could dig out a setting that caused the error.
It was a redirection plugin that (for whatever reason) sent out a 404 status when the URL was requested.
I don't know what the purpose of something like that would be. All I know is that the setting had been at its default for quite a while, and that caused the weird situation.
Thanks, guys, for getting me on the right track.
Sebastian

Google Tag Manager transforming root URL

A client of ours currently uses an error-reporting service which logs errors thrown across the site and alerts us when a certain threshold is hit. This morning we received a large number of errors to do with the Google Tag Manager service; however, I'm unsure how or why these 404s are occurring and was wondering if anyone has seen similar behavior before.
What appears to be happening is that the Google Tag Manager iframe URL is being appended to the site's root URL, which then causes a 404.
To illustrate this more clearly, we have the following iframe code
<iframe src="//www.googletagmanager.com/ns.html?id=GTM-KJHJ"
and we are seeing the following URL returned as a 404
www.website.com//www.googletagmanager.com/ns.html?id=GTM-KJHJ
Has anyone experienced this happening? Or can anyone think of any reason why this would suddenly have started to occur?
As a side note, this has been happening repeatedly from various IPs for the last few hours.
Thanks.
Scheme-relative URLs work fine in all common browsers, so I think these errors might be caused by crawlers that do not parse such URLs correctly. Have you checked the IP addresses in your logs to see where they come from?
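A quick sketch of the difference, using the browser's own URL resolution and the values from the question:

// Correct resolution of a scheme-relative src against the page's base URL:
new URL('//www.googletagmanager.com/ns.html?id=GTM-KJHJ', 'https://www.website.com/').href
// -> 'https://www.googletagmanager.com/ns.html?id=GTM-KJHJ'

// A naive crawler that concatenates base + src instead produces the 404 seen in the logs:
// 'www.website.com//www.googletagmanager.com/ns.html?id=GTM-KJHJ'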
Independent of that: if you need to get rid of these 404 errors, adding the protocol should do the trick. Change the GTM code so that both URLs start with an explicit protocol (http:// or https://) and the 404 errors should go away.
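Applied to the iframe from the question, that would look something like this (the attributes besides src are the standard GTM noscript markup, shown here from memory):

<iframe src="https://www.googletagmanager.com/ns.html?id=GTM-KJHJ"
        height="0" width="0" style="display:none;visibility:hidden"></iframe>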

AJAX Callbacks Allowing HTML

I have a comment form on my website and I would like to stop any HTML from being posted through it. I was under the impression that ASP.NET automatically stops any HTML from being submitted by throwing a "potentially dangerous request" exception, but it's allowing HTML in this case.
All of the settings that relate to validation have been left at their defaults, so it should be running with requestValidationMode="4.0".
Anyone know what can cause this? Does it have anything to do with the fact that I am using AJAX callbacks?
Edit: I have gathered some more details:
Validation is correctly working in one sub-folder in my application, but it isn't working in any of the others. I looked into my web.config and this is the only setting I have put regarding page validation:
<pages enableViewStateMac="true" validateRequest="true">
Why is it working in one subfolder but not in the others? Does it have anything to do with the fact that this subfolder has a web.config entry regarding authentication?
Edit: Regular postbacks are being validated, just not callbacks.
Edit again: I was playing around with Fiddler and noticed that one of the callbacks was blocked by the server. Here is the plain-text version of the blocked request:
__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUKLTI4MDgzMDI5NA9kFgJmD2QWBAIBDxYCHgdwcm9maWxlBSFodHRwOi8vd3d3LnczLm9yZy8yMDA1LzEwL3Byb2ZpbGVkAgMPZBYOZg8WAh4Fc3R5bGUFkAFiYWNrZ3JvdW5kLXBvc2l0aW9uOnJpZ2h0IGJvdHRvbTtiYWNrZ3JvdW5kLXJlcGVhdDpOby1SZXBlYXQ7YmFja2dyb3VuZC1pbWFnZTp1cmwoaHR0cDovL2xvY2FsaG9zdDo4MS9JbWFnZXMvTGF5b3V0LUltYWdlcy9UaXRsZUJhY2tncm91bmQucG5nKTtkAgIPFgIeCWlubmVyaHRtbAWlAjxzY3JpcHQgdHlwZT0idGV4dC9qYXZhc2NyaXB0Ij48IS0tCmdvb2dsZV9hZF9jbGllbnQgPSAiY2EtcHViLTk2MTM2OTA0OTA1Mjg4MTQiOwovKiBCT0lHIFRvcCAqLwpnb29nbGVfYWRfc2xvdCA9ICI0MjcwOTc3NjY2IjsKZ29vZ2xlX2FkX3dpZHRoID0gNzI4Owpnb29nbGVfYWRfaGVpZ2h0ID0gMTU7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIDDxYCHwIFqgI8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI%2BPCEtLQpnb29nbGVfYWRfY2xpZW50ID0gImNhLXB1Yi05NjEzNjkwNDkwNTI4ODE0IjsKLyogQk9JRyBTaWRlYmFyICovCmdvb2dsZV9hZF9zbG90ID0gIjIyMDAxMDYxMTAiOwpnb29nbGVfYWRfd2lkdGggPSAxNjA7Cmdvb2dsZV9hZF9oZWlnaHQgPSA2MDA7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIEDw8WAh4HVmlzaWJsZWdkZAIFDxYCHwIFEjxwPkxpbmsgMSB0ZXh0PC9wPmQCBg9kFgICAw8WAh8DaGQCBw8WAh4EVGV4dAU%2BPHNwYW4gc3R5bGU9Im1hcmdpbi1yaWdodDogNXB4OyI%2BwqkgQmluZGluZ09mSXNhYWNHdWlkZTwvc3Bhbj5kZKMbP1fMlxWhgVw8zpEPBPGzlw5j&=sdfdsgd%40sfds.com&=%3Cp%3Efghfghfg%3C%2Fp%3E&=%3Cp%3Efghfghfg%3C%2Fp%3E&=%3Cp%3Efghfghfg%3C%2Fp%3E&__CALLBACKID=__Page&__CALLBACKPARAM=sdfdsgd%40sfds.com--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E--%7C%7C--%3Cp%3Efghfghfg%3C%2Fp%3E
And here is the plain-text version of a typical request that isn't blocked:
__EVENTTARGET=&__EVENTARGUMENT=&__VIEWSTATE=%2FwEPDwUKLTI4MDgzMDI5NA9kFgJmD2QWBAIBDxYCHgdwcm9maWxlBSFodHRwOi8vd3d3LnczLm9yZy8yMDA1LzEwL3Byb2ZpbGVkAgMPZBYOZg8WAh4Fc3R5bGUFkAFiYWNrZ3JvdW5kLXBvc2l0aW9uOnJpZ2h0IGJvdHRvbTtiYWNrZ3JvdW5kLXJlcGVhdDpOby1SZXBlYXQ7YmFja2dyb3VuZC1pbWFnZTp1cmwoaHR0cDovL2xvY2FsaG9zdDo4MS9JbWFnZXMvTGF5b3V0LUltYWdlcy9UaXRsZUJhY2tncm91bmQucG5nKTtkAgIPFgIeCWlubmVyaHRtbAWlAjxzY3JpcHQgdHlwZT0idGV4dC9qYXZhc2NyaXB0Ij48IS0tCmdvb2dsZV9hZF9jbGllbnQgPSAiY2EtcHViLTk2MTM2OTA0OTA1Mjg4MTQiOwovKiBCT0lHIFRvcCAqLwpnb29nbGVfYWRfc2xvdCA9ICI0MjcwOTc3NjY2IjsKZ29vZ2xlX2FkX3dpZHRoID0gNzI4Owpnb29nbGVfYWRfaGVpZ2h0ID0gMTU7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIDDxYCHwIFqgI8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCI%2BPCEtLQpnb29nbGVfYWRfY2xpZW50ID0gImNhLXB1Yi05NjEzNjkwNDkwNTI4ODE0IjsKLyogQk9JRyBTaWRlYmFyICovCmdvb2dsZV9hZF9zbG90ID0gIjIyMDAxMDYxMTAiOwpnb29nbGVfYWRfd2lkdGggPSAxNjA7Cmdvb2dsZV9hZF9oZWlnaHQgPSA2MDA7Ci8vLS0%2BCjwvc2NyaXB0Pgo8c2NyaXB0IHR5cGU9InRleHQvamF2YXNjcmlwdCIKc3JjPSJodHRwOi8vcGFnZWFkMi5nb29nbGVzeW5kaWNhdGlvbi5jb20vcGFnZWFkL3Nob3dfYWRzLmpzIj4KPC9zY3JpcHQ%2BZAIEDw8WAh4HVmlzaWJsZWdkZAIFDxYCHwIFEjxwPkxpbmsgNSB0ZXh0PC9wPmQCBg9kFgICAw8WAh8DaGQCBw8WAh4EVGV4dAU%2BPHNwYW4gc3R5bGU9Im1hcmdpbi1yaWdodDogNXB4OyI%2BwqkgQmluZGluZ09mSXNhYWNHdWlkZTwvc3Bhbj5kZCu6t45MzsFLBRWYDAvPXYbIXKqE&=&=&=&=&__CALLBACKID=__Page&__CALLBACKPARAM=cfdsdg%40adfds.com--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E--%7C%7C--%3Cp%3Esdfdgdfg%3C%2Fp%3E
I don't know why the requests are different; nothing changed on the page. I noticed that the first one seems to have the parameter values split out into the individual form fields while the second one doesn't. Is this the issue?
I just checked the callbacks being sent from the sub-folder and they are all split into parameters just like the first request. I guess this is the problem... but why is it happening?
I made the inputs runat=server and the request changed a bit, but the values are still not being assigned.
Here is what I found:
__CALLBACKPARAM is not actually validated by the server; only the regular form parameters (as seen in the first request above) are. With this in mind, I validated the request myself using the following regex: <\/*(p|div|span|a|script|br|b|i|u|h1|h2|h3|ol|ul|li)\s*.*>. You can find more online; hope this helps.
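For illustration, a hedged sketch of that manual check, written here in JavaScript (the original applies the same regex server-side before processing the callback; the function name is made up):

// Reject any value that contains one of the listed HTML tags.
var htmlTagPattern = /<\/*(p|div|span|a|script|br|b|i|u|h1|h2|h3|ol|ul|li)\s*.*>/i;

function containsHtml(value) {
    return htmlTagPattern.test(value);
}

containsHtml('<p>fghfghfg</p>');   // true  -> block the callback
containsHtml('plain comment');     // false -> safe to process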

using customErrors for vanity URLs / asp.net url redirection

So, from here...
In ASP.NET, you have a choice about how to respond to that - it's in the web.config as CustomErrors. Turn that on, then redirect to a fancy 404 page (maybe you already do). The fancy 404 page, then, could be checking the requested querystring (which gets passed over to the custom error page as yet another querystring) to see if it's a valid redirect, lives in your database, etc. Just do a Response.Redirect() from there.
Then schooner writes:
Thanks, we do have a 404 now, but we would prefer this not to be detected as a 404 in the process. We would like to handle it directly and separately if possible.
...and I'd like to know just how bad a practice this is. I don't expect to put my "pretty" URLs on the internet (just on business cards), and I have a sample of 404-redirecting-to-a-helpful-site code working, but I don't want to get to production and have an issue with a browser that takes the initial 404 too seriously. Can anyone help me understand more about why I wouldn't want to use customErrors / 404 to flow users to the page they actually wanted?
The main problem with using customErrors as your 404 handler is that every time customErrors picks up an errored request, rather than returning a 404 to your browser to signal a bad request, it instead returns a 302, which indicates that the page has been relocated to whatever your customErrors page is. This isn't bad for most users, because they don't know or even notice the difference; the problem comes from the fact that web crawlers DO know the difference, and the status code they receive directly affects how their indexing works.
Consider the scenario where you have a page at http://mysite.com/MyAwesomePageAboutStuff.aspx for some period of time and then one day you decide you no longer need it and delete the file. If Google or some other crawler has already indexed that URL and goes back to it after you delete it, the crawler will get a 302 status code instead of a 404 error, and because of this status code the crawler will update the page's URL to point to your error page rather than deleting the non-existent link. Now, whenever someone finds that URL by way of a search engine, they'll end up at your error page.
It's not really a huge issue, but you can definitely see the headaches this can create for your users in the long run.
Look here for some corroborating data.
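For completeness, a hedged sketch of the usual workaround (from memory; redirectMode="ResponseRewrite" requires .NET 3.5 SP1 or later, and the page name is a placeholder). The friendly page is served in place without a 302:

<customErrors mode="On" redirectMode="ResponseRewrite" defaultRedirect="~/NotFound.aspx">
    <error statusCode="404" redirect="~/NotFound.aspx" />
</customErrors>

Then, in NotFound.aspx itself, set Response.StatusCode = 404 so crawlers still receive the true status while users see the helpful page.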
I created a vanity URL system using the 404 handler. There's no need for a 302 on my side, as the 404 page dynamically loads the content and returns that. I am fully able to handle any and all POST / GET and SERVER data.
Works great. If you are interested TarantulaHawk is up on SourceForge.
