I get HTTP 500 and HTML content, what's wrong?

When I visit this page (the same happens with lots of articles on this website): http://thereasonmag.com/9231-2/
I get an HTTP 500 error (visible in the Chrome Dev Tools) AND the article.
Well, I'm a bit lost with this. Do you know why it is designed like this?
That's a problem for my crawler, which is designed to avoid processing HTTP 5xx error responses.

I would say that this can hardly be called "designed"; it is more likely that somebody has an error in their backend code or logic. Actually, this is the first time I have seen anything like this, and I can only think of a workaround for you in this case.
Because this response has a 500 status AND a correct HTML body, you can make your code skip only those 5xx responses WITHOUT a correct HTML body. How do you determine whether the HTML is correct? Guessing is pretty risky. You could inspect their HTML and find some global variables or some comment tags/classes that would not be present if a real error page were returned.
Important: I understand (and I am sure you do too) that my suggestion is an absolutely crazy workaround just to make your code work. What I would do in your place is write to those guys and ask them to fix their backend. It seems the bottom of the page is the only place with an email address.
Try writing to them; otherwise you will definitely face a case where you fail to meet the criteria of if (res.errorCode === 500 && res.body.anyPossiblePredicateYouMayThinkToCheckRightHTMLBody) { // show the post on your page }
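For illustration, here is a minimal sketch of that workaround in TypeScript. The marker string is a made-up assumption; you would have to inspect the site's HTML and pick something that only appears on correctly rendered articles:

    // Treat a 500 response as usable only if its body contains a marker
    // that genuine error pages lack.
    const ARTICLE_MARKER = '<article class="post"'; // hypothetical marker

    async function fetchArticle(url: string): Promise<string | null> {
      const res = await fetch(url);
      const body = await res.text();

      if (res.ok) {
        return body; // normal 2xx response
      }

      // 5xx response that nevertheless carries a full article body
      if (res.status >= 500 && body.includes(ARTICLE_MARKER)) {
        return body;
      }

      return null; // genuine error page: skip it
    }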

1) It looks like this is expected behavior since PHP version 5.2.4: from that release, PHP sends an HTTP 500 status on a fatal error, and any output that was already generated can still be delivered alongside it, which would explain getting a 500 status AND the article body.
2) The above URL reports X-Powered-By: PHP/5.4.45 (a WordPress app).
3) The root cause could be that one of the WordPress plugins on that site contains a bad string that PHP's eval() could not parse.
4) For more info, see the linked WordPress discussion
5) and the PHP forum thread.
Finally, I don't think there is anything you can do here.

Related

Does the IIS "Modifying HTTP Response Headers" tutorial use the wrong variable?

I'm trying to work through the IIS tutorial "Modifying HTTP Response Headers" to replace my location headers.
https://www.iis.net/learn/extensions/url-rewrite-module/modifying-http-response-headers
I am quite new to IIS; however, it seems to me that they start by creating a server variable called "ORIGINAL_HOST", but later in the tutorial start using "ORIGINAL_URL".
So is something funky happening that I don't know about, or should "ORIGINAL_URL" really be "ORIGINAL_HOST"?
Thanks!
It appears to me that they did mess up and that it should be "ORIGINAL_HOST" instead of "ORIGINAL_URL". If you look at the images shown after they start referring to it as "ORIGINAL_URL", you will see that they say "ORIGINAL_HOST", with the pattern they told you to add for "ORIGINAL_URL".
I don't have enough rep points to comment on your question, or else I would have done so.
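For reference, here is a rough sketch of what the tutorial's setup looks like with the variable named consistently as "ORIGINAL_HOST". The rule names and patterns are illustrative, not the tutorial's exact ones, and ORIGINAL_HOST must also be added to the URL Rewrite module's list of allowed server variables:

    <system.webServer>
      <rewrite>
        <rules>
          <!-- Inbound rule: capture the original host before it is rewritten -->
          <rule name="Capture original host">
            <match url=".*" />
            <serverVariables>
              <set name="ORIGINAL_HOST" value="{HTTP_HOST}" />
            </serverVariables>
            <action type="None" />
          </rule>
        </rules>
        <outboundRules>
          <!-- Outbound rule: rewrite the Location response header
               using the same ORIGINAL_HOST variable -->
          <rule name="Rewrite Location header">
            <match serverVariable="RESPONSE_Location" pattern="^http://[^/]+/(.*)" />
            <conditions>
              <add input="{ORIGINAL_HOST}" pattern=".+" />
            </conditions>
            <action type="Rewrite" value="http://{ORIGINAL_HOST}/{R:1}" />
          </rule>
        </outboundRules>
      </rewrite>
    </system.webServer>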

LinkedIn SlideShare API "get_user_leads" 500 Internal Server Error and 410 Gone Error on apiexplorer.slideshare.net

Looking for help from a LinkedIn SlideShare engineer on the SlideShare API here. I am very frustrated that I was told to use Stack Exchange after being kicked all over the place by them, and now I can't post enough detail (personal account info would be needed, and Stack Exchange limits me to 2 links in this message).
Anyhow, I'm trying to install the SlideShare-Marketo Connector (http://launchpoint.marketo.com/assets/Uploads/SlideShare-MarketoConnector-Guide.pdf) onto an ancillary server. I've uploaded the PHP files just fine.
The expected output from my page should be "X leads synced from SlideShare." (where "X" is a number), but instead I get a blank page. I added some echo statements to see if I could figure out the last spot the code was executing. I found that it gets hung up in SSUtil.php at this line in the function "get_userLeads":
$data=$this->LeadXMLtoArray($this->get_data("get_user_leads","&username=$user&password=$pass&begin=$begin"));
From what I can tell though the issue isn’t really with this line but when the get_data function tries to get the data at this line:
$res=file_get_contents($this->apiurl.$call."?api_key=$this->key&ts=$ts&hash=$hash".$params);
I echoed the URL to the browser to see what it was looking for:
http://www.slideshare.net/api/2/get_user_leads?api_key=XXXXXXXX&ts=XXXXXXXX&hash=XXXXXXXX&username=XXXXXXXX&password=XXXXXXXX&begin=201603311600
Obviously I can't make this link clickable here without giving away a bunch of account information, but when I visit a real version of the link I get a 500 Internal Server Error.
I used apiexplorer.slideshare.net (though it seems SlideShare has taken it down in the last day), and the URL it uses looks the same as what I've got above but gives a slightly different result: a 410 Gone error. Any idea what's going wrong?
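No answer was posted here, but for anyone debugging a call like this, reproducing the request outside the PHP connector makes it easier to see the raw status code and any error message in the body. A minimal sketch in TypeScript, with every credential left as a placeholder (the parameter names are taken from the URL above):

    // Reproduce the connector's request directly and inspect the result.
    // A 500 suggests the server choked on the request; a 410 means the
    // endpoint itself has been retired.
    const API_URL = 'http://www.slideshare.net/api/2/get_user_leads';

    async function probe(params: Record<string, string>): Promise<void> {
      const qs = new URLSearchParams(params).toString();
      const res = await fetch(`${API_URL}?${qs}`);
      console.log(res.status, res.statusText);
      console.log(await res.text()); // the body often carries an error message
    }

    void probe({
      api_key: 'XXXXXXXX',
      ts: 'XXXXXXXX',   // timestamp the connector uses when computing the hash
      hash: 'XXXXXXXX',
      username: 'XXXXXXXX',
      password: 'XXXXXXXX',
      begin: '201603311600',
    });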

Determine if requester is an Ajax call and/or is expecting JSON (or another content type)

I have solved a problem with a solution I found here on SO, but I am curious whether another idea I had is as bad as I think it might be.
I am debugging a custom security Attribute we have on/in several of our controllers. The Attribute currently redirects unauthorized users using a RedirectResult. This works fine except when calling the methods with Ajax. In those cases, the error returned to our JS consists of a text string of all the HTML of our error page (the one we redirect to) as well as the HTTP code and text 200/OK. I have solved this issue using the "IsAjaxRequest" method described in the answer to this question. Now I am perfectly able to respond differently to Ajax calls.
Out of curiosity, however, I would like to know what pitfalls might exist if I were to instead have solved the issue by doing the following. To me it seems like a bad idea, but I can't quite figure out why...
The ActionExecutingContext ("filterContext") has an HttpContext, which has a Request, which in turn has an AcceptTypes string collection. I notice that on my Ajax calls, which expect JSON, the value of filterContext.HttpContext.Request.AcceptTypes[0] is "application/json." I am wondering what might go wrong if I were to check this string against one or more expected content types and respond to them accordingly. Would this work, or is it asking for disaster?
I would say it works perfectly, and I have been using that approach for years.
The whole point of request headers is to let the client tell the server what it accepts and expects.
I suggest you read more here about Web API and how it uses exactly that technique.
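To make that concrete, here is a sketch of the client side of the exchange. The Accept header tells the server what representation the caller expects, and X-Requested-With is the conventional Ajax marker that MVC's IsAjaxRequest() looks for (jQuery adds it automatically; plain fetch does not, so it is set explicitly here):

    // Client-side content negotiation: ask for JSON and mark the call as Ajax.
    async function getJson(url: string): Promise<unknown> {
      const res = await fetch(url, {
        headers: {
          'Accept': 'application/json',
          'X-Requested-With': 'XMLHttpRequest', // what IsAjaxRequest() checks for
        },
      });
      if (!res.ok) {
        throw new Error(`Request failed: ${res.status}`);
      }
      return res.json();
    }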

Getting requests containing [PLM=0][N]

I recently noticed that I've been getting some strange-looking requests which, after decoding, look like
target_url?id=17 [PLM=0][N] GET target_url?id=17 [0,14770,13801] -> [N] POST target_url?id=17 [R=302][8880,0,522]
I know there is an older question concerning this subject, but it has no actual answer, so I posted my own in case some newer member knows what's going on.
The requests I mentioned do not seem to have any effect, as they only cause the error page to be displayed. I am, however, curious to know what they might have been capable of.
target_url only refers to pages where someone posts to the forum. The website uses ASP.NET. The numbers contained in brackets (0,14770,13801 etc) seem to be the same in every request made so far.
Any ideas?
I have seen more or less similar things on several websites, and I think it is a way of getting past the captcha in the form you have on the page id=17. My guess would be that:
GET target_url?id=17 [0,14770,13801] = get the captcha at the position [0,14770,13801] on the page, where the captcha image or computation or whatever has been detected;
POST target_url?id=17 [R=302][8880,0,522] = still on the same page, put it back in the field at the position [8880,0,522]. [R=302] is possibly error-redirect handling in case it is wrong.

What is the difference between GET and POST in the context of creating an AJAX request?

I have an AJAX request that sends a GET: 'getPendingList'. This request should return a JSON string listing pending requests that need to be approved. I'm a little confused about whether I should be using GET or POST here.
From this website:
GET requests can be cached
GET requests can remain in the browser history
GET requests can be bookmarked
GET requests can be distributed & shared
GET requests can be hacked (ask Jakob!)
So I'm thinking: I don't want the results of this GET to be cached because the pending list could change. On the other hand, using POST doesn't seem to make much sense either.
How should I think about GET and POST? I've been told that GET is the same as a 'read'; it doesn't (or shouldn't) change anything on the server side. This makes sense. What doesn't make sense is the caching part; it wouldn't work for me if someone else cached my GET request because I'm expecting the data to change.
Yahoo's best practices might be worth reading over. They recommend using GET primarily for retrieving information and POST for updating information. In a separate item, they also recommend making AJAX requests cacheable where it makes sense. Check it out; it's a good read.
In short, GET requests should be idempotent: repeating the same GET should not change anything on the server. POST requests are not.
If you are altering state, use POST - otherwise use GET.
And don't forget: when talking about caching with GET/POST, that is browser caching.
Nothing stops you from caching the data server-side.
Also, in general, JSON calls should be POST (here's why).
So, after some IRC'ing, it looks like the best way to do this is to use GET (in this particular instance), but to prevent caching. There are two ways to do this:
1) Append a random string to your GET request.
This seems like a hacky way to do this but it sounds like it might be the only solution for IE: Prevent browser caching of jQuery AJAX call result.
2) In your response from the server, set the headers to no-cache.
It's not clear what the definitive behavior is on this. Some folks (see the previous link) claim that IE doesn't respect the no-cache directives. Other folks seem to think that this works: Internet Explorer 7 Ajax links only load once.
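Here is a sketch of both techniques. The endpoint name is illustrative; the server-side headers are shown as a comment because how you set them depends on your framework:

    // Technique 1: cache-busting query parameter. Every request gets a
    // unique URL, so the browser (including old IE) cannot serve a cached
    // copy. jQuery's { cache: false } option does exactly this.
    async function getPendingList(): Promise<unknown> {
      const url = `/getPendingList?_=${Date.now()}`; // illustrative endpoint
      const res = await fetch(url, {
        headers: { Accept: 'application/json' },
      });
      return res.json();
    }

    // Technique 2 lives on the server: send response headers such as
    //   Cache-Control: no-cache, no-store, must-revalidate
    //   Pragma: no-cache
    //   Expires: 0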
