Apigee console fails with HTTP 500

I have two API proxies running. I can call both directly through a browser, but when I try to call them through the Apigee Console, they return HTTP 500.
Here is the link I am using:
http://ritwik_chatterjee-test.apigee.net/v1/yahoo-weather/forecastrss?w=2471390
Response in Apigee Console:
HTTP/1.1 500 Internal Server Error
X-APIGEE-STATUS: failure
X-APIGEE-ERROR: internal-error
Content-Length: 199
<?xml version="1.0" encoding="UTF-8"?>
<Error messageid="-4868f6ff:143f9f156c9:-7995">
<reason>An internal error has occurred. Please retry your request.</reason>
</Error>
Please help.

Worked through it and found a bug. The problem is that you have underscores in your org name (ritwik_chatterjee). I tested with a new org named test_my_underscores and got the same problem:
http://test_my_underscores-test.apigee.net/v0/weather
Returns the same 500 error as
http://ritwik_chatterjee-test.apigee.net/v1/yahoo-weather/forecastrss
Email me at michaelb#apigee.com and I'll get you an org with dashes in it.

Per RFC 952, underscores are not allowed in domain names.
While org names can contain underscores, Apigee creates the default hostnames based on the org name, so if an underscore exists, this will always fail without manual manipulation of the hostname and/or organization name.
This can cause user confusion, so I will recommend we simply update the platform to allow only characters allowed in hostnames (a text string up to 24 characters drawn from the alphabet (A-Z), digits (0-9), minus sign (-), and period (.)) upon org creation. I will add additional notes to YTD-3120 mentioned above.
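For illustration, here is a minimal sketch (a hypothetical helper, not part of Apigee's platform) that checks whether an org name would yield a syntactically valid default hostname under the RFC 952 character rules quoted above:

import re

# RFC 952 allows letters, digits, and hyphens in a hostname label
# (a label may not start or end with a hyphen); underscores are not permitted.
HOSTNAME_LABEL = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9-]*[A-Za-z0-9])?$")

def org_name_makes_valid_hostname(org_name, env="test"):
    # Check whether "<org>-<env>.apigee.net" is a syntactically valid hostname.
    hostname = f"{org_name}-{env}.apigee.net"
    return all(HOSTNAME_LABEL.match(label) for label in hostname.split("."))

print(org_name_makes_valid_hostname("ritwik_chatterjee"))   # False - the underscore breaks the hostname
print(org_name_makes_valid_hostname("ritwik-chatterjee"))   # True  - a hyphen is fine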

I think I found the issue. There seems to be a bug in the Apigee Console: if the hostname has an underscore ("_") in it, the Host header and X-Target-URI go bonkers. For now, try changing your URL to ritwik-chatterjee-test.apigee.net if possible.
I will raise a bug and post the ticket number here.

There must have been an intermittent issue. This seems to be working now.

Related

Google Cloud Storage inconsistent responses

I have a bucket set up in Google Cloud Storage, with the "object default permissions" set to grant the "User" group "allUsers" with permission "Reader".
In the bucket there are a number of files, and I have a client checking if a particular file is present by trying to access it. Most of the time we get a 404 response, but fairly often we see a 403 response for the first few tries.
The 403 response body is (with my own formatting and replacement of private info):
<?xml version='1.0' encoding='UTF-8'?>
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>Anonymous users does not have storage.objects.get access to object mybucket/myfile.gz.</Details>
</Error>
So my question is why do I sometimes get a 403 and sometimes a 404 response when trying to open a file that doesn't exist?
I know that there will be changes from 29 May 2017, but they are not in effect yet, and so it appears that either something is wrong or Google have been randomly applying the new logic early.
I have a definitive answer on this via an email from Google, so I'm giving the response for completeness.
It has been found out that there was a miscommunication between the Engineers. Originally, the changes were supposed to be scheduled on 5/22 but due to some internal delays, they decided to announce it on a later date which is 5/29. Due to this confusion, the Engineers rolled this feature out on the original date (5/22) instead of 5/29.
TL;DR: Google screwed up and rolled out breaking changes a week early.
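For completeness, here is a minimal sketch of the kind of anonymous existence check the question describes, written so it tolerates either status code; the bucket and object names are placeholders, and the URL assumes the public storage.googleapis.com endpoint:

from urllib.error import HTTPError
from urllib.request import urlopen

# Placeholder bucket/object - substitute your own.
URL = "https://storage.googleapis.com/mybucket/myfile.gz"

def object_is_publicly_readable(url):
    # Return True if an anonymous GET succeeds. Both 404 (object missing) and
    # 403 (anonymous access denied) are treated as "not available", since an
    # unauthenticated caller cannot distinguish the two once the service
    # starts answering with AccessDenied.
    try:
        with urlopen(url) as response:
            return response.getcode() == 200
    except HTTPError as err:
        if err.code in (403, 404):
            return False
        raise  # anything else (e.g. 5xx) is unexpected - surface it

print(object_is_publicly_readable(URL))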

HTTP Error 403.0 - ModSecurity Action

I'm writing code in which the URL changes based on a query string. When no values are supplied in the URL everything works fine, but as soon as I supply values it shows HTTP Error 403.0 - ModSecurity Action.
Kindly suggest a solution.
The same code works fine locally; the problem only occurs when I upload my page to the server.
I know this is an old thread, but I'm posting the answer so it can be helpful for others. ModSecurity is an open-source, cross-platform web application firewall (WAF) module.
https://modsecurity.org/about.html
So whenever you see 403 (ModSecurity Action), it means the ModSecurity firewall has blocked the request. The probable cause could be suspicious content in the posted data, a URL passed as a parameter, or embedded JavaScript.
In the above case, ModSecurity may have deemed the input a SQL injection attack and blocked it. If you look into the firewall's logs, they should give you a detailed explanation.
In my case, I was passing a URL as a query parameter in the request, hence it was returning 403.
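If you cannot change the firewall rules themselves, one thing worth trying - a sketch only, since whether it helps depends on which ModSecurity rule is firing - is to percent-encode the nested URL before putting it in the query string, so the raw scheme and slashes never appear in the request. The parameter name and host below are made up:

from urllib.parse import quote, unquote, urlencode

# Hypothetical example: passing another URL as a query parameter.
target = "https://example.com/some/path?x=1"

# Percent-encode every reserved character, including ':' and '/'.
query = urlencode({"redirect": target}, quote_via=quote, safe="")
request_url = f"https://myserver.example/handler?{query}"
print(request_url)

# The receiving page decodes the value back before using it.
print(unquote(query.split("=", 1)[1]))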

Redirect URI Mismatch Error from Google Plus sign in

While attempting to implement Google+ sign-in, I receive an error.
Upon clicking the sign-in button, I receive a redirect_uri_mismatch error stating that:
The JavaScript origin in the request:
http://70132bd6.ngrok.com did not match a registered JavaScript origin.
I have added the link (along with several others, just in case) in my developer console under origins.
How do I resolve this mismatch issue?
Additionally, why is there a prepended storagerelay:// in the redirect URI of the request details? It says: redirect_uri=storagerelay://http/70132bd6.ngrok.com?id=auth109348. Where is the extra part coming from?
Please make sure you are using the correct client_id. It is common for developers to create multiple clients and set the origins on a different client. Please double-check.

The request URL is invalid in IIS 7

Here is my URL:
http://abc.domain.com/controller/action/A74444C3A7FA858C7995CA9954CBCF1E26604634767C5575396D908E8415CF8CCC04C05F49FED0AA9D9743B69ABF232BDE9787A5222D081DA638896C0D2379A673E1747A2FFE1158F14AF098B2899D2ABEB4EA738D89369627E479796B6B2B9EA9B247CC59EF10E3A88B6A56A87F0818E2AD2A942FFA31F1C941BB7AF6FDC55FE6733353F28DFAC1827688604CBFBAB4856E6C75F810D13923F9D913F51F5B02980163E6CD63BC04610AD2C12E07360D7BC2C69F1B0CD03E
There are no invalid characters in the URL itself as everything is encrypted. Still I am getting
Bad Request - Invalid URL
HTTP Error 400. The request URL is invalid.
I know the URL is awfully long and I was able to resolve that issue in my Cassini by adding this
<httpRuntime maxUrlLength="512" />
in the web.config
However, in IIS 7, even after playing around with the requestFiltering maxUrl and maxQueryString values, I have not been able to resolve this.
This is an asp.net mvc 3 application.
This one is for posterity and for tracking my own problem. It's been said in another answer, though not as explicitly.
I've had the same problem on my end. The answer is of course to transfer the long URL segment to a query string, which is easier to handle.
The problem, however, is that HTTP.sys is not even letting the request through, because a segment of the URL exceeds 260 or so characters. We still had to support such URLs.
You can change that setting in the registry. Once you reboot, the url will work.
Registry:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\HTTP\Parameters]
"UrlSegmentMaxLength"=dword:00000400
This will effectively set the segment length to 1024.
Source
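As a sketch of the query-string approach described in this answer (the route and parameter name here are illustrative, not taken from the original application), the idea is simply to stop putting the encrypted blob in a path segment, because each segment is checked against HTTP.sys's UrlSegmentMaxLength, while a query string is governed by the request-filtering maxQueryString limit instead:

from urllib.parse import urlencode

# The long encrypted value from the question, truncated here for brevity.
token = "A74444C3A7FA858C7995CA9954CBCF1E26604634767C5575..."

# Path-segment form: the token is one URL segment, so it is subject to
# HTTP.sys's per-segment length check before IIS ever sees the request.
path_url = f"http://abc.domain.com/controller/action/{token}"

# Query-string form: the token rides in the query string instead, which is
# limited by requestFiltering's maxQueryString setting and easier to raise.
query_url = "http://abc.domain.com/controller/action?" + urlencode({"data": token})
print(query_url)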
Your problem is you're not using a query string, but a path. A path has a maximum length of 255.
The final path segment is likely to be too long.
See: http://social.msdn.microsoft.com/Forums/nl/netfxnetcom/thread/723e6bfd-cab7-417b-b487-67f1dcfa524f

My site's error log is filled with errors related to ScriptResource.axd

My site's error log is filled with these errors:
This is an invalid script resource request.
Invalid viewstate.
Invalid character in a Base-64 string.
Invalid length for a Base-64 char array.
All these errors are appearing at least 100 times a day.
After doing some research on the internet, I have done the following things:
1. Defined a machine key in my web.config.
2. Created a robots.txt file and added ScriptResource.axd to it.
Can someone tell me what I am missing or doing wrong?
First Possible Reason
I have seen some crawlers that remove the verification key at the end of these URLs, or convert it to lower case, which results in this error.
Second Possible Reason
Someone is testing and probing your pages for weak points and ways into your back-end data.
In the log you can see how they call ScriptResource.axd and what is wrong with the key. Also check which IP makes these calls - is it always the same one?
Some references:
"Padding is Invalid and cannot be removed" exception on WebResource.axd
CryptographicException: Padding is invalid and cannot be removed and Validation of viewstate MAC failed
One more thing: I do not think it's necessary to add ScriptResource.axd to robots.txt and remove it from search (I mean that this is not actually the problem) - however, it's not a bad idea.
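To follow the suggestion above about checking which IP is making these calls, here is a small sketch that tallies client IPs for ScriptResource.axd requests in a W3C-format IIS log; the log path is an assumption about a typical default install, so adjust it for your server:

from collections import Counter

# Assumed default IIS log location and file name.
LOG_PATH = r"C:\inetpub\logs\LogFiles\W3SVC1\u_ex140101.log"

def count_scriptresource_callers(log_path):
    # Tally client IPs (c-ip) for requests whose cs-uri-stem hits ScriptResource.axd.
    counts = Counter()
    fields = []
    with open(log_path, encoding="ascii", errors="replace") as log:
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]   # field names, e.g. ... cs-uri-stem ... c-ip ...
                continue
            if line.startswith("#") or not fields:
                continue
            values = dict(zip(fields, line.split()))
            if "scriptresource.axd" in values.get("cs-uri-stem", "").lower():
                counts[values.get("c-ip", "unknown")] += 1
    return counts

for ip, hits in count_scriptresource_callers(LOG_PATH).most_common(10):
    print(ip, hits)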
