Intermittent Apigee error of Javascript runtime exceeding limit of 200ms

The use case that hits this error is as follows:
There is an Apigee API Proxy that has been configured for a service. A second API Proxy has a JavaScript policy that calls out to the first proxy to get back a response and process it. Running this second API Proxy intermittently returns the following error:
"fault": {
"detail": {
"errorcode": "steps.javascript.ScriptExecutionFailed"
},
"faultstring": "Execution of getlocationserviceresponse failed with error: Javascript runtime exceeded limit of 200ms"
}
There are other JavaScript policies attached to this second proxy, so the total JavaScript is chopped up into small modules, but the runtime-exceeded error still occurs from time to time. What can be done to avoid this?

You should check the configuration of the Apigee JavaScript policy. Here is an example of a policy definition:
<Javascript async="false" continueOnError="false" enabled="true"
        timeLimit="200" name="validate-email">
    <DisplayName>validate-email</DisplayName>
    <FaultRules/>
    <Properties/>
    <ResourceURL>jsc://validate-email.js</ResourceURL>
</Javascript>
The timeLimit attribute can be updated to raise the execution limit. Its value is in milliseconds.
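For example, assuming the failing script is the one named in the fault string above, a policy along these lines raises the ceiling (the 5000 ms value and the .js file name are illustrative; choose a limit that comfortably covers the call-out to the first proxy):
<Javascript async="false" continueOnError="false" enabled="true"
        timeLimit="5000" name="getlocationserviceresponse">
    <DisplayName>getlocationserviceresponse</DisplayName>
    <ResourceURL>jsc://getlocationserviceresponse.js</ResourceURL>
</Javascript>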

Related

MULE-4:VM: Check number of messages in VM queue before consuming

We need to consume messages from a VM queue in a flow. Currently it throws an error when the VM queue is empty, as below:
Message      : Tried to consume messages from VM queue 'FQ' but it was empty after timeout of 5 SECONDS
Payload Type : org.mule.runtime.core.internal.streaming.bytes.ManagedCursorStreamProvider
For now we have wrapped it in a Try scope and are handling this error (but it still prints the error stack trace, which we want to avoid).
Is there a way, or a piece of code, to check the number of messages available in the VM queue before consuming from it?
You can use the logException attribute on the error handler so the exception is not printed in the log.
Example:
<try doc:name="Try">
    <vm:consume doc:name="Consume" config-ref="VM_Config" queueName="q1" />
    <error-handler>
        <on-error-continue enableNotifications="true" logException="false" doc:name="On Error Continue" type="VM:EMPTY_QUEUE">
            <logger level="INFO" doc:name="Logger" message="consume timeout"/>
        </on-error-continue>
    </error-handler>
</try>

Azure Handle custom HTTP 401

I have a web service with basic auth (via custom database check in code) that returns HTTP 401 + SOAP FAULT message when user credentials are wrong. This works in my local IIS 7 but not in Azure...
When I move it to Azure, via App Service, and I insert bad credentials, I always get the following message:
HTTP 401
"You do not have permission to view this directory or page.".
But I want my custom error message in SOAP Fault format!
After hours of research I have found that if my code returns a response with an HTTP 401 status, Azure translates it into the message above for the end user, ignoring my custom error message.
I made another example with a "hello world" SOAP endpoint:
[WebMethod]
public string HelloWorld()
{
HttpContext.Current.Response.StatusCode = 401;
return "Hello world";
}
In my local IIS I get HTTP 401 "Hello world", but when I deploy this to Azure I get the same "You do not have permission..."
How can I disable/avoid/dodge Azure's 401 message transformation and return my own custom error message (SOAP Fault)?
Note: I've already tried to disable authentication in Azure app service, enabling with "allow anonymous requests", etc.
Solution
The problem was in IIS. Adding the following line to my web.config works.
<httpErrors existingResponse="PassThrough" />
More info: https://stackoverflow.com/a/47307706/3763467
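For reference, the httpErrors element belongs inside the system.webServer section; a minimal sketch of the relevant part of web.config:
<configuration>
    <system.webServer>
        <!-- Pass the application's own error responses (e.g. the SOAP Fault body) through unchanged -->
        <httpErrors existingResponse="PassThrough" />
    </system.webServer>
</configuration>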
I will keep the same title/tags for other people who face the same problem and think the problem is in Azure.
It sounds like your web.config file might be set up with custom errors turned on.
Can you please check your web.config file and turn custom errors off?
<customErrors mode="Off" />

Mule ESB synchronous Until Successful payload modification

I've spent hours trying to solve a problem that seems to be caused by the synchronous Until Successful scope in Mule ESB v3.5.0. It seems to modify the message payload when sending outbound HTTP requests.
I need to continue in my flow after the outbound HTTP request successfully returns from an HTTP server (which sometimes has connection problems), so I need the synchronous variant of Until Successful. For now I just use a simple Logger after the Until Successful block.
The body of my HTTP request is an XML file. When there is no problem at my server and the Until Successful doesn't need to make another HTTP request, I receive the XML that I sent.
However, when there is a connectivity problem, so the Until Successful repeats the request a few times and the server then comes back online, my server receives an instance of org.apache.commons.httpclient.methods.PostMethod in the request body instead of the XML that was sent!
So no more XML on my server. It seems this sync Until Successful simply discards the original message payload...
The standard async variant of Until Successful works as intended - getting XML in requests all the time.
Here is a minimal sample of HTTP outbound endpoint with Until Successful:
<flow name="perform" doc:name="performHTTP">
<until-successful maxRetries="${repeater.retries}" millisBetweenRetries="${repeater.period}" failureExpression="#[exception != null && (exception.causedBy(java.net.ConnectException) || exception.causedBy(java.net.SocketTimeoutException)) || message.inboundProperties['http.status'] != 200]" doc:name="Until Successful - Repeater" synchronous="true">
<http:outbound-endpoint exchange-pattern="request-response" host="${https.outbound.address}" port="${https.outbound.port}" path="${https.outbound.path}" method="POST" mimeType="text/xml" transformer-refs="Custom_Outbound_HTTPS_Header" contentType="text/xml" doc:name="HTTPS - Outbound" doc:description="Outcoming HTTPS connection" responseTimeout="15000"/>
</until-successful>
<logger message="#['Sending done']" level="INFO" doc:name="Logger - Done"/>
</flow>
Long story short:
synchronous Until Successful: XML -> HTTP request - { NET } - HTTP request -> org.apache.commons.httpclient.methods.PostMethod
asynchronous Until Successful: XML -> HTTP request - { NET } - HTTP request -> XML
I had the same problem and fixed it by saving my payload and restoring it on each retry, something like this:
<set-variable value="#[payload]" variableName="payloadBeforeCall" doc:name="Variable" />
<until-successful maxRetries="${repeater.retries}" millisBetweenRetries="${repeater.period}"
        failureExpression="#[exception != null && (exception.causedBy(java.net.ConnectException) || exception.causedBy(java.net.SocketTimeoutException)) || message.inboundProperties['http.status'] != 200]"
        doc:name="Until Successful - Repeater" synchronous="true">
    <processor-chain>
        <set-payload value="#[flowVars.?payloadBeforeCall]" doc:name="Variable" />
        <http:outbound-endpoint exchange-pattern="request-response" host="${https.outbound.address}" port="${https.outbound.port}"
                path="${https.outbound.path}" method="POST" mimeType="text/xml" contentType="text/xml"
                transformer-refs="Custom_Outbound_HTTPS_Header" doc:name="HTTPS - Outbound"
                doc:description="Outcoming HTTPS connection" responseTimeout="15000"/>
    </processor-chain>
</until-successful>
Sounds like a bug; it would be worth reporting it as an issue. Anyhow, there is a simple workaround: just wrap the until-successful in a wire-tap. This will create a copy of the message (not necessarily the payload), and given that the payload is immutable (a String), the outbound-endpoint will just change the reference without affecting the flow after the wire-tap.
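A rough, untested sketch of that suggestion, reusing the flow from the question (the failureExpression and transformer attributes are trimmed for brevity; if the 3.5 schema rejects an until-successful nested directly inside wire-tap, moving it into a separate sub-flow and referencing it with a flow-ref inside the wire-tap achieves the same wrapping):
<flow name="perform" doc:name="performHTTP">
    <!-- The wire-tap hands a copy of the message to the until-successful, so the
         reference used by the rest of the flow is untouched by the outbound endpoint -->
    <wire-tap>
        <until-successful maxRetries="${repeater.retries}" millisBetweenRetries="${repeater.period}"
                doc:name="Until Successful - Repeater" synchronous="true">
            <http:outbound-endpoint exchange-pattern="request-response" host="${https.outbound.address}"
                    port="${https.outbound.port}" path="${https.outbound.path}" method="POST"
                    mimeType="text/xml" contentType="text/xml" responseTimeout="15000"/>
        </until-successful>
    </wire-tap>
    <logger message="#['Sending done']" level="INFO" doc:name="Logger - Done"/>
</flow>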

How to stop consumers from hitting invalid resources in an Apigee API

I have an Apigee proxy that has two resources (/resource1 and /resource2). If a consumer tries to access /resource3, how do I return a 404 error instead of the Apigee default fault?
Apigee displays the below fault string:
{
    "fault": {
        "faultstring": "The Service is temporarily unavailable",
        "detail": {
            "errorcode": "messaging.adaptors.http.flow.ServiceUnavailable"
        }
    }
}
Thanks
Currently, flows in Apigee work this way: Apigee parses your proxy's default.xml and tries to match your request to one of the flows, either through the path suffix (like "/resource1", "/resource2"), the verb, or any other condition you might have. If it does not find any matching condition, it throws an error like the one above.
You can add a special flow that kicks in when the request matches none of your valid flows. Attach a RaiseFault policy to that flow and return a custom error response through it, as sketched below.
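A minimal sketch of that approach (the flow name, policy name, and JSON payload are illustrative): a conditionless flow placed last in the proxy endpoint so it only matches requests that no earlier flow claimed, executing a RaiseFault policy that returns a 404.
<!-- Last flow in the proxy endpoint's <Flows>: with no Condition it matches whatever is left over -->
<Flow name="UnknownResource">
    <Request>
        <Step>
            <Name>RF-NotFound</Name>
        </Step>
    </Request>
</Flow>

<!-- RF-NotFound policy -->
<RaiseFault async="false" continueOnError="false" enabled="true" name="RF-NotFound">
    <FaultResponse>
        <Set>
            <Payload contentType="application/json">{"error": "Resource not found"}</Payload>
            <StatusCode>404</StatusCode>
            <ReasonPhrase>Not Found</ReasonPhrase>
        </Set>
    </FaultResponse>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</RaiseFault>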
A better solution is to:
be sure to define something meaningful in the base path of all Proxy APIs
create an additional Proxy API called "catchall" with a base path of "/" and just a RaiseFault policy throwing a 404
Apigee matches Proxy APIs from the longest base path to the shortest; the catchall will run last and always throw back a 404
I just want to clarify Vinit's answer. Vinit said:
If it does not find any matching condition, it throws the error like above.
Actually, if no matching flow condition is found, the request will still be sent through to the backend. The error you mentioned:
{
    "fault": {
        "faultstring": "The Service is temporarily unavailable",
        "detail": {
            "errorcode": "messaging.adaptors.http.flow.ServiceUnavailable"
        }
    }
}
was returned after attempting to connect to the backend without matching a flow.
Vinit's solution to raise a fault to create the 404 is the best solution for your requirements.
In some cases, however, it is appropriate to pass all traffic through to the backend (for example, if you don't need to modify each resource at the Apigee layer, and you don't want to have to update your Apigee proxy every time you add a new API resource). Not matching any flow condition would work fine for that use case.

Google Maps API OVER_QUERY_LIMIT with no API key

I'm working on an app that makes use of the Google Maps API. I'm not using an API key, as I'm performing requests like http://maps.googleapis.com/maps/api/directions/json?origin=Brooklyn&destination=Queens&sensor=false&departure_time=1343641500&mode=transit (shown in one of the Google examples). Unfortunately, lately it keeps giving the following error:
{
    "error_message": "You have exceeded your daily request quota for this API.",
    "routes": [],
    "status": "OVER_QUERY_LIMIT"
}
Even when I add a key I created as a parameter to the GET request (i.e., &key=blahblah), the result is always the same. Does this mean my IP is blocked by Google?
What can I do to get it back to work?
Thanks
When you use a key, it means that the limit for the account the key was created for has been reached.
When you don't use a key, it means that the limit for your server's IP has been reached.
What you can do:
wait until tomorrow, or request the service from the client side

Resources