Apigee RouteRule evaluates correctly but then returns a 503

I have two Resources set up for my API proxy and have a route rule named talkback that should take POST requests to my /matches API resource and route them to my talkback subdomain rather than www.
I have this working correctly for GET requests, which redirect to my open subdomain. However, the talkback rule evaluates correctly but then returns a 503 without ever reaching my target endpoint:
error: The Service is temporarily unavailable
error.cause: Connection refused
error.class: com.apigee.messaging.adaptors.http.HttpAdaptorException
state: TARGET_REQ_FLOW
type: ErrorPoint
Are you able to advise on what may be the issue?
This is the route rule I'm using:
<RouteRule name="talkback">
    <Condition>(proxy.pathsuffix MatchesPath "/matches/**") and (request.verb equals "POST")</Condition>
    <TargetEndpoint>talkback</TargetEndpoint>
</RouteRule>
This is the talkback target endpoint:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<TargetEndpoint name="talkback">
    <Description/>
    <FaultRules/>
    <Flows/>
    <HTTPTargetConnection>
        <Properties/>
        <URL>http://talkback.test.xxxx.co.uk/gapi</URL>
    </HTTPTargetConnection>
    <PreFlow name="PreFlow">
        <Request/>
        <Response/>
    </PreFlow>
    <PostFlow name="PostFlow">
        <Request/>
        <Response/>
    </PostFlow>
</TargetEndpoint>

This pretty much looks like an issue where Apigee is not able to connect to your target backend, http://talkback.test.xxxx.co.uk. Apigee returns a 503 to the client when it's unable to connect to the backend. Is the backend publicly accessible?

Related

Block undefined routes/paths in apigee

I have a proxy defined in Apigee with an endpoint that has quite a few different controllers and paths. I only want to expose two of these via Apigee (a GET and a POST). I can't work out how to stop anyone from accessing the other endpoints through the Apigee proxy.
Anyone able to help?
Just to confirm I understand your requirement correctly: the target API exposes multiple paths, and of all those paths you want to expose two (a GET and a POST) via Apigee to your consumers.
This can be done using conditional flows. Create three conditional flows in your proxy endpoint: two for the paths you want to expose. You can use a combination of path and HTTP verb in the Condition tag.
Use the third conditional flow, without any conditions, as a catch-all block. You can use a RaiseFault policy in that flow to return an appropriate error to the consumer.
Your proxy endpoint should look something like this -
<Flows>
    <Flow name="get-resource">
        <Description>Get resource</Description>
        <Request/>
        <Response/>
        <Condition>(proxy.pathsuffix MatchesPath "/resource") and (request.verb = "GET")</Condition>
    </Flow>
    <Flow name="post-resource">
        <Description>Create resource</Description>
        <Request/>
        <Response/>
        <Condition>(proxy.pathsuffix MatchesPath "/resource") and (request.verb = "POST")</Condition>
    </Flow>
    <Flow name="Unknown Resource">
        <Description>Unknown resource</Description>
        <Request>
            <Step>
                <Name>RaiseFault-UnknownResource</Name>
            </Step>
        </Request>
        <Response/>
    </Flow>
</Flows>
And the raise fault policy would look something like this -
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<RaiseFault async="false" continueOnError="false" enabled="true" name="RaiseFault-UnknownResource">
    <DisplayName>RaiseFault-UnknownResource</DisplayName>
    <Properties/>
    <FaultResponse>
        <Set>
            <Headers/>
            <Payload contentType="text/plain">Resource not found</Payload>
            <StatusCode>404</StatusCode>
            <ReasonPhrase>Not Found</ReasonPhrase>
        </Set>
    </FaultResponse>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</RaiseFault>
If this is not the requirement, please clarify and I'll update the answer accordingly.

WSO2 EI - Proxy Services with Load Balancer

When I need to put a load balancer in front of a proxy service deployed on WSO2 EI 6.5.0, do I also need to implement clustering?
1. I found the sample below in the docs:
<loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
    <endpoint>
        <address uri="service_url (instance1)">
            <enableAddressing/>
            <suspendOnFailure>
                <initialDuration>20000</initialDuration>
                <progressionFactor>1.0</progressionFactor>
            </suspendOnFailure>
        </address>
    </endpoint>
    <endpoint>
        <address uri="service_url (instance2)">
            <enableAddressing/>
            <suspendOnFailure>
                <initialDuration>20000</initialDuration>
                <progressionFactor>1.0</progressionFactor>
            </suspendOnFailure>
        </address>
    </endpoint>
</loadbalance>
Is this the right way to create the load balancer?
2. I also tried the approach below:
https://medium.com/#snsavithrik1/wso2-ei-worker-manager-clustering-on-a-single-machine-dae1161bcb78
but I don't think that balances the service load across the nodes. Requests are only handled by the manager node; the worker node does nothing.
Note: I expect that when I send requests to the proxy service and it is busy, incoming requests are sent to the other node to be handled.
Something like this:
[LOG] Response from service1
[LOG] Response from service2
Please refer to the documentation at [1]. The blog you linked does not contain the load balancer configuration. To add to this, the EI servers do not have the concept of worker and manager nodes; that was a concept introduced in the ESB servers, where the worker nodes served requests and the manager node was used to manage the deployed artifacts.
[1]-https://docs.wso2.com/display/EI650/Clustering+the+ESB+Profile
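For illustration, a load-balanced endpoint of the kind shown in your question can be used directly as the target of a proxy service. Here is a minimal sketch; the hostnames and backend service name below are placeholders, not taken from your setup:
<proxy xmlns="http://ws.apache.org/ns/synapse" name="LBProxy" transports="http https" startOnLoad="true">
    <target>
        <endpoint>
            <!-- round-robin across two backend instances; suspend a node briefly if it fails -->
            <loadbalance algorithm="org.apache.synapse.endpoints.algorithms.RoundRobin">
                <endpoint>
                    <address uri="http://node1.example.com:8280/services/BackendService">
                        <suspendOnFailure>
                            <initialDuration>20000</initialDuration>
                            <progressionFactor>1.0</progressionFactor>
                        </suspendOnFailure>
                    </address>
                </endpoint>
                <endpoint>
                    <address uri="http://node2.example.com:8280/services/BackendService">
                        <suspendOnFailure>
                            <initialDuration>20000</initialDuration>
                            <progressionFactor>1.0</progressionFactor>
                        </suspendOnFailure>
                    </address>
                </endpoint>
            </loadbalance>
        </endpoint>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
With a configuration of this shape, successive requests to the proxy service should alternate between the two backend instances, which is the behaviour described in the question (responses from service1 and service2 in turn).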

How can I switch an existing Azure web-role from http over to https

I have a working Azure web role which I've been using over an http endpoint. I'm now trying to switch it over to https but struggling mightily with what I thought would be a simple operation. (I'll include a few tips here for future readers to address issues I've already come across).
I have created (for now) a self-signed certificate using the PowerShell commands documented by Microsoft here and uploaded it to the Azure portal. I'm aware that 3rd parties won't be able to consume the API while it has a self-signed certificate, but my plan is to use the following for local client testing before purchasing a 'proper' certificate.
ServicePointManager.ServerCertificateValidationCallback += (o, c, ch, er) => true;
Tip: you need to upload the .pfx file and then supply the password you used in the PowerShell script. Don't be confused by the suggestion to create a .cer file, which is for a completely different purpose.
I then followed the flow documented for configuring Azure cloud services here, although many of these operations are now done directly through Visual Studio rather than by hand-editing files.
In the main 'cloud service' project under the role I wanted to modify:
I imported the newly created certificate. Tip: the design of the dialog used to add the thumbprint makes it very easy to incorrectly select the developer certificate that is already installed on your machine (by Visual Studio?). Click 'more options' to get to _your_ certificate, and then check that the displayed thumbprint matches the one shown in the certificates section of the Azure portal.
Under 'endpoints' I added a new https endpoint. Tip: use the standard https port 443, NOT the 'default' port of 8080, otherwise you will get no response from your service at all.
In the web.config of the service itself, I changed the endpoint binding for the service so that the name element matched the new endpoint (see the sketch below).
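To show the general shape of that change, here is a minimal sketch of the kind of web.config section I mean, assuming a WCF REST service on webHttpBinding; the service, contract, and endpoint names are placeholders, not my actual config:
<system.serviceModel>
    <services>
        <service name="MyWebRole.MyService">
            <!-- the endpoint name attribute is what I changed to match the new HTTPS endpoint -->
            <endpoint name="HttpsEndpoint" address="" binding="webHttpBinding" contract="MyWebRole.IMyService" />
        </service>
    </services>
</system.serviceModel>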
I then published the cloud project to Azure (using Visual Studio).
At this point, I'm not seeing the results I expected. The service is still available on http but is not available on https. When I try to browse for it on https (includeExceptionDetailInFaults is set to true) I get:
HTTP error 404 "The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable"
I interpret this as meaning that the https endpoint is available but the service itself is bound to http rather than https despite my changes to web.config.
I have verified that the publish step really is uploading the new configuration by modifying some of the returned content. (Remember this is still available on http.)
I have tried removing the 'obsolete' http endpoint but this just results in a different error:
"Could not find a base address that matches scheme http for the endpoint with binding WebHttpBinding. Registered base address schemes are [https]"
I'm sure I must be missing something simple here. Can anyone suggest what it is, or tips for further troubleshooting? There are a number of Stack Overflow answers that relate to websites and suggest that IIS settings need to be tweaked, but I don't see how this applies to a web role where I don't have direct control of the server.
Edit: Following Gaurav's suggestion I repeated the process using a (self-signed) certificate for our own domain rather than cloudapp.net, then tried to access the service via this domain. I still see the same results; i.e. the service is available via http but not https.
Edit 2: Information from the csdef file... is the double reference to "Endpoint1" suspicious?
<Sites>
    <Site name="Web">
        <Bindings>
            <Binding name="Endpoint1" endpointName="HttpsEndpoint" />
            <Binding name="Endpoint1" endpointName="HttpEndpoint" />
        </Bindings>
    </Site>
</Sites>
<Endpoints>
    <InputEndpoint name="HttpsEndpoint" protocol="https" port="443" certificate="backend" />
    <InputEndpoint name="HttpEndpoint" protocol="http" port="80" />
</Endpoints>
<Certificates>
    <Certificate name="backend" storeLocation="LocalMachine" storeName="My" />
</Certificates>

Differences in validating APIKeys with GetOAuthV1Info and VerifyAPIKey

We are in the process of totally rewriting our main API Proxy config and we discovered an issue with our new configuration (or maybe our existing one) relating to how API keys are being validated. Our current API uses the policy GetOAuthV1Info
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<GetOAuthV1Info enabled="true" continueOnError="false" async="false" name="APIKey-Validate">
    <DisplayName>APIKey-Validate</DisplayName>
    <FaultRules/>
    <Properties/>
    <AppKey ref="request.queryparam.apikey"/>
</GetOAuthV1Info>
Our new configuration uses the policy VerifyAPIKey
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<VerifyAPIKey async="false" continueOnError="false" enabled="true" name="Verify-Api-Key">
    <DisplayName>Verify API Key</DisplayName>
    <APIKey ref="request.queryparam.apikey"/>
</VerifyAPIKey>
On the surface both of these policies appear to work fine. However, after deploying the new config to our test environment, some API keys were failing with a 401 Unauthorized error. Digging into those keys, we discovered that they are assigned to a product that doesn't have access to the test environment. It appears that the GetOAuthV1Info step was not validating the environment? The documentation for GetOAuthV1Info doesn't help, as it doesn't talk about API keys at all (http://apigee.com/docs/api-services/content/authorize-requests-using-oauth-10a).
Fixing this particular issue is pretty straightforward: we just need to allow those other products access to the test environment. However, this makes me wonder what other differences there are between these two policies. I'm very nervous now about deploying any changes to these API proxies because I don't know what else will break, or what other unforeseen issues will appear.
Is this a known limitation with the GetOAuthV1Info policy? Why does this even work at all? What are the other differences between these two policies that might bite me later?
The only difference that I'm aware of is that the variable names are populated differently by the VerifyAPIKey policy (it prefixes the variables with the policy type and name, e.g. verifyapikey.verify_apikey-1.apiproduct.developer.quota.limit).
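As a hedged illustration of that naming scheme, a follow-on policy could read one of those variables like this, assuming the VerifyAPIKey policy is named Verify-Api-Key as in the question (the header name is made up):
<AssignMessage async="false" continueOnError="false" enabled="true" name="AM-AddProductHeader">
    <DisplayName>AM-AddProductHeader</DisplayName>
    <Set>
        <Headers>
            <!-- flow variable populated by the VerifyAPIKey policy named Verify-Api-Key -->
            <Header name="X-Api-Product">{verifyapikey.Verify-Api-Key.apiproduct.name}</Header>
        </Headers>
    </Set>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="false" transport="http" type="request"/>
</AssignMessage>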
Both VerifyAPIKey and OAuth 1 do support restrictions by environment -- when I tested GetOAuthV1Info with an API key in an invalid environment, I got this error:
OAuth Failure : Invalid API call as no apiproduct match found
Keep in mind that the convention for most projects seems to be either OAuth 2 flows or VerifyAPIKey, so there is less information about the OAuth 1 policies.

Calling a GET on the Apigee HTTPTargetConnection when the request was POST

I need to call a legacy API which uses GET.
My API proxy uses POST.
I tried using this in AssignMessage:
<AssignTo type="request" createNew="false"/>
and
<Set> ... <Verb>GET</Verb>
But it still does a POST on the target API.
What is the proper way of converting?
Will the gateway automatically convert the POST form parameters into GET query parameters?
Is message.queryparam the same for both GET and POST?
When converting the verb from POST to GET, the policy will NOT automatically convert the form parameters to query parameters. You will need to use the <Add> and/or <Remove> functionality of the AssignMessage policy to manipulate the message further. Here is an example of using the AssignMessage policy to add the query params, referencing the form params:
<Add>
    <QueryParams>
        <QueryParam name="q1">{request.formparam.q1}</QueryParam>
    </QueryParams>
</Add>
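And if the backend should not receive the original form payload, here is a minimal sketch of the <Remove> side in the same policy (the parameter name q1 simply follows the example above):
<Remove>
    <FormParams>
        <!-- drop the form parameter once it has been copied to the query string -->
        <FormParam name="q1"/>
    </FormParams>
</Remove>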
Also, in your question you mentioned that the API proxy accepts the request using POST, and then you have a policy to set GET:
<Set> ... <Verb>GET</Verb>
But it still does a GET on the target API.
What's the problem? Isn't that what you are expecting? The request goes into the Apigee API Proxy as POST, the proxy converts the method (verb) to GET, and sends the request to the backend legacy API using GET.
Note: <AssignTo> is optional in the AssignMessage. Try leaving this out if the method is not being set properly. In its absence, the message at the current point in the flow will be modified.
Change this predefined variable (which will be "POST" on the incoming request):
request.verb = "GET"
Note: if you do this and you have a flow condition based on request.verb = "POST", that condition will not work in the response flow, so you need to save the original verb in another variable and use that variable in the flow condition (see the sketch below).
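For example, here is a hedged sketch of an AssignMessage policy, placed before the verb is changed, that copies the inbound verb into a custom variable (the policy and variable names are made up):
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AssignMessage async="false" continueOnError="false" enabled="true" name="AM-SaveOriginalVerb">
    <DisplayName>AM-SaveOriginalVerb</DisplayName>
    <AssignVariable>
        <!-- keep a copy of the inbound verb before it is overwritten with GET -->
        <Name>original.request.verb</Name>
        <Ref>request.verb</Ref>
    </AssignVariable>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="false" transport="http" type="request"/>
</AssignMessage>
A later flow condition can then check original.request.verb = "POST" instead of request.verb.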
Here is the policy code that worked for me.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<AssignMessage async="false" continueOnError="false" enabled="true" name="changeverbassignmessage">
    <DisplayName>ChangeVerbAssignMessage</DisplayName>
    <FaultRules/>
    <Properties/>
    <AssignVariable>
        <Name>request.verb</Name>
        <Value>GET</Value>
        <Ref/>
    </AssignVariable>
    <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
    <AssignTo createNew="false" transport="http" type="request"/>
</AssignMessage>
