How to configure a JSON Settings object in DataPower - ibm-datapower

I configured a JSON Settings object for the domain in order to increase the maximum allowed payload size for responses. I assume I need to configure my MPGW service to use this JSON Settings object, because it presumably doesn't apply automatically to every service in the domain just because I created the object. But where do I configure my service or requests to use this JSON Settings object that exists in the domain?

The answer: you add the JSON Settings object to the service's XML Manager configuration, so a service picks up the JSON Settings through the XML Manager it references.

Related

Can we use a variable which holds an endpoint URL directly in the UiPath HTTP Request activity?

How to use a variable directly in the HTTP Request Endpoint?
If you close the activity's configuration wizard, the activity will still be there and you can fine-tune its configuration freely in the Properties panel. In your case, that means configuring the Endpoint property with a variable.
Example
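As a hedged illustration (the variable and URL are made up, not taken from the original question), the Endpoint property of the HTTP Request activity can simply hold a VB.NET expression that references a workflow variable:
Endpoint: baseUrl + "/api/orders"
where baseUrl is a String variable (e.g. "https://api.example.com") defined in the workflow or passed in as an argument.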

Apigee rest endpoint path mapping to custom path

I have a REST endpoint, /admn_resource_manager, and I have created an Apigee proxy to expose it.
I don't want to expose it like this to others; I want something like /adminmanager instead.
Is there any way to map /adminmanager to /admn_resource_manager using Apigee?
The end user would use http://someurl.apigee/adminmanager instead of http://someurl.apigee/admn_resource_manager.
I explored the KeyValueMapOperations and AssignMessage policies in Apigee.
I am not sure whether these are the right options for implementing the path mapping, and I couldn't find any examples for this either.
The way you would think to do this is to use the Assign Message policy with its Set -> Path element. But that policy isn't currently working as designed for rewriting the proxy's target URL; see the Assign Message guidance for more details.
To rewrite the incoming URL to a different target URL you can use the Assign Message Policy to set the entire URL (target.url) in the Target Endpoint flow, or you can use a JavaScript callout to set it. I chose to use a JavaScript callout because it gives a lot more control when rewriting the URL.
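For the first option, a minimal Assign Message sketch is shown below (the policy name and backend URL are illustrative placeholders, not taken from the project mentioned next); attached to the Target Endpoint's request PreFlow, it overwrites target.url before the request is sent to the backend:
<AssignMessage name="AM-RewriteTargetUrl">
  <AssignVariable>
    <Name>target.url</Name>
    <!-- the real backend path that the client-facing /adminmanager should map to -->
    <Value>http://backend.example.com/admn_resource_manager</Value>
  </AssignVariable>
  <IgnoreUnresolvedVariables>true</IgnoreUnresolvedVariables>
</AssignMessage>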
Here is an example project on GitHub that I put together for this; you can use it to see how I did it. It uses the swapi.co API as the target endpoint, and the proxy uses the Assign Message and JavaScript callout policies to rewrite the URL. Here are some details about it:
Proxy Endpoints
Create a proxy endpoint for each resource you are renaming.
This is where you set up the Assign Message policy to set the variables for the new path suffix.
Assign Message Policies
Set on the PreFlow of each proxy endpoint to set the targetPathSuffix and appendResourceIdToUrl (if needed) variables.
JavaScript Policy
Calls out to the URLRewrite.js file to execute the JS code.
Set on the TargetEndpoint's PreFlow and executes on each request.
Uses the variables set in the Assign Message Policies to change the target.url variable.
I think Apigee can do it.
When I started with Apigee, I learned and tried to understand it from a picture (I think it describes the main concept of this platform).
From your scenario:
You can specify the URL that you want the client to call, for example someurl.apigee/adminmanager or something else.
Apigee sits in the middle, also known as a gateway. When you receive the request from the client, you can handle it however you want, including passing the client's request on to another URL such as someurl.apigee/admn_resource_manager (you just assign a new URL to that request).
Because I'm not an expert either, the link below can give you more information.
Link: Using Flow Variables

Is it possible to deploy a file to Artifactory directly as a filtered resource?

We deploy tool settings files as filtered resources so we can publish a static link for developers to download them with credentials filled in (we template more than just the credentials, but that's the key element). I don't see anything in the REST API that indicates how to set the Filtered setting for the file, either as part of the deploy or as a separate API call to enable the setting for an already published file.
Artifactory uses the artifactory.filtered property to indicate whether an artifact should be treated as a filtered resource.
You can use the set item properties REST API method for setting this property, for example:
curl -uuser:password -XPUT "http://artifactory.mycompany/api/storage/repo-key/path/to/my/file?properties=artifactory.filtered=true"
This means you first have to deploy the file and then perform the above request in order to set the property value.
You can also do it in one request using matrix parameters; the deployment URL should have the following format:
http://artifactory.mycompany/repo-key/path/to/my/file;artifactory.filtered=true
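For example (a sketch; the host, repository, path and credentials are the same placeholders used above), deploying the file and setting the property in a single request could look like:
curl -uuser:password -XPUT -T ./my/file "http://artifactory.mycompany/repo-key/path/to/my/file;artifactory.filtered=true"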

JMeter: What to add in HTTP Cookie Manager?

There are the following column names under "User-Defined Cookies":
1. Name
2. Value
3. Domain
4. Path
5. Secure
What should I enter in all of the above-mentioned fields, and how is it useful?
HTTP Cookie Manager is smart enough to take care of cookies automatically. Once added and enabled, it fetches cookies from the Set-Cookie response header and adds them to the next requests, enabling client-side state management, cookie-based authentication, etc. Moreover, it provides access to cookies via JMeter Variables, assuming the CookieManager.save.cookies=true property is set in the user.properties file (which lives under the /bin folder of your JMeter installation).
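For example (assuming the application under test returns a cookie named JSESSIONID; property changes require a JMeter restart to take effect), add this line to bin/user.properties:
CookieManager.save.cookies=true
The cookie then becomes available to samplers and assertions as the variable ${COOKIE_JSESSIONID}.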
Regarding fields like Name, Value, Domain, etc.: this is the way you can define your own custom cookies or override existing ones, e.g. if you need to hard-wire a request to a particular node behind a load balancer, simulate the activity of a certain user, and so on.
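As a hypothetical example of those user-defined fields (the cookie name and values are made up), a sticky-session cookie pinning all requests to one backend node could be entered as:
Name: ROUTEID
Value: node1
Domain: example.com
Path: /
Secure: unchecked (tick it only for cookies that must be sent over HTTPS)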
See Using the HTTP Cookie Manager guide for more details on this useful test element.

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP Basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers, which it does support.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak to your Nginx configuration files. As you rightly point out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
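For example (the credential values are the placeholders from the question's curl command), the value you would import into 3scale as the user_key for such an application would simply be:
echo -n "api_key:api_secret" | base64
# -> YXBpX2tleTphcGlfc2VjcmV0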
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170 (a sketch of the kind of decoding it performs is shown below)
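For reference only (this is a minimal sketch of the idea, not the exact contents of that gist; the user_key parameter name matches the API-key mode described above), the Basic-auth handling in the Lua script boils down to something like:
-- read the Authorization header sent by the client
local auth_header = ngx.var.http_authorization
local parameters = {}
if auth_header then
  -- the header looks like "Basic <base64(api_key:api_secret)>"; that
  -- base64 portion is exactly the value imported into 3scale as the
  -- user_key, so it can be passed along as the credential directly
  local encoded = auth_header:match("Basic%s+(.+)")
  if encoded then
    parameters["user_key"] = encoded
  end
end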
I hope this approach suits your needs. You can also contact support at support#3scale.net if you need more help.
