Logstash, multiple http inputs with different paths?

It is natural to have different API endpoints such as /questions and /users over HTTP.
Can I define an http input with a different path? (I can only find examples using http://ip:port as an input.)
Is it possible to define http://ip:port/foo as an input for Logstash?

I think you may be looking at the wrong thing in Logstash. The HTTP input plugin is for an application to send data to Logstash over HTTP.
If you have an application that sends data to Logstash over HTTP and you want Logstash to process it differently depending on the API endpoint it came from, I would suggest adding a field called endpoint to your message data. Your application would populate it based on the endpoint that was used. You can then use conditionals within Logstash to change the logic it applies, as in the sketch below.
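A minimal pipeline sketch, assuming the client includes an endpoint field in the JSON it posts (the port, field values and tags here are just placeholders):

input {
  http {
    port => 8080
  }
}

filter {
  # "endpoint" is set by the sending application in the event body
  if [endpoint] == "/questions" {
    mutate { add_tag => ["questions"] }
  } else if [endpoint] == "/users" {
    mutate { add_tag => ["users"] }
  }
}

output {
  stdout { codec => rubydebug }
}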

Related

send data to http endpoint using data fusion realtime pipeline

I'm creating a real-time Data Fusion pipeline where the sink is an HTTP plugin call to a Vertex AI endpoint in another GCP project. The request body is provided by a previous step in the pipeline. The HTTP sink plugin being used (HTTP v1.2.2) doesn't seem to support any OAuth parameters. What is the best way to make that HTTP call with a dynamically generated token in the headers? Any help is appreciated. Thank you.
As of now, there is no built-in way to achieve this. I also faced the same issue, where my OAuth token expires after X days.
I had to build a dynamic pipeline that doesn't fail, so I used a custom Argument Setter and referenced the token (as a macro) that the Argument Setter initializes in the HTTP plugin.
You can find the open-source code at https://github.com/data-integrations/argument-setter
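For illustration only: assuming the Argument Setter stage writes the fetched token into a runtime argument named oauth_token (the name is made up), the HTTP sink's request headers field can then reference it with the standard ${...} macro syntax, roughly like:

Authorization: Bearer ${oauth_token}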

Can we use a variable which has endpoint URL directly in HTTP Request UIPath Activity?

How do I use a variable directly in the HTTP Request Endpoint?
If you close the activity's configuration wizard, the activity will still be there and you can fine-tune the configuration freely in the Properties panel; in your case, that means configuring the Endpoint with a variable.
Example
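For illustration, assuming a String variable named endpointUrl is defined in the workflow (the variable name and URL are made up), the relevant properties would look roughly like this:

endpointUrl = "https://api.example.com/v1/users"    ' assigned earlier, e.g. in an Assign activity
Endpoint    = endpointUrl                            ' an expression referencing the variable, not a quoted literal
Method      = GET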

Apigee rest endpoint path mapping to custom path

I have a REST endpoint /admn_resource_manager. I have created an Apigee proxy to expose it.
I don't want to expose it like this to others; I want something like /adminmanager instead.
Is there any way to map /adminmanager to /admn_resource_manager using Apigee?
The end user would use http://someurl.apigee/adminmanager instead of http://someurl.apigee/admn_resource_manager.
I explored the KeyValueMapOperations and AssignMessage policies in Apigee.
I am not sure if these are the right options for implementing the path mapping, and I didn't find any example for this either.
The way you would think to do this would be to use the Assign Message policy with its Set -> Path element. But that policy isn't currently working as designed for rewriting the proxy's target URL; see the Assign Message guidance for more details.
To rewrite the incoming URL to a different target URL, you can use the Assign Message policy to set the entire URL (target.url) in the Target Endpoint flow, or you can use a JavaScript callout to set it. I chose a JavaScript callout because it gives a lot more control when rewriting the URL.
Here is an example project I put together on GitHub that you can use to see how I did it. It uses the swapi.co API as the target endpoint, and the proxy uses the Assign Message and JavaScript callout policies to rewrite the URL. Here are some details about it:
Proxy Endpoints
Create a proxy endpoint for each resource you are renaming.
This is where you set up the Assign Message policy to set the variables for the new path suffix.
Assign Message Policies
Set on the PreFlow of each proxy endpoint to set the targetPathSuffix and appendResourceIdToUrl (if needed) variables.
JavaScript Policy
Calls out to the URLRewrite.js file to execute the JavaScript code (see the sketch below).
Set on the TargetEndpoint's PreFlow and executes on each request.
Uses the variables set in the Assign Message policies to change the target.url variable.
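A rough sketch of what the URLRewrite.js logic could look like (this is not the code from the example project; the targetPathSuffix and appendResourceIdToUrl variable names follow the description above, and the trailing-slash handling is an assumption):

// Read the current target URL and the variables set by the Assign Message policies
var targetUrl = context.getVariable("target.url");
var pathSuffix = context.getVariable("targetPathSuffix");
var appendId = context.getVariable("appendResourceIdToUrl");

// Strip a trailing slash so the suffix can be appended cleanly
if (targetUrl.charAt(targetUrl.length - 1) === "/") {
    targetUrl = targetUrl.substring(0, targetUrl.length - 1);
}

var newUrl = targetUrl + "/" + pathSuffix;

// Optionally carry over a resource id from the incoming request path,
// e.g. /adminmanager/123 -> .../admn_resource_manager/123
if (appendId === "true") {
    var parts = context.getVariable("proxy.pathsuffix").split("/");
    newUrl = newUrl + "/" + parts[parts.length - 1];
}

context.setVariable("target.url", newUrl);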
I think Apigee can do this.
When I started with Apigee, I learned and tried to understand it from the request/response flow diagram, which I think describes the main concept of the platform.
From your scenario:
You can specify the URL that you want clients to call, for example someurl.apigee/adminmanager or something else.
Apigee sits in the middle as a gateway. When you receive the request from the client, you can handle it however you want, including passing it on to another URL such as someurl.apigee/admn_resource_manager (you just assign a new URL to that request).
Since I'm not an expert either, the link below can give you more information.
Link: Using Flow Variables

Filter response and store something in memcached using nginx+Lua

I have a backend which generates three JWT tokens: a reference token, an access token and a refresh token. The reference token stores a reference to the access token, which is used to access the API, and the refresh token is used to reissue the access token when it times out. The problem is that I do not want to pass the access token to the client, but instead want nginx to store it in memcached. So my whole task is to filter the response from the backend, which currently looks as simple as:
{"reference_token":"...","access_token":"...","refresh_token":"..."}
Nginx should filter this response, take the access token from it and store it in memcached. Finally, it should return a new response to the client:
{"reference_token":"...","refresh_token":"..."}
As you can see, there should be no access_token any more. The access token is something I am trying to secure: I don't want to show it to the client or even pass it to the client at all. What I do not know is the best approach to implement this and which Lua block to use for the task. I know about body_filter_by_lua, but the documentation says briefly that:
Note that the following API functions are currently disabled within this context due to the limitations in NGINX output filter's current implementation
So it seems body filtering is rather limited, and I'm not even sure it is possible to call the memcached API inside this block. How can I implement this in practice? At least, which Lua (OpenResty) techniques should I use to approach the task?
You may issue a subrequest (e.g., ngx.location.capture) to your backend within your content handler, for example.
You can then filter the body as you want and use lua-resty-memcached, which uses the cosocket API.
The drawback of this approach is that you end up with a fully buffered proxy.
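A minimal sketch of that approach, assuming the backend is exposed behind an internal location called /backend and that lua-resty-memcached is installed (the location name, memcached address, key layout and expiry are all assumptions):

-- content_by_lua_block for the public token-issuing location
local cjson = require "cjson"
local memcached = require "resty.memcached"

-- Forward the original request to the backend as a subrequest
ngx.req.read_body()
local res = ngx.location.capture("/backend", {
    method = ngx.HTTP_POST,
    body = ngx.req.get_body_data(),
})
if res.status ~= ngx.HTTP_OK then
    return ngx.exit(res.status)
end

local tokens = cjson.decode(res.body)

-- Store the access token server-side, keyed by the reference token
local memc, err = memcached:new()
if not memc then
    ngx.log(ngx.ERR, "failed to instantiate memcached: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
memc:set_timeout(1000)  -- 1 second
local ok, err = memc:connect("127.0.0.1", 11211)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to memcached: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
memc:set(tokens.reference_token, tokens.access_token, 3600)
memc:set_keepalive(10000, 100)

-- Return the backend response without the access token
tokens.access_token = nil
ngx.header["Content-Type"] = "application/json"
ngx.say(cjson.encode(tokens))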

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking at using 3scale to handle authentication, rate limiting and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak to your Nginx configuration files. As you rightly point out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import into 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). What you would then import into 3scale for each application is the base64 string of api_key+api_secret combined.
This way the changes you need to make to the configuration files are fewer and simpler.
The steps you will need to follow are:
In your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns).
Go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
Under "Authentication Settings", set the location of the credentials to HTTP headers.
Download the Nginx config files and open the Lua script.
Find the following line (it should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
Replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
Finally, within the same file, replace the entire function named get_auth_params with the one in this gist: https://gist.github.com/vdel26/9050170
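For orientation, here is a rough sketch of what a "basicauth" credential extractor can boil down to (this is not the code from the gist; it only shows the idea):

-- Pulls the base64-encoded "api_key:api_secret" value out of the Authorization header.
-- With the user_key mode described above, that base64 string itself is the 3scale key,
-- so no decoding is needed.
local function get_basic_auth_user_key()
    local auth_header = ngx.var.http_authorization   -- e.g. "Basic YXBpX2tleTphcGlfc2VjcmV0"
    if not auth_header then
        return nil
    end
    return auth_header:match("[Bb]asic%s+(%S+)")
end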
I hope this approach suits your needs. You can also contact support@3scale.net if you need more help.
