Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a legacy API which grants access via a key/secret combo, which the original API designer specified should be passed as the username & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to grow, we're looking to use 3scale to handle authentication, rate limiting, and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. Our existing app needs to continue receiving the key & secret in the existing manner, as some of the returned data is client-specific. However, I need a way of converting that HTTP Basic auth request, which 3scale doesn't natively support as an authentication method, into the rewritten custom headers that it does support.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale generates for you. This allows the -u key:secret to be passed through to our server and correctly processed. At the moment, though, I need to additionally supply the same authentication information either as query params or as custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
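For context, the decoding involved is just Base64 over a "key:secret" pair. A minimal sketch of that step (in Go, purely illustrative):
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

func main() {
	// What curl -u api_key:api_secret sends in the request:
	header := "Basic YXBpX2tleTphcGlfc2VjcmV0"

	raw, _ := base64.StdEncoding.DecodeString(strings.TrimPrefix(header, "Basic "))
	key, secret, _ := strings.Cut(string(raw), ":") // split on the first colon
	fmt.Println(key, secret)                        // api_key api_secret
}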

Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak in your Nginx configuration files. As you were rightly pointing out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the Base64 encoding of the combined "api_key:api_secret" string, which is exactly the token your clients already send.
This way the changes you will need to do to the configuration files will be fewer and simpler.
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire "get_auth_params" function with the one in this gist: https://gist.github.com/vdel26/9050170 (see the sketch after these steps for the idea behind it)
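The gist itself isn't reproduced here, but the idea is that with user_key mode the credential 3scale checks can be the raw Base64 token your clients already send. A sketch of how the value to import for each application could be computed (illustrative, not 3scale's actual code):
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	apiKey, apiSecret := "api_key", "api_secret"
	// Byte-for-byte the token a client sends with curl -u api_key:api_secret,
	// so the Lua script can pass it straight through as the 3scale user_key.
	userKey := base64.StdEncoding.EncodeToString([]byte(apiKey + ":" + apiSecret))
	fmt.Println(userKey) // YXBpX2tleTphcGlfc2VjcmV0
}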
I hope this approach suits your needs. You can also contact support at support#3scale.net if you need more help.

Related

Why is my custom header not present sometimes?

How do I get my custom header all the way to my Rails application when it runs behind nginx and Phusion Passenger? It is possible (see details below), but when I simply define the header in Paw's headers pane it is not passed through.
I am using Paw to test and develop some API endpoints in a Rails application. Everything works as expected in my development environment, which is a Rails 6 application running on macOS using Puma. For security, I use a custom header that contains a personal auth token. When I examine the Rails request object, specifically request.headers, I am able to see all the headers including my custom header and I can authenticate based on its value.
The problem comes when running on my staging system, where I have very little control of the environment. Here, the same Rails application is running under Phusion Passenger behind nginx. When I hit the same endpoint with the same request, just changing the host in the request, the custom header is not present. I verified this by writing all headers to a file for every request in staging.
Where has the header gone? Because the environment is different in staging, I suspect that nginx or Phusion Passenger is receiving the header but not passing it through to my Rails application. I can't verify this, since I have no access to any logs other than my Rails application's own. The application is designed to get requests from an external service, and when I send some requests through that service the header is present. So, strangely, some headers are being passed through and some are not.
I tested with several clients. In Paw, the header is defined under the headers pane.
I checked with cURL using:
curl -X POST 'https://example.com/ivr/main_menu' -H 'X_JSW_AUTH_TOKEN: my_tkn'
With Ruby's Net::HTTP:
require 'net/http'
require 'openssl'
uri = URI('https://example.com/ivr/main_menu')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
req = Net::HTTP::Post.new(uri)
req.add_field "X_JSW_AUTH_TOKEN", "my_tkn"
res = http.request(req)
With HTTPie:
http POST 'https://example.com/ivr/main_menu' 'X_JSW_AUTH_TOKEN:my_tkn'
With the http gem from httprb:
resp = HTTP.headers(X_JSW_AUTH_TOKEN: "my_tkn").post("https://example.com/ivr/main_menu")
It turns out the answer has nothing to do with Paw. There's pretty solid evidence of this, given that cURL, HTTPie, and Net::HTTP all produce the same result as Paw.
The problem is that nginx, by default, silently drops request headers whose names contain underscores (the underscores_in_headers directive defaults to off). See "Why do HTTP servers forbid underscores in HTTP header names" for more information.
The http gem, which was able to get the header through to my application, did so because it replaces underscores ("_") with hyphens ("-") in header names, so those headers made it past nginx. Rails then replaces hyphens in the header names of requests with underscores. So both the sender (the http gem) and the receiver (Rails) were making substitutions behind the scenes, which made this much harder to troubleshoot.
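As a minimal client-side workaround sketch (in Go for illustration): send the header with hyphens instead of underscores, and nginx will pass it through untouched.
package main

import "net/http"

func main() {
	req, _ := http.NewRequest(http.MethodPost, "https://example.com/ivr/main_menu", nil)
	// Hyphens instead of underscores: nginx drops underscore headers by default
	// (underscores_in_headers off), but passes this one through, and Rails will
	// expose it as HTTP_X_JSW_AUTH_TOKEN in request.headers.
	req.Header.Set("X-Jsw-Auth-Token", "my_tkn")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}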
Many thanks to Chris Oliver for the answer.

Sending passwords through GET request

I have an app that requires users to enter database passwords. These passwords will not be saved on the server, and the server does not need to remember anything about the database after the request. It's my understanding that most web servers will log GET requests, and the browser can as well (does it do this even for fetch() requests?). I do not want to put databases at risk, but I also understand that you should not use a body in GET requests.
I am also not creating resources, so from what I also understand, I should not be using a POST request. Is there a safe way to send a GET request with the password (over HTTPS) that makes sure it is not logged on the server? This app could be started by anyone, so I have no idea what their server configuration might be, and I couldn't specifically configure any one server to not log it.
so from what I also understand, I should not be using a POST request.
Your understanding is incorrect. POST is the appropriate verb when none of the other verbs make sense. From the spec:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
Put simply, that means POST does whatever the service says it does. It isn't safe, idempotent, or cacheable so there are disadvantages to just using it for everything, but the intent is for it to be the catch-all verb.
You should not use GET because, as you mentioned, you should not include a body and URLs often get logged, which would expose your credentials.
If the client for your app is going to be a browser, you can just use HTTPS-only cookies to handle the authentication flow. If you want to extend it to, or use it from, any other type of client, you can use the Authorization HTTP header.
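As a rough sketch of the POST approach (hypothetical endpoint; the point is that the password travels in the body, not the URL):
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
)

func main() {
	// The body of a POST is not written to standard access logs,
	// unlike the query string of a GET URL.
	payload, _ := json.Marshal(map[string]string{"db_password": "s3cret"})
	resp, err := http.Post("https://app.example.com/db/connect",
		"application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}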

Extending artifactory's pypi repos with plugins

I am trying to migrate a legacy system to use artifactory. However, I have two blockers:
the old scripts require PyPI's XML-RPC API, which artifactory doesn't support
they also make use of upload_docs, not supported by artifactory's pypi implementation either
a smaller issue: the old scripts call register and expect a 200 instead of a 204 HTTP status code.
Would it be possible for me to write a plugin to implement this?
Looking at https://www.jfrog.com/confluence/display/RTF/User+Plugins I couldn't find a callback for when POST /api/pypi/<index-name> is requested.
If I can make that endpoint work for the methods we actually use, just pretending it deployed the docs and responding with the correct status code, I will be happy enough.
As you say, there is no plugin hook for the Pypi API endpoints. It would be possible to use the altResponse endpoint to customize artifact downloads, but then you would be restricted to GET requests with no request body, which is also not a good option for you.
I think the most viable approach would be to define a custom executions endpoint. With this, you can specify the acceptable method, read the body, and set your own response code and body. The main shortcoming with this is that you can't customize the path (it's always /api/plugins/execute/[execution_name]), but this can be worked around.
Execution endpoints can take params in the following form:
/api/plugins/execute/[execution_name]?params=[param_name]=[param_val]
Say your plugin takes a param path, which represents the API path your old scripts are going to call. Then you can set your base URL to /api/plugins/execute/[execution_name]?params=path=/, so that the API path is appended to the param. Alternatively, you can use nginx or another reverse proxy to rewrite the original API path to this form.
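To make the shape of such a call concrete, here is a hypothetical sketch (the execution name pypiShim is invented) of what a rewritten legacy request would look like:
package main

import (
	"bytes"
	"net/http"
)

func main() {
	// The legacy script's original API path rides along in the params
	// query string, where the plugin can read it back out.
	url := "http://artifactory.example.com/api/plugins/execute/pypiShim" +
		"?params=path=/api/pypi/my-index"
	body := bytes.NewBufferString(`<?xml version="1.0"?><methodCall>...</methodCall>`)
	resp, err := http.Post(url, "text/xml", body)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}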
(Since you'll be using XML-RPC, I don't suppose you'll need to worry about any of this path stuff, but I'm including it anyway for completeness.)
Some issues with this approach:
Execution endpoints only allow String responses, so sending binary data in the response body might be finicky. However, no such limitation exists for the request body.
If you need more than one request method, you'll need more than one execution endpoint. This means you'll need to use a reverse proxy to rewrite each method to a separate endpoint (see the sketch after this list). Again, since XML-RPC just uses POST, this probably won't be an issue for you.
Execution endpoints can't customize response headers. Therefore, if your scripts expect a particular Content-Type or other header, you'll need to use a reverse proxy to insert it into the response.
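A sketch of that method-routing rewrite, done here with a small Go reverse proxy instead of nginx (endpoint names invented):
package main

import (
	"net/http"
	"net/http/httputil"
)

func main() {
	// One execution endpoint per method, since each endpoint
	// accepts only a single HTTP method.
	endpoints := map[string]string{
		"POST": "pypiShimPost",
		"GET":  "pypiShimGet",
	}
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			req.URL.Scheme = "http"
			req.URL.Host = "artifactory.example.com"
			// Preserve the original path via params before overwriting it.
			req.URL.RawQuery = "params=path=" + req.URL.Path
			req.URL.Path = "/api/plugins/execute/" + endpoints[req.Method]
		},
	}
	http.ListenAndServe(":8081", proxy)
}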

Building URLs in Go including server scheme

I am creating a REST API in Go, and I want to build URLs to other resources in my replies.
Based on the http.Request I can get the Host and URL.
However, how would I go about getting the transport scheme used by the server? http or https?
I attempted to check if server.TLSConfig is nil, assuming plain http if so, since the documentation for http.Server says:
TLSConfig *tls.Config // optional TLS config, used by ListenAndServeTLS
But it turns out this exists even when I do not run the server with ListenAndServeTLS.
Or is this way of building my URLs the wrong way of doing things? Is there some other normal way of doing this?
My preferred solution when running http and https is just to run a simple listener on :80 that redirects all traffic to https. Then any real traffic can be assumed to be https.
Alternately, you can check whether req.TLS is non-nil on the incoming request to see if it arrived over TLS (req.URL.Scheme is typically empty for server-side requests, so it isn't reliable for this).
Or do you mean for the entire application? If you accept configuration to switch between http and https, then can't you look at that and see which they chose? I guess I'm missing some context maybe.
It is also common practice for apps to take a base URL via flag or config to generate external URLs with.
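A minimal sketch of the per-request approach (assuming no reverse proxy terminates TLS in front of the app; if one does, you'd consult X-Forwarded-Proto instead):
package main

import (
	"fmt"
	"net/http"
)

// baseURL reconstructs the scheme and host of the incoming request.
func baseURL(r *http.Request) string {
	scheme := "http"
	if r.TLS != nil { // non-nil only when the connection was served over TLS
		scheme = "https"
	}
	return scheme + "://" + r.Host
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, baseURL(r)+"/other/resource")
	})
	http.ListenAndServe(":8080", nil)
}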

How do you use .pem files to authenticate a WCF request?

I'm trying to utilize the Amazon Product Advertising API. They provided me with a .wsdl file which I consumed and generated wrapper classes for via Visual Studio 2008's "Add Service Reference" option. This wrapper class works just fine as is and I've been successfully sending requests and receiving responses from Amazon.
However, they are now requiring that all partners start authenticating their requests. They have provided me with two .pem files (one which they call my X.509 certificate file, and one which they call my private key file). I'm not entirely sure what to do with these files. Amazon states the following:
Each SOAP request must be signed with the private key associated with the X.509 certificate. To create the signature, you sign the Timestamp element, and if you're using WS-Addressing, we recommend you also sign the Action header element. In addition, you can optionally sign the Body and the To header element
I realize that much more information may need to be provided here, so please let me know if I need to provide further detail in order to get an answer to this question.
Check out this article --> http://www.byteblocks.com/post/2009/06/15/Secure-Amazon-Web-Service-Request.aspx
Looks like it should help you out.
Other links that might help:
1) http://developer.amazonwebservices.com/connect/thread.jspa?messageID=132705
