Why is my custom header not present sometimes? - http

How do I get my custom header all the way to my Rails application when running behind nginx and Phusion Passenger? It turns out this is possible (see the answer below), but when I simply define the header in Paw's headers pane, it never arrives.
I am using Paw to test and develop some API endpoints in a Rails application. Everything works as expected in my development environment, which is a Rails 6 application running on macOS using Puma. For security, I use a custom header that contains a personal auth token. When I examine the Rails request object, specifically request.headers, I am able to see all the headers including my custom header and I can authenticate based on its value.
The problem comes when running on my staging system, where I have very little control over the environment. There, the same Rails application runs under Phusion Passenger behind nginx. When I hit the same endpoint with the same request, changing only the host, the custom header is not present. I verified this by writing all headers to a file for every request in staging.
Where has the header gone? Because the staging environment is different, I suspect that nginx or Phusion Passenger is receiving the header but not passing it through to my Rails application. I can't verify this, since I have no access to any logs other than my Rails application's own. The application is designed to receive requests from an external service, so I sent some requests through that service, and the header was present. That is very strange: some headers are being passed through and some are not.
Paw (header defined in the headers pane).
I also checked with cURL:
curl -X POST 'https://example.com/ivr/main_menu' -H 'X_JSW_AUTH_TOKEN: my_tkn'
With Ruby's Net::HTTP:
require 'net/http'
require 'openssl'

uri = URI('https://example.com/ivr/main_menu')
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
req = Net::HTTP::Post.new(uri)
req.add_field "X_JSW_AUTH_TOKEN", "my_tkn"
res = http.request(req)
With HTTPie:
http POST 'https://example.com/ivr/main_menu' X_JSW_AUTH_TOKEN:my_tkn
With the http gem from httprb:
resp = HTTP.headers(X_JSW_AUTH_TOKEN: "my_tkn").post("https://example.com/ivr/main_menu")

It turns out the answer has nothing to do with Paw; cURL, HTTPie, and Net::HTTP all produce the same result as Paw, which is pretty solid evidence of that.
The problem is that, by default, nginx silently drops headers whose names contain underscores. See "Why do HTTP servers forbid underscores in HTTP header names" for more information.
The http gem (httprb), the one client that did deliver the header to my application, succeeded because it replaces underscores ("_") with hyphens ("-") in header names, so the header made it past nginx. Rails then replaces hyphens in request header names with underscores. So both the sender (the http gem) and the receiver (Rails) were making substitutions behind the scenes, which made this much harder to troubleshoot.
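If you have any control over the nginx configuration, a minimal sketch of a fix (assuming a standard nginx/Passenger setup) is to tell nginx to accept underscored header names:

# nginx: accept header names containing underscores
# (valid in the http or server block)
http {
    underscores_in_headers on;
    # ... existing configuration ...
}

Alternatively, renaming the header to X-JSW-AUTH-TOKEN sidesteps the behavior for every client and is the safer long-term fix.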
Many thanks to Chris Oliver for the answer.

Related

event espresso CORS is wrong

I was hired to write a WordPress plugin which involves an AJAX request to the website's Event Espresso API.
I got it working fine locally (calling the live site's API from my local server), but when I activate the plugin on the live site, it throws:
Failed to load http://example.com/wp-json/ee/v4.8.36/events: The
'Access-Control-Allow-Origin' header has a value 'http://opt.local'
that is not equal to the supplied origin. Origin
'http://www.example.com' is therefore not allowed access.
My local domain is "http://opt.local", and the live site is http://example.com.
This error suggests to me that it only wants to allow access from my local setup, and not from the live site, which isn't even cross-origin! Maybe I caused it to cache the wrong thing in development?
A few more tests revealed that the CORS settings are correct for everything except the specific route I need.
> curl -I "http://example.com/wp-json"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36"
Access-Control-Allow-Origin: http://example.com
> curl -I "http://example.com/wp-json/ee/v4.8.36/events"
Access-Control-Allow-Origin: http://opt.local
I was able to make it work by using ee/v4.8.35 (a lower API patch version), but hopefully there is a better solution.
I helped develop the EE4 REST API.
Yeah, it sounds like an issue where the web server or a proxy is caching the Access-Control-Allow-Origin header.
There's no code in the EE4 REST API that controls that header; it's actually handled by the WP API (on which the EE4 REST API is built).
The relevant code is in wp-includes/rest-api.php in the function rest_send_cors_headers(). That calls get_http_origin(), whose value can be filtered using the filter http_origin.
So you might want to try adding something like
function my_plugin_force_correct_http_origin($http_origin) {
    return 'http://example.com';
}
add_filter('http_origin', 'my_plugin_force_correct_http_origin');
That will ensure the PHP code sends the correct Access-Control-Allow-Origin header.
If that doesn't resolve the issue, I would verify rest_send_cors_headers() is getting called at all (you could temporarily put a line like echo 'called rest_send_cors_headers!';die; inside that function to check).
If it is getting called, and my suggested filter doesn't help, you could try tagging your question with 'wordpress-rest-api'. Also, I would be curious to see if http://example.com/wp-json/ee/v4.8.36/events?limit=50 has the same problem.

Building URLs in Go including server scheme

I am creating a REST API in Go, and I want to build URLs to other resources in my replies.
Based on the http.Request I can get the Host and URL.
However, how would I go about getting the transport scheme used by the server? http or https?
I attempted to check whether server.TLSConfig is nil and then to assume http, since the documentation for http.Server says:
TLSConfig *tls.Config // optional TLS config, used by ListenAndServeTLS
But it turns out this exists even when I do not run the server with ListenAndServeTLS.
Or is this way of building my URLs the wrong way of doing things? Is there some other normal way of doing this?
My preferred solution when running both http and https is to run a simple listener on :80 that redirects all traffic to https. Then any real traffic can be assumed to be https.
Alternately, you can inspect the request itself: on the server side req.URL.Scheme is typically empty, but req.TLS is non-nil exactly when the request arrived over TLS.
Or do you mean for the entire application? If you accept configuration to switch between http and https, then can't you look at that and see which they chose? I guess I'm missing some context maybe.
It is also common practice for apps to take a baseURL via flag or config to generate external urls with.
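Putting those pieces together, here is a minimal sketch of the base-URL approach (the handler and route names are illustrative):

package main

import (
	"fmt"
	"net/http"
)

// baseURL reconstructs the scheme://host prefix for building absolute
// URLs in responses. Behind a TLS-terminating proxy you would consult
// a header such as X-Forwarded-Proto instead of r.TLS.
func baseURL(r *http.Request) string {
	scheme := "http"
	if r.TLS != nil { // non-nil only when the request arrived over TLS
		scheme = "https"
	}
	return scheme + "://" + r.Host
}

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "see also: %s/other/resource\n", baseURL(r))
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}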

Nginx Lua Scripting

One of the really cool things about Nginx is that you can take control of what it does by injecting Lua script at various phases of request processing. I have successfully used the rewrite_by_lua/file directive to examine the body of the incoming request and inject extra request headers for downstream processing by PHP. e.g.
location /api {
    rewrite_by_lua_file "/path/to/rewrite.lua";
    lua_need_request_body on;
}
and then in rewrite.lua
local uri = ngx.var.request_uri;
-- examine the URI and inject additional headers
ngx.req.set_header('headerName', 'headerValue');
What I can also do at this stage is inject response headers. For example
local cookieData = "cookieName=value;path=/;";
ngx.header['Set-Cookie'] = cookieData;
No issues thus far but this is not quite what I want to do. The workflow I have in mind goes like this
Examine the URI & inject extra request headers.
My PHP scripts examine the extra request headers, process the incoming data as required and may inject extra response headers
I then want to examine the response headers in another Nginx Lua script and inject cookies at that stage.
Injecting extra response headers via my PHP script is no problem at all. I thought that in order to examine those extra headers I would just need to setup
location /api {
    header_filter_by_lua_file "path/to/header.lua";
}
with header.lua doing things like
local cookieData = "cookieName=value;path=/;";
ngx.header['Set-Cookie'] = cookieData;
The principle sounds perfect. However, I find that my header_filter_by_lua_file directive is simply ignored; no errors are reported in the Nginx log when I reload the configuration.
I must be doing something wrong here, but I cannot see what it might be. I am using nginx 1.6.2 on Ubuntu 14.10 (x64), with Nginx installed via apt-get install nginx-extras.
I'd be most grateful to anyone who might be able to explain how to get header_filter_by_* functioning correctly.
It's not a feature of vanilla nginx. You need to install OpenResty (instead of nginx).
See http://openresty.org/#Download and then http://openresty.org/#Installation
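For reference, once the Lua module is available, a header-filter script along these lines matches the workflow described in the question (the X-App-Token header name is a hypothetical stand-in for whatever the PHP backend actually sets):

-- /path/to/header.lua, wired up via header_filter_by_lua_file.
-- Read a response header set by the PHP backend...
local token = ngx.resp.get_headers()["X-App-Token"]
if token then
    -- ...and turn it into a cookie on the way out.
    ngx.header["Set-Cookie"] = "appToken=" .. token .. ";path=/;"
end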

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to grow, we're looking to use 3scale to handle authentication, rate limiting, and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak in your Nginx configuration files. As you were rightly pointing out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
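Concretely, the value to import for each application would be the Basic-auth-style encoding of the two parts joined by a colon, along these lines:

echo -n 'api_key:api_secret' | base64   # => YXBpX2tleTphcGlfc2VjcmV0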
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170
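For orientation, the heart of that replacement is decoding the Basic credentials, roughly like this (a sketch, not the gist's exact code):

-- Inside get_auth_params: recover the key from the Authorization header.
local auth = ngx.var.http_authorization           -- e.g. "Basic YXBpX2tleTphcGlfc2VjcmV0"
local encoded = auth and auth:match("Basic%s+(.+)")
if encoded then
    local user_key = ngx.decode_base64(encoded)   -- yields "api_key:api_secret"
    -- hand user_key to 3scale as the single API key
end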
I hope this approach suits your needs. You can also reach out at support#3scale.net if you need more help.

JIRA: Is there a way to force HTTP Basic Authentication with standalone (tomcat) jira?

We have Jira 5.x running in the standalone variant (embedded Tomcat). We'd like to prevent any request without a valid HTTP Basic header from reaching the Jira application; in other words, force JIRA to use HTTP Basic authentication. Yes, I know that transmitting HTTP Basic credentials over the wire without TLS isn't secure, but we don't have an SSL certificate anyway, so that doesn't matter (it doesn't make things worse than they already are).
I read that Jira handles HTTP Basic Authentication headers if it gets them, and that appending ?os_authType=basic to the URL makes Jira behave as we wish, but we'd like Jira to enforce HTTP Basic. We wouldn't mind some kind of "even-before-Jira" login statically configured in Tomcat, as long as the Jira application is not reachable from outside without it.
Is there a way to achieve this?
I tried adding:
<login-config>
    <auth-method>BASIC</auth-method>
</login-config>
to Jira's web.xml, but that didn't help.
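For what it's worth, a <login-config> on its own does not protect any resources; container-managed auth in Tomcat also needs a <security-constraint> covering the URLs and a realm that defines the users. A hedged sketch of the missing pieces (the jira-users role is hypothetical and must exist in the configured Tomcat realm, and container auth may interact badly with Jira's own login):

<security-constraint>
    <web-resource-collection>
        <web-resource-name>Entire application</web-resource-name>
        <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <!-- hypothetical role; define it in the Tomcat realm -->
        <role-name>jira-users</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>JIRA</realm-name>
</login-config>
<security-role>
    <role-name>jira-users</role-name>
</security-role>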
