How to set up a hash-based director in Varnish VCL using the hashed request body?

I am trying to set up Varnish to route between backends based on a hash of the request body. I found good examples of using body access to set up caching where the hash of the request body is used as the cache key, but I cannot find an example of using the body hash for routing.
I tried the following, but it doesn't seem to work, probably because bodyaccess was not meant to be used this way. How can I set up a hash-based director in VCL that routes on the hashed request body?
vcl 4.1;

import directors;
import bodyaccess;

backend backend1 {
    .host = "backend1.example.com";
    .port = "80";
}

backend backend2 {
    .host = "backend2.example.com";
    .port = "80";
}

sub vcl_init {
    new xhash = directors.hash();
    xhash.add_backend(backend1);
    xhash.add_backend(backend2);
}

sub vcl_recv {
    set req.backend_hint = xhash.backend(bodyaccess.hash_req_body());
}

There is no real way to retrieve the request body as a string in open source Varnish Cache. The bodyaccess.hash_req_body() function adds the request body to the cache key in the vcl_hash subroutine, but since it returns VOID, it cannot be used as input for a director.
The only realistic way that I'm aware of is vmod_xbody, which is a Varnish Enterprise module. That module has an xbody.get_req_body() function that returns the request body as a string.
See https://docs.varnish-software.com/varnish-cache-plus/vmods/xbody/#get-req-body for more information.
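As a rough illustration, and assuming the same two backends as above, this is roughly what the routing could look like with vmod_xbody on Varnish Enterprise. This is an untested sketch: it assumes the request body is buffered with std.cache_req_body() before xbody.get_req_body() is called, and that the body fits in the chosen buffer size.

import directors;
import std;
import xbody;

sub vcl_init {
    new xhash = directors.hash();
    # The hash director takes a weight per backend
    xhash.add_backend(backend1, 1);
    xhash.add_backend(backend2, 1);
}

sub vcl_recv {
    # Buffer the request body so it can be read as a string (1MB is an example limit)
    std.cache_req_body(1MB);
    # Pick a backend based on the body content (Varnish Enterprise only)
    set req.backend_hint = xhash.backend(xbody.get_req_body());
}

Note that the hash director hashes whatever string you pass to its backend() method, so passing the raw body is enough; you don't have to hash it yourself first.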

Related

'Access-Control-Allow-Origin' missing using actix-web

I'm stuck on this problem where I receive this error every time I make a POST request to my actix-web server.
CORS header 'Access-Control-Allow-Origin' missing
My JavaScript (Vue.js running on localhost:3000):
let data = //some json data
let xhr = new XMLHttpRequest();
xhr.open("POST", "http://localhost:8080/abc");
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onload = () => {
    console.log(xhr.responseText);
};
xhr.send(JSON.stringify(data));
My actix-web server (running on localhost:8080):
use actix_cors::Cors;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() {
    HttpServer::new(move || {
        let cors = Cors::default()
            .allowed_origin("http://localhost:3000/")
            .allowed_methods(vec!["GET", "POST"])
            .allowed_header(actix_web::http::header::ACCEPT)
            .allowed_header(actix_web::http::header::CONTENT_TYPE)
            .max_age(3600);
        App::new()
            .wrap(cors)
            .service(myfunc) // `myfunc` is my handler, defined elsewhere
    })
    .bind(("0.0.0.0", 8080))
    .unwrap()
    .run()
    .await
    .unwrap();
}
My Cargo.toml dependencies:
[dependencies]
actix-web = "4"
actix-cors = "0.6.1"
...
Got any ideas?
Okay, so I've done some testing. If you're writing a public API, you probably want to allow all origins. For that you may use the following code:
HttpServer::new(|| {
    let cors = Cors::default().allow_any_origin().send_wildcard();
    App::new().wrap(cors).service(greet)
})
If you're not writing a public API... well, I'm not sure what they want you to do. I've not figured out how to tell the library to send that header. I guess I will look at the code.
UPDATE:
So funny story, this is how you allow specific origins:
let cors = Cors::default()
    .allowed_origin("localhost:3000")
    .allowed_origin("localhost:2020");
BUT, and oh boy, is that but juicy. The Access-Control-Allow-Origin response header is only set when there is an Origin request header. That header is normally added by the browser in certain cases. So I added it myself (using the developer tools in the browser). What did I get? "Origin is not allowed to make this request". I set my Origin header to localhost:3000. Turns out, the actix library simply discards that header if no protocol was provided (e.g. http://); I assume it discards it if it deems the format invalid. That internally results in the header being the string "null". Which is, checks notes, not in the list of allowed origins.
And now the grand finale:
Your origin header needs to be set to (by either you or the browser): "http://localhost:3000".
Your configuration needs to include: .allowed_origin("http://localhost:3000").
After doing that, the server will happily echo back your origin header in the Access-Control-Allow-Origin header. And it will only send that one.
I've no idea if any of that is what the standard specifies (or not). I encourage you to read through it, and if it doesn't comply, please open an issue on GitHub. I would do it myself, but I'm done with programming for today.
Cheers!

Is it possible to change the response at the proxy level using Varnish?

For example, we have a setup like this:
user -> api gateway -> (specific endpoint) varnish -> backend service
If the backend returns a 500 response with {"message":"error"}, I want to patch this response and return 200 with "[]" instead.
Is it possible to do something like this using varnish or some other proxy?
It is definitely possible to intercept backend errors, and convert them into regular responses.
A very simplistic example is the following:
sub vcl_backend_error {
    set beresp.http.Content-Type = "application/json";
    set beresp.status = 200;
    set beresp.body = "[]";
    return (deliver);
}

sub vcl_backend_response {
    if (beresp.status == 500) {
        return (error(200, "OK"));
    }
}
Whenever your backend fails to respond, which would normally result in an HTTP 503 error, Varnish will send an HTTP 200 response with [] as the body.
This backend-error output template is also triggered when the backend does reply, but with an HTTP 500 error, because of the return (error(200, "OK")) in vcl_backend_response.
In real-world scenarios, I would add some conditional logic in vcl_backend_error to only return the JSON output template when specific criteria are matched, for example when a certain URL pattern is matched.
I would advise the same in vcl_backend_response: maybe you don't want to convert all HTTP 500 errors into regular HTTP 200 responses, so you may want to add conditional logic there as well; see the sketch below.
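A sketch of what that conditional logic could look like, assuming (purely as an example) that the JSON endpoints live under /api/:

sub vcl_backend_response {
    # Only convert errors for the JSON endpoints (example URL pattern)
    if (beresp.status == 500 && bereq.url ~ "^/api/") {
        return (error(200, "OK"));
    }
}

sub vcl_backend_error {
    # Only apply the JSON error template to the same endpoints
    if (bereq.url ~ "^/api/") {
        set beresp.http.Content-Type = "application/json";
        set beresp.status = 200;
        set beresp.body = "[]";
        return (deliver);
    }
}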

Spring WebFlux: Remove WWW-Authenticate header

I am using Spring 5 Webflux with Basic Authentication.
Problem:
When I type a wrong username or password, Spring responds with HTTP status 401 and includes the www-authenticate: Basic realm="Realm" HTTP header, which causes the browser to pop up the Basic Auth dialog.
How can I remove that HTTP header in Spring 5 WebFlux?
Do I have to write a custom WebFilter?
The code below is Kotlin, copied from my project, but the idea can easily be transferred to Java.
The solution is built around a custom WebFilter.
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import org.springframework.web.server.WebFilter
import org.springframework.web.server.WebFilterChain
import reactor.core.publisher.Mono

@Component
class HttpHeaderWebFilter : WebFilter {
    override fun filter(exchange: ServerWebExchange, next: WebFilterChain): Mono<Void> {
        return next.filter(exchange).then(Mono.defer {
            val headers = exchange.response.headers
            if (headers.containsKey("WWW-Authenticate")) {
                headers.remove("WWW-Authenticate")
            }
            Mono.empty<Void>()
        })
    }
}
Alternatively, we can use the following:
if (exchange.getRequest().getHeaders().containsKey("headerKey")) {
    exchange.getRequest().mutate().header("headerKey", null, null);
}
We use the double null to avoid the deprecated overload of header().
If you are using Spring Framework 5.2, a single null is sufficient.
The reason why it sends that header is that the authenticationFailureHandler associated with the default httpBasic() configuration uses the configured entryPoint. If it's not configured, then it uses the default entryPoint (HttpBasicServerAuthenticationEntryPoint) which adds that header to the response.
So, to change this behavior you can set a custom entryPoint, for example:
.httpBasic().authenticationEntryPoint(HttpStatusServerEntryPoint(HttpStatus.UNAUTHORIZED))

Varnish referencing an external file

I am new to Varnish and I am trying to figure out a way for Varnish to reference an external HTML file to serve an error page. Putting the HTML code directly in the synthetic() call would be too complex, as the error page contains a lot of visual elements. Thanks.
If you are using Varnish 4 (or more recent) you can do it with the std vmod.
See the docs: https://www.varnish-cache.org/docs/4.0/reference/vmod_std.generated.html#func-fileread
I think the VCL should look like the following (not tested):
vcl 4.0;

import std;

# other stuff

sub vcl_synth {
    if (resp.status == 404) {
        set resp.http.Content-Type = "text/html";
        synthetic(std.fileread("/etc/varnish/404.html"));
        return (deliver);
    }
}
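For this subroutine to run, something has to generate a synthetic 404 in the first place. A minimal sketch, assuming (purely as an example) that you want to serve the error page for a blocked path decided in vcl_recv:

sub vcl_recv {
    # Example only: return a synthetic 404 for a hypothetical retired path
    if (req.url ~ "^/old-section/") {
        return (synth(404, "Not Found"));
    }
}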

Is it possible for an nginx reverse proxy server to compress the request body before sending it to backend servers?

I was trying to compress request body data before sending it to the backend server.
To achieve this, I added my own module to the official nginx. My module has a rewrite-phase handler, which I want to rewrite the request body. The code is shown below:
static ngx_int_t
ngx_http_leap_init(ngx_conf_t *cf)
{
    ngx_http_handler_pt        *h;
    ngx_http_core_main_conf_t  *cmcf;

    cmcf = ngx_http_conf_get_module_main_conf(cf, ngx_http_core_module);

    h = ngx_array_push(&cmcf->phases[NGX_HTTP_REWRITE_PHASE].handlers);
    if (h == NULL) {
        return NGX_ERROR;
    }

    *h = ngx_http_leap_rewrite_handler;

    return NGX_OK;
}
In the ngx_http_leap_rewrite_handler method, I have the following line:
rc = ngx_http_read_client_request_body(r, ngx_http_leap_request_body_read);
The ngx_http_leap_request_body_read handler is able to compress the request body when data is posted as application/x-www-form-urlencoded, but not when it is posted as multipart/form-data.
Since what I really want to compress is posted files, not form fields, does anyone have any ideas?
