ASP.NET header forwarding not working for external identity provider

I use ASP.NET Identity with Azure AD as an external identity provider in my Blazor Server app. In the development environment (localhost), logging in works fine. When I deploy the app to an on-premises server in a Docker container behind Nginx, it does not. Microsoft returns the error AADSTS50011: The reply URL specified in the request does not match the reply URLs configured for the application. I have added the proper reply URL in the Azure portal. As far as I can tell, the request uses http while https should be used, which causes the error.
Since Nginx terminates TLS, the headers need to be forwarded, so I configured Nginx and enabled header forwarding in Startup.ConfigureServices:
services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.ForwardLimit = 1;
    options.KnownProxies.Add(IPAddress.Parse("123.xxx.xxx.xxx"));
});
and at the very beginning of Startup.Configure:
app.UseForwardedHeaders();
app.UseHsts();
// should not be necessary but I tried with and without
//app.UseHttpsRedirection();
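On the Nginx side, the forwarding is configured roughly like this (a sketch, not my exact config; the upstream address is assumed):
location / {
    proxy_pass http://localhost:5000;  # assumed address of the Docker container
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}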
When I enable logging, the expected headers appear to be forwarded from Nginx:
...
Header: X-Forwarded-For: 123.xxx.xxx.xxx
Header: X-Forwarded-Proto: https
...
To me it looks like ChallengeResult() in ExternalLogin.Post is not using the forwarded headers and sends http://my.domain.ch/signin-oidc instead of the https:// version as the reply URL, which causes the error.
I have run out of ideas about what else to try. Any suggestions?

After some digging I found the mistake: I had added the wrong proxy IP. Since my ASP.NET app runs in Docker, I had to register the Docker-internal address as the proxy, not the IP of the server hosting Nginx and Docker. In fact, I added Docker's default bridge network:
options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("172.17.0.0"), 16));
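For completeness, the relevant part of ConfigureServices then looks roughly like this (a sketch; it assumes using directives for System.Net and Microsoft.AspNetCore.HttpOverrides):

using System.Net;
using Microsoft.AspNetCore.HttpOverrides;

services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
    options.ForwardLimit = 1;
    // Trust Docker's default bridge network; inside the container, the
    // forwarded requests from Nginx originate from this range.
    options.KnownNetworks.Add(new IPNetwork(IPAddress.Parse("172.17.0.0"), 16));
});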

Related

Trouble making HTTP requests from a lighttpd server to a pm2 server

Background:
I have my personal website running on a lighttpd server on my Raspberry Pi. I have that server's port (80) forwarded so it can be accessed publicly.
I'm working on a project for which I want a node.js service that the lighttpd-served site can make requests to. I set up pm2 so the node.js server is always running, and I have that port (5000) forwarded too. I've verified that this server works via Postman and the browser.
Problem:
I'm receiving the following error when making requests:
has been blocked by CORS policy: The request client is not a secure context and the resource is in more-private address space private.
Of note: I have Access-Control-Allow-Private-Network: true in the response header and Access-Control-Request-Private-Network: true in the request header. The only other solution I've found that might fix this is getting an SSL cert for the lighttpd server and serving it over HTTPS, but I'm struggling to set that up to see whether it would work.
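For reference, this is roughly how the header is set on the node.js side (a sketch; Express is an assumption, the question doesn't say which framework the service uses):

// Hypothetical Express middleware answering Chrome's Private Network Access preflight.
app.use(function (req, res, next) {
  res.setHeader('Access-Control-Allow-Origin', '*');
  res.setHeader('Access-Control-Allow-Private-Network', 'true');
  next();
});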
Questions:
Would getting an SSL cert for lighttpd allow me to make requests to my pm2 server?
Is there a different solution?
How secure is this setup? I don't expect a lot of traffic...

Mixed Content Error in ejabberd/XMPP chat server

My site is served over SSL. When I try to call the XMPP chat server, I get this error:
Mixed Content: The page at 'https://localhost:44300/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://192.168.30.1:5280/http-bind/'. This request has been blocked; the content must be served over HTTPS.
How do I add SSL to the ejabberd/XMPP chat server? Please help, I'm new to this.
I think you need the option tls: true and the option certfile: ... Try something like this:
listen:
  -
    port: 5280
    module: ejabberd_http
    request_handlers:
      "/ws": ejabberd_http_ws
      "/bosh": mod_bosh
      "/api": mod_http_api
      "/presence": mod_webpresence
    web_admin: true
    tls: true
    certfile: "/etc/ejabberd/server.pem"
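After restarting ejabberd, a quick way to check that the port now serves TLS (just a sanity check; -k skips certificate verification, which you'll want for a self-signed cert):

curl -k https://192.168.30.1:5280/http-bind/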
The reason you have a Mixed Content issue is not only that you haven't set up SSL on your ejabberd server. The error actually points to something else:
Your site is up and running on HTTPS (https://localhost:44300/).
From this HTTPS page you are trying to access a non-secure resource. By "non-secure" I mean the HTTP endpoint of your ejabberd server (http://192.168.30.1:5280/http-bind/).
That's why you see this error.
How to fix?
You need to access your ejabberd server via its secure (HTTPS) endpoint, so your JS app code should use the URL https://192.168.30.1:5280/http-bind/.
I'm not familiar with jsxc, but I found this getting-started guide: https://github.com/jsxc/jsxc/wiki/Install-jsxc#2-configure
so your config should use HTTPS instead of HTTP, e.g.:
xmpp: {
  url: 'https://192.168.30.1:5280/http-bind/',
  // ...
}
After this, your Mixed Content issue should be resolved.
After that, you will probably face the separate issue that SSL is not set up on your ejabberd server, but that relates to your ejabberd config (see above) and not to your JS app.

HTTPS communication on localhost in IIS using a self-signed certificate

I have 2 sites running on the same machine, a client and an API.
Let's say the computer's IP is 10.10.10.10.
The API has a default page when you browse to it; the rest of the API is under 10.10.10.10/api.
The API has HTTP binding to port 80, and HTTPS binding to port 443.
The client has HTTP binding to port 8080, and HTTPS binding to port 64300.
Both HTTPS bindings use a self-signed certificate I created via IIS Manager.
Both sites have an HTTP-to-HTTPS redirect using "URL Rewrite".
When I browse to either one of the apps, it works fine (the browser shows a certificate warning that you can skip).
When I do some action in the client that involves an HTTP request to the API using one of the following URLs, I get an error:
http://localhost/api/someMethod
http://localhost:80/api/someMethod
https://localhost/api/someMethod
https://localhost:443/api/someMethod
https://10.10.10.10/api/someMethod
The exception includes this error:
"The remote certificate is invalid according to the validation procedure"
I tried the method described in this link (adding the self-signed certificate to the Trusted Root Certification Authorities store), but it didn't work.
Help please :D
Found the answer; posting in case anyone else gets stuck on this.
It's pretty weird, but the only thing that worked was to make the localhost HTTP(S) request using the HOST NAME. Presumably the self-signed certificate was issued to the machine's host name, so validation fails when the request targets localhost or the IP address.
example:
https://the_name_of_the_computer:443/api/someMethod
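A minimal sketch of the client call under that fix (HttpClient is an assumption; the original post doesn't show the calling code):

// Build the request URL from the machine's host name so that it matches
// the subject name on the IIS self-signed certificate.
var host = System.Net.Dns.GetHostName();
using (var client = new System.Net.Http.HttpClient())
{
    client.BaseAddress = new System.Uri("https://" + host + ":443/");
    var response = await client.GetAsync("api/someMethod");
}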

How to connect Nginx, 3scale, and an OpenDaylight controller?

I am using an Ubuntu machine with an Ubuntu guest OS. On the guest OS, I ran my OpenDaylight controller, built topologies with Mininet, and viewed them in the OpenDaylight GUI at localhost:8080. Next, I used the Postman REST API client extension in my Chrome browser to make a GET request to my ODL controller:
localhost:8080/restconf/operational/opendaylight-inventory:nodes/
I got the proper response in XML format. Now I have to pass my request through the Nginx proxy to 3scale and get it authenticated using the app_id and app_key parameters. The request is then to be forwarded to the ODL controller so that I can get the same response.
I have already downloaded the Nginx proxy config files. What modifications must be made to these files? And what request should I enter in Postman to get the same response as before?
You should only need to change the location of the nginx_.lua file referenced in nginx_.conf.
If you want to change the port that Nginx listens on, you will also need to change the listen directive in the server block to your desired port, e.g.:
server {
    lua_code_cache off;
    listen 81;
Also, you will need to ensure that there is an upstream block for your backend, e.g.:
upstream backend_localhost {
    server localhost:8080 max_fails=5 fail_timeout=30;
}
But if you entered this in the proxy configuration wizard, it should already be there.
That should be all that you need to change/check.
The request in Postman should target Nginx instead of the ODL controller and pass in the application credentials, e.g. if Nginx is running on port 81:
localhost:81/restconf/operational/opendaylight-inventory:nodes/?app_id=<YOUR_APP_ID>&app_key=<YOUR_APP_KEY>
Hopefully that clears up any doubts. However, you can always email us at support@3scale.net if you have any further questions, or add comments here.

Why are browser requests not going through my proxy server?

I tried writing a simple proxy in node.js today with a basic HTTP server. In Firefox, when I load the proxy's address directly, I can see a request. However, when I load any other page, it doesn't seem to go through my proxy. I can curl the server and it works fine. So why is the browser not using my proxy?
The code just looks like:
var http = require('http');

var listener = function(request, response) {
    console.log('hi');
    response.write("200");
    response.end();
};

var server = http.createServer(listener);
server.listen(8000, undefined, function() {
    console.log('Server has started on 8000');
});
I'm just looking for something that changes the header of the request, though a reverse proxy would also be cool.
Edit: This is how I'm pointing my browser at my proxy. In Firefox: Preferences -> Advanced -> Network -> Settings.
I tried setting the HTTP proxy under "Manual proxy configuration" to 127.0.0.1:8000. That seems to do something, because all my pages fail to load, but I don't see any activity on my proxy server.
I also tried putting 127.0.0.1:8000 under "Automatic proxy configuration URL", which sends a request when I configure it, but nothing is proxied afterwards. I wonder what kind of response the "automatic" configuration is looking for...
The code you have written isn't a proxy server; it's just an HTTP responder, which is why your curl test "works" but Firefox doesn't.
Looking at an example already online, http://catonmat.net/http-proxy-in-nodejs, you will see that as well as setting up the HTTP server in node, you have to dispatch the HTTP call to the server being proxied and drain that output back into the response to your browser.
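A minimal sketch of that idea (plain HTTP only, no CONNECT/HTTPS support; error handling omitted):

var http = require('http');
var url = require('url');

var server = http.createServer(function(clientReq, clientRes) {
    // A forward proxy receives an absolute URL, so parse the target from it.
    var target = url.parse(clientReq.url);
    var proxyReq = http.request({
        host: target.hostname,
        port: target.port || 80,
        path: target.path,
        method: clientReq.method,
        headers: clientReq.headers  // this is where you could rewrite headers
    }, function(proxyRes) {
        // Drain the upstream response back into the browser's response.
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes);
    });
    clientReq.pipe(proxyReq);
});

server.listen(8000);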
In Firefox, you want to set Manual proxy configuration -> HTTP Proxy: 127.0.0.1, Port: 8000.
Check "Use this proxy server for all protocols".
That works for me :)
Maybe you have another server running on 8000?
To use Charles to capture traffic to localhost, you need to use http://localhost./ (yes, with a dot on the end).
See the documentation here:
http://www.charlesproxy.com/documentation/faqs/localhost-traffic-doesnt-appear-in-charles
