This must be very common knowledge, but I can't seem to get it. I don't even know the keywords to describe it, so I'll use what I understand so far. Here is the situation I'm in:
localhost:5000 ('/') takes you to a base page. I set up localhost/api as a location in nginx that proxies to the same port, so going to localhost/api is the same thing as going to localhost:5000.
Now, as usual, there is some UI with a text box and different buttons. Normally, the form action for one of the buttons goes to the '/run' endpoint and the process continues. But after setting up nginx, I expected localhost/api/run in the browser's URL bar and got localhost/run instead. This is the issue.
Here is what I added to /etc/nginx/sites-available/default:
location /api {
    proxy_pass http://127.0.0.1:5000/;
}
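A note on what is happening here: with proxy_pass http://127.0.0.1:5000/; (trailing slash), nginx strips the /api prefix before forwarding, and it never rewrites the HTML the app returns, so a form whose action is '/run' makes the browser go straight to localhost/run. A minimal sketch of a prefix-preserving variant, assuming the app is (or can be) made aware of the /api prefix, e.g. by using action="/api/run" (that path is only illustrative):

location /api/ {
    # no URI part after the port: the /api prefix is kept, so the
    # backend receives /api/run instead of /run
    proxy_pass http://127.0.0.1:5000;
}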
I must tell you I know little to nothing about nginx (you already guessed that, I suppose); I just know what it is used for. I'd really appreciate a quick and general solution to this, and it would be wonderful if someone could point me to a playground for learning nginx.
I am currently having issues with setting up an HTTPS domain redirect. I have a DNS URL redirect entry that points a few sub-domains to same-server URLs. For example:
docs.kipper-lang.org -> kipper-lang.org/docs/
play.kipper-lang.org -> kipper-lang.org/playground
The issue I am currently experiencing is that the subdomains mostly work, but only over HTTP. If I attempt to use HTTPS (for example https://docs.kipper-lang.org), the redirect won't work and gets stuck, apparently waiting for the HTTPS certificate (I think, but I don't know for sure, since it loads forever and then times out).
So my DNS provider mostly does its job the way I want, but I am not sure how to add HTTPS encryption to these redirects. Is there maybe some DNS configuration, or even a middle-man redirect service I can use, where HTTPS encryption is built in? Receiving a "Warning: Insecure connection" every time someone uses the sub-domains is a massive problem for me.
Note, though, that since I am hosting on GitHub Pages, I am unable to do these redirects on the server side myself, as I can't run any code there.
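As a hedged illustration of what such a "middle-man" redirect with built-in HTTPS does under the hood, here is roughly how a redirect host would look in nginx. This assumes a server you control sitting in front of the subdomain (which, as noted above, GitHub Pages does not offer); the certificate paths are placeholders:

server {
    listen 443 ssl;
    server_name docs.kipper-lang.org;

    # placeholder certificate paths; a real setup needs a certificate covering the subdomain
    ssl_certificate     /etc/letsencrypt/live/docs.kipper-lang.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/docs.kipper-lang.org/privkey.pem;

    # answer the TLS handshake, then bounce to the canonical URL
    return 301 https://kipper-lang.org/docs$request_uri;
}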
I would greatly appreciate any ideas for fixing this or what I could use to achieve this another way.
Thanks in advance!
I have a question. I am a bit confused and don't really understand why this is happening.
I have a website that works well over HTTP. When I force a redirect to HTTPS, something breaks: even if I replace all the URLs in my code, only GET requests work. Does anybody have any idea why this is happening?
I also have an admin part of the website. Logging into the admin works, but making any requests from it doesn't. When I try to POST or DELETE I receive a 401 error, even though I am logged in and have set the token correctly...
So the bottom line is:
Over HTTPS, the website works: it shows all the resources from the DB and I can log into the admin, but I cannot POST or DELETE.
Over HTTP, everything works.
I am in huge need of advice or ideas.
Thanks.
From my experience, you cannot serve mixed content. My first suggestion is to call all your scripts/dependencies without a hard-coded scheme, i.e. change script src="https://blahblah" to script src="//blahblah", so that you stick consistently to one serving source. That's the first thing I'd check (also look at the console logs; they often give hints as to what failed).
Secondly, I am unsure how the server handles traffic from non-HTTPS clients; possibly there's a rule in .htaccess or some other form of redirection trying to force the call via HTTPS, so HTTP fails? These are all debugging steps: you need to troubleshoot and play process of elimination. First, though, I'd make sure everything is served from // or https. When on HTTP I would look at the console logs for clues, but even more so I would force a redirect to use HTTPS exclusively (as most sites do now).
Check for mixed-content issues first, though; this is something that can have a multitude of solutions depending on what is actually causing it.
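The answer above mentions .htaccess, so the site in question may well be behind Apache, but as a sketch of the "force HTTPS exclusively" idea, the equivalent redirect in nginx terms looks roughly like this (server names are placeholders):

server {
    listen 80;
    server_name example.com www.example.com;   # placeholder names
    # send every plain-HTTP request to its HTTPS equivalent
    return 301 https://$host$request_uri;
}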
I know this seems like a question many people have asked before, but I wasn't able to find an answer yet, so I'll ask anyway.
I have a few websites set up on one IP address, which means I need to use SNI. One of the subdomains is mail.domain.tld, which works perfectly fine, and another is cloud.domain.tld, which unfortunately doesn't.
cloud.domain.tld redirects to www.domain.tld when it is up.
Manually typing 'cloud.domain.tld/login' works even when the other websites are up, but I haven't been able to make the subdomain append /login automatically, which is what I want to do.
When I change the name of cloud.domain.tld to mail.domain.tld, leaving the entire config the same, it works.
When I added clou.domain.tld and clouds.domain.tld to my DNS settings and pointed the website at those, it worked too.
So I changed the location / block from
rewrite ^ /index.php$uri; (the default in the ownCloud configuration)
to
return 301 /index.php$uri;
and now this problem has been resolved; however, logging out now returns a CSRF error.
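For context, a sketch of where that line sits in an ownCloud-style nginx server block, with the change applied; the server name, root, and everything else here are assumptions, not the poster's actual config:

server {
    listen 443 ssl;
    server_name cloud.domain.tld;
    # ssl_certificate / ssl_certificate_key and the rest of the vhost omitted
    root /var/www/owncloud;            # assumed web root

    location / {
        # previously: rewrite ^ /index.php$uri;
        return 301 /index.php$uri;
    }
}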
nginx.conf:
server_tokens off;
Why would this get ignored? The header is still sent:
Server: nginx
No, other included config files do not contain server_tokens configuration.
Yes, I did restart all services.
To cite the docs on the server_tokens directive:
Enables or disables emitting nginx version in error messages and in the “Server” response header field.
According to the docs, it thus doesn't prevent the generation of the Server header but only prevents the addition of the exact version. If you want to completely remove the Server header, you could use the ngx_headers_more module.
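To make the distinction concrete (note that more_clear_headers is a directive from the third-party ngx_headers_more module, so it only works if that module is compiled in or loaded dynamically):

# in the http {} context of nginx.conf
server_tokens off;            # header becomes just "Server: nginx"; only the version is hidden
more_clear_headers Server;    # removes the Server header entirely (requires ngx_headers_more)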
"The setting works as documented"
The above is kinda insane... (Sorry Hulgar Just, but if you don't understand the rant you should probably not answer.)
Whether nginx needs to broadcast its version and the server OS, basically ever outside of debug situations, shouldn't actually be a question; nor should people who want to stop that insane behavior be a problem to anyone who knows anything about infosec.
As it stands, site failures, even with the "feature" enabled, result in disclosure of information that is unnecessary for visitors. The absolute best you can do is disable it in all your site configs, but when they die you still have a problem. Patching is the only way at the moment, sadly...
I have my production site's app pool set to recycle every 2 hours or so. I noticed that when the first call to the site is made, the app pool caches the base URL (e.g. www.mysite.com). This makes sense, as it is used to resolve relative paths in ASP.NET, e.g. ~/MyFolder/MyPage.aspx, which is resolved to:
http://www.mysite.com/MyFolder/MyPage.aspx
However, since the site can also be reached via our host's name, e.g.
http://masdfg.my.provider.net
IIS thinks the URL is
http://masdfg.my.provider.net/MyFolder/MyPage.aspx
As you can imagine, this is causing an issue with SSL, as well as other problems. How can I prevent this from happening?
UPDATE: The workaround was to create a URL redirect. If anyone knows how to prevent this, let me know.
I hope I've understood your question correctly, but please do let me know if I haven't.
It sounds like the sole issue you have is that when you write the links to the response they sometimes reference the wrong root URL.
I notice that you use ~/. I think this resolves and writes the entire URL to the response. It is better to use only / when writing links to the response.
So in your example you would write /myfolder/mypage.aspx. The browser would then resolve the / to mean that it's from the root address of the site, whichever that may be.
Like I said, I hope I've understood your question correctly and apologies if I haven't.
I know it's a long shot, but I've had a similar problem with my IIS setup. I solved it by going to the already mentioned "bindings" window through "Edit Bindings".
Then I removed all the unwanted bindings and added the hostname www.mydomain.com that the server should answer to.
Finally I edited the windows hosts file at
%windir%\System32\drivers\etc\hosts
Adding the line
127.0.0.1 www.mydomain.com
This ensures that www.mydomain.com always resolves to the local computer.
After executing iisreset.exe as administrator, my problem was solved.
HttpContext.Current.Request.Url is not a cacheable item. That value comes from the HOST value of the HTTP headers, which means it is passed in to the application with each request.
The only time it should take that second URL is if the request's HOST value was masdfg.my.provider.net.
There are three possible fixes here. The first is to set your bindings and have any requests to masdfg.my.provider.net forwarded over to www.mysite.com.
The second, because your primary issue appears to be about SSL, is to get a unified communications (UC) SSL certificate and install it on your server. This would cover both the mysite.com and masdfg.my.provider.net domain names.
The third is to simply create a separate IIS site that points to the exact same production directory as the first one. Each site would be responsible for only one domain name.