I'm currently implementing Firebase to send web push notifications on a webapp that is not fully HTTPS. As web push relies on service workers, I did some research, and on the Google Developers website it is stated:
You can only register service workers on pages served over HTTPS, so we know the service worker the browser receives hasn't been tampered with during its journey through the network.
Does this mean it is possible to register a service worker from an HTTPS page even though the rest of the website is HTTP?
Thanks in advance for any clarification!
EDIT
I've found this conversation on the W3C GitHub which says that the service worker should be served over HTTPS. From what I understand, it is possible to have other HTTP pages as long as communication with the service worker happens over HTTPS; am I getting this right?
In the debate about the specification, @jyasskin said:
If the page requesting the SW isn't secure, and the SW is https but on an attacker-controlled domain, you haven't gained anything at all. Yes, the whole app will need to be https.
But this was said before the service worker specification was completely settled, so I'm not sure whether that is how it was finally specified.
I've opened an issue directly on GitHub about this particular question.
They were quick to respond that it is possible to register from an HTTPS page even though the rest of the website is HTTP, as long as it's in a secure context.
The browser will treat the HTTPS and HTTP pages as two different websites, though, and the service worker will not be able to control the HTTP one.
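For reference, a minimal registration sketch (assuming the worker script lives at /firebase-messaging-sw.js, the conventional name for the Firebase messaging worker; adjust the path to your setup):

    // Only attempt registration from a secure context; on a plain-HTTP
    // page (other than localhost) the call below is rejected.
    if (window.isSecureContext && 'serviceWorker' in navigator) {
      navigator.serviceWorker.register('/firebase-messaging-sw.js')
        .then(function (registration) {
          console.log('Service worker registered, scope:', registration.scope);
        })
        .catch(function (err) {
          console.error('Service worker registration failed:', err);
        });
    }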
Related
I have a web site deployed under App Services on Azure. It was working well until a couple of days ago, but for the last two days I have been receiving the error below:
502 - Web server received an invalid response while acting as a gateway or proxy server.
There is a problem with the page you are looking for, and it cannot be displayed. When the Web server (while acting as a gateway or proxy) contacted the upstream content server, it received an invalid response from the content server.
When I access the site via xyz.azurewebsites.net it doesn't show this error; it only appears when I access it with the custom domain name. I have enabled Failed Request Tracing, but the FREB logs don't show any 502 errors.
Can anyone help me understand what the issue is?
Thanks
Azure WebApps have ARR servers (aka the FrontEnd) in front of the actual machines serving the web applications (known as the workers), and HTTP 502 is typically returned by the FrontEnd servers under these conditions:
- The request is taking a really long time to execute on the actual worker machine.
- The worker process corresponding to the web application is not running, or is crashing repeatedly.
Since the HTTP 502 error is happening on the FrontEnd servers, you won't see it in the IIS logs; those logs come from the worker. So in your case either the request is taking too long to execute, or the worker process serving the request (on the actual worker machine) is crashing.
Since you mentioned that things work when you access the site via xyz.azurewebsites.net but not over the custom domain, I would suggest checking a few things:
- Make sure that your custom domain really resolves to the right xyz.azurewebsites.net. Try www.digwebinterface.com: enter your custom domain there and confirm that it resolves to xyz.azurewebsites.net.
- Check whether your code does anything special for requests that arrive on your custom domain, meaning any extra processing like database lookups, or URL Rewrite rules being triggered, only for requests to the custom domain (see the sketch at the end of this answer).
- Also check your FREB logs to see if you can spot any long-running requests where the hostname contains the custom domain. You can use the FREB Viewer under the Support Portal for your WebApp to check this easily.
- Check whether the WebApp is crashing by going to the Event Viewer under the Support Portal for your WebApp and looking for any crash-related events.
You can reach the Support Portal of your WebApp by going to https://xyz.scm.azurewebsites.net/Support (where xyz is the name of your Azure WebApp).
If this doesn’t help, then I recommend engaging Microsoft Support as they can check some of these things easily at their end.
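To illustrate the second check above, here is a hypothetical sketch (Node/Express; the host name and resolveTenant are made up for the example) of the kind of host-dependent code path that can stall requests on the custom domain while xyz.azurewebsites.net stays healthy:

    const express = require('express');
    const app = express();

    // Stub standing in for a slow per-tenant database lookup.
    function resolveTenant(host) {
      return new Promise(function (resolve) { setTimeout(resolve, 50); });
    }

    // Hypothetical middleware: extra work that only runs when the request
    // arrives on the custom domain. If resolveTenant() hangs, the FrontEnd
    // gives up waiting on the worker and returns 502, but only for the
    // custom domain.
    app.use(function (req, res, next) {
      if (req.hostname === 'www.mycustomdomain.com') {
        resolveTenant(req.hostname).then(function () { next(); }, next);
      } else {
        next();
      }
    });

    app.listen(process.env.PORT || 3000);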
I have some JS on an intranet application that runs on HTTP (this server/service is out of my control; it is run by the customer). I operate the internet application, and it must run on HTTPS for security purposes.
I'm attempting to use XDomain but I'm finding that the cookies aren't being sent. Is the problem that I'm going intranet to internet or that I'm going HTTP to HTTPS or some configuration problem?
I keep getting 401 when checking authentication of the user even after they have logged in.
I've verified the backend/internet service works as expected via a jsfiddle (i.e. Access-Control-Allow-Origin, etc. are all correct).
Thanks!
There are some security-related issues with XDomain that make it strip any cookies, according to point 5 in this MSDN blog. However, there is also a workaround using a proxy, with an example project on GitHub. I think everything you need to make it work is described in those two pages.
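If the linked workaround is the xdomain iframe/postMessage library, the general shape is roughly the following; this is a sketch from memory (origins and file names are illustrative), so check the example project for the exact API:

    // On the HTTPS service, host a proxy.html that whitelists the HTTP
    // master origin:
    //   <script src="xdomain.min.js"></script>
    //   <script>xdomain.masters({ 'http://intranet.example.com': '*' });</script>

    // On the HTTP intranet page, point xdomain at that proxy:
    xdomain.slaves({ 'https://api.example.com': '/proxy.html' });

    // Ordinary XHR to that origin is then replayed by the proxy iframe
    // inside the HTTPS origin, so that origin's cookies ride along instead
    // of being stripped as they are with XDomainRequest.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://api.example.com/auth/check');
    xhr.onload = function () { console.log(xhr.status, xhr.responseText); };
    xhr.send();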
I have a Web API application that I am considering moving to HTTPS. The reason is really just the initial login, where I would like to hide the username and password.
Once logged in, do all other calls from the pages also need to be HTTPS? For example, do my calls to CSS and scripts need to travel over HTTPS? How about Web API calls?
When referencing HTTP content from HTTPS pages, some user agents will issue "mixed content" or "insecure content" warnings to the user; others may block the content (older versions of IE do that). GitHub solved this issue using nginx as a reverse proxy, so it serves the static content over HTTPS.
If you are only worried about the authentication, and it is cookie-based, you can do the authentication over HTTPS and then go back to HTTP. The cookie will be shared as long as it is not marked as Secure. Remember that both the GET request retrieving the login form and the POST sending it must be HTTPS for the login to be secure.
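As a sketch of that cookie setup server-side (assuming an Express-style backend; cookie and route names are illustrative):

    const express = require('express');
    const app = express();

    // Both the GET serving the login form and this POST must be HTTPS.
    app.post('/login', function (req, res) {
      // secure: false means the cookie is also sent on plain-HTTP pages,
      // which is what makes the HTTPS-login/HTTP-pages scheme work, and
      // also what exposes the session on those HTTP pages.
      res.cookie('session', 'opaque-token', { httpOnly: true, secure: false });
      res.sendStatus(204);
    });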
You can serve the page over HTTP and make the AJAX calls over HTTPS: Ajax using https on an http page. Again, this may be useless if the auth form is not served securely as well.
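A minimal sketch of such a call (the URL is illustrative). Because the scheme differs, the browser treats it as cross-origin, so the HTTPS endpoint must send the matching CORS headers (Access-Control-Allow-Origin with the exact http:// origin, and Access-Control-Allow-Credentials: true if cookies are involved):

    // From a page served over http://, call the HTTPS API directly.
    fetch('https://api.example.com/data', {
      credentials: 'include' // send cookies with the cross-origin request
    })
      .then(function (res) { return res.json(); })
      .then(function (data) { console.log(data); })
      .catch(function (err) { console.error('HTTPS call failed:', err); });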
If your static content is hosted on a CDN, the CDN can probably proxy the requests to your site and return HTTPS content when required.
Static content served over HTTP won't be cached for when you request the same content through HTTPS, nor vice versa, so it will basically be downloaded twice.
Also relevant: please check these 7 myths about HTTPS, especially myth #1. If you are worried about security, maybe switching completely to HTTPS is the best decision.
I'm very new to web services (please note, not WCF but the old fashioned .asmx files).
Now, I may be likening this too much to ports, but if I expose a port on my web-facing server then it is exposed to attacks as well as to my own use; there are tools which can scan to see what ports are open.
Is this true of a web service? Now, don't get me wrong: I know each service should be coded well enough that nothing malicious can happen, and that a calling class can't invoke them without knowing the 'contract', but that's not the question (and I guess port flooding could still occur?). If I put up a few web services on a server, is there a tool/program which can detect them (by name)?
Yes. A web service is basically a web page that takes arguments and responds with a formatted result that can be read more easily by a program. (Technically both are the result of an HTTP request and response; there are other mechanisms as well, but the typical one is the HTTP protocol.)
If you type the link to your web service into a browser, you will see that you are presented with an interface that allows you to "execute" its services.
Therefore you need the same security as with a web page, meaning a login or a check of credentials, tokens, signing, encryption and so forth (preferably over an SSL connection).
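As a side note, if you don't want the service to advertise that browsable test page at all, ASP.NET lets you switch off the documentation protocol in web.config; a sketch (verify against your framework version):

    <configuration>
      <system.web>
        <webServices>
          <protocols>
            <!-- Removes the auto-generated .asmx help/test page -->
            <remove name="Documentation" />
          </protocols>
        </webServices>
      </system.web>
    </configuration>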
I was recently looking around at some of the features my current web host offers, and am now wondering about a few things. Even if you can only answer part of this, I appreciate any help you can provide.
I have a domain, mydomain.com, and the host offers shared SSL so I can use HTTPS via the address https://mydomain.myhost.com. The SSL certificate is good for *.myhost.com.
I don't know a lot about SSL, but I'm assuming this means that the data between site users and ANY domain on myhost.com is encrypted. So I was curious: if someone else on the same host as me somehow intercepted the data from my site, would they be able to view it, since they also have a https://theirdomain.myhost.com address which uses the same SSL certificate? I may have no idea at all; this was pretty much a guess.
If HTTPS is used on a login page, but after logging in the other pages are viewed over HTTP, is this a security issue?
Is there any way to show a web form via HTTP for bots like Google, but have real users redirected to the HTTPS version? It would be ideal if this could be done via .htaccess. I currently have some rewrite rules that redirect certain pages to HTTPS but leave the rest on HTTP, so if a visitor visits the contact form they get the HTTPS version automatically, and it switches back to HTTP for pages that don't contain forms. So, via .htaccess, is there a way to direct real users to the HTTPS version but have bots directed to the HTTP version? I would like these pages to still be indexed by the search engines, but would like users to see them via HTTPS.
Thanks in advance for any help you can provide.
I'm going to guess you'll be okay for number one. If your host does it correctly, individual subdomains never get to see the SSL keys. Here's how it would work:
1. Some guy with a browser sends an encrypted request to your subdomain server.
2. Your host's master server receives the request and decrypts it.
3. The master server sends the decrypted request to your subdomain server.
And any HTTPS responses you send back go through that process in reverse. It should be easy to check if they've set things up that way: If you can set up shared SSL without personally handling any key files, you're good. If you actually get your hands on some key files... not good.
For two: If you encrypt the login, you protect the passwords, which is good. But if you switch back to HTTP afterwards, you open yourself up to other attacks. See: Firesheep. There may be others.
And for three: yes, definitely doable. Check out mod_rewrite. I can't give you a tested example, as I've never used this particular case, but I can point you to this page, particularly the section entitled "Browser Dependent Content", and sketch the idea below.
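Something like this in .htaccess might be a starting point (untested; the bot names are illustrative, and the rule only covers the contact page):

    RewriteEngine On
    # Send non-bot visitors to HTTPS for the contact form; bots keep HTTP.
    RewriteCond %{HTTPS} off
    RewriteCond %{HTTP_USER_AGENT} !(googlebot|bingbot|slurp) [NC]
    RewriteRule ^contact\.(php|html)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=302,L]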
Hope that helps!
All traffic is encrypted when you use https:// as the protocol. (Except under some uncommon circumstances I won't talk about here.) An SSL certificate's purpose is to prove the identity of the server by combining its public key with an identity. That certificate is only usable together with the private key that belongs to the public one. In your case it seems that the certificate, as well as the key pair, is provided by your hosting provider. I guess that neither you nor the other customers on the host have access to this private key. That means that only your provider is able to decrypt the traffic. Since that's always the case anyway (they run the server, so they have access to all data), that should be no problem.
In most cases it is a security issue. On every further unencrypted HTTP request, the client has to provide some information about the session to the server. This can be intercepted and used by an attacker (simply speaking).
The bots should support HTTPS, so why not redirect them as well? Anyhow, the important part is not serving the page that contains the form via HTTPS; to protect your users' data, you should take care that the form submission itself is transferred via HTTPS.