Google hosted scripts: use http or https?

I am using Google hosted JavaScript libraries (jQuery, jQuery UI, and other Google jsapi scripts), and I noticed that these scripts can be accessed over both the http and https schemes. I want to know what the effects of using http or https to access these Google hosted scripts are. My project is just an ordinary website that uses http as its default scheme, so what should I do, http or https? Is there any performance difference between the two?

https does affect performance negatively, as encryption and security negotiation aren't trivial tasks. In the vast majority of cases this performance cost is not significant enough to outweigh its benefits.
Remember that SSL also secures the identity of the web server and not just the channel.
If a "man-in-the-middle" spoofed the address of your script's location (for instance), https would prevent you from unknowingly executing unintended scripts. http would not.

Check this out: HTTP vs HTTPS performance
The performance difference is rather small considering today's hardware and internet bandwidth. Personally I try to use the same protocol for all data used by one page (or iframe/frame), meaning scripts, CSS, images, etc.
Depending on the browser and the cache headers it receives, data transferred over SSL may not be cached by the visitor's browser and will then be downloaded again each time a page is loaded.
Using SSL/HTTPS is recommended if a page contains sensitive or personal data, or offers interactions like contact forms. Buying and installing an SSL certificate is justified in those cases.
Google Analytics, for example, first checks which protocol your page uses and then uses the same protocol to download its scripts.
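For illustration, this is the classic asynchronous ga.js loader, which picks http or https to match the scheme the page itself was loaded over (treat it as a sketch of the pattern, not the currently recommended snippet):

    // Classic asynchronous Google Analytics (ga.js) loader: the script URL's
    // scheme is chosen to match the scheme of the hosting page.
    (function () {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' === document.location.protocol
                  ? 'https://ssl'
                  : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();

For your own script tags, a protocol-relative URL such as //ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.min.js achieves the same thing: the browser resolves it against whichever scheme the page uses.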

Related

Unable to figure out why firebase hosting is not using http 2

I've set up a fresh hosting project (not using any custom domain at the moment) and split up some of my js files, expecting them to be served via HTTP/2 (as described in Firebase blog posts, it should be enabled by default?). However, the protocol still shows up as http/1.1. Am I missing something? Do I need to add an entry in my config files to force HTTP/2?
DEMO: https://asimetriq-com.firebaseapp.com/
Works for me, see attached screenshot.
It may mean that there is a transparent proxy that does not support HTTP/2 somewhere in the network hops between your client and the server.
Also, from time to time, browsers may deliberately downgrade the protocol they use in order to collect statistics on protocol performance and compare the protocols.
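As a quick sanity check, you can ask the browser which protocol was actually negotiated for each resource. This is a minimal sketch using the Resource Timing API (assuming a browser that exposes nextHopProtocol, such as a recent Chrome), run from the DevTools console on the deployed page:

    // Log the negotiated protocol for every resource the page loaded.
    // "h2" means HTTP/2 was used; "http/1.1" means the connection fell back.
    performance.getEntriesByType('resource').forEach(function (entry) {
      console.log(entry.nextHopProtocol, entry.name);
    });

The navigation entry, performance.getEntriesByType('navigation')[0].nextHopProtocol, reports the same information for the page itself. Note that cross-origin resources may report an empty value unless the server sends a Timing-Allow-Origin header.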

My Azure Website has an odd "HTTP success" pattern in the (Monitor) portal

I have a website hosted in Azure Websites as a Basic tier website.
I'm currently in the development stage, yet the site is live and accessible by the outside world (at least at a basic level), so I wanted to better understand the monitoring features in the Azure management portal.
When I look at the monitoring tab inside the portal, I see an odd pattern for HTTP successes. Looking at the past 60 minutes (during which I personally have not been active on the site), the HTTP successes are very cyclic: 80 connections, then 0, then 40, then 0, then repeat.
Does anyone have any pointers on how I can figure out what the 80 and 40 connections are? I certainly don't have any timed events in my code, so there shouldn't be any calls being made unless a person is actually hitting the site.
UPDATE:
I set up a staging server and blocked all incoming traffic except my own IP, so it is the same code running, just without access from the outside world. There the HTTP successes appear only when I hit the server myself (as expected). This suggests that my site is being hit by an outside bot, maybe? Does anyone know how to protect against this, or at least how to diagnose whether the requests are legitimate?
I'd say it's this setting that causes the traffic:
Always On. By default, websites are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the site loaded all the time. If your site runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
It's just a keep-alive to avoid cold starts every time you or someone else visits your site.
Here's another reference that describes this behavior:
What the always-on feature does is simply ping your site every now and then, to keep the application pool up and running.
And Scott Gu says:
One of the other useful Web Site features that we are introducing today is a feature we call "Always On". When Always On is enabled on a site, Windows Azure will automatically ping your Web Site regularly to ensure that the Web Site is always active and in a warm/running state. This is useful to ensure that a site is always responsive (and that the app domain or worker process has not paged out due to lack of external HTTP requests).
About the traffic in general: first of all, the requests could really only come from Microsoft, since any traffic pattern like this would quickly be detected and blocked automatically when using Azure Websites - you cannot set up a keep-alive like this yourself. Second, no modern bot would ping a specific page with that kind of regularity, since it's all too obvious. Any modern datacenter security appliance would catch that kind of traffic and block/ignore/null-route it.
As for your question regarding protection and security: Microsoft cannot protect your code from yourself. However, everything at the perimeter is managed and handled by Microsoft. That's one of the unique selling points of Azure - firewalling, load balancing, anti-spoofing, anti-bot and DDoS protection, and so on. There will of course always be security concerns around any publicly exposed service, but you can stay focused on your application while Microsoft manages the rest.
When running Azure Websites, you're in Microsoft's hands for security outside of your application's scope. That's a great thing, but if you really want to use other security measures you'll have to set up a virtual machine instead and run your site from there.
You may first want to understand what these requests are. Enable web server logging for the website in the Azure Management portal and download the IIS logs for your website after you see this pattern. Then check the logs for the URL, the client IP addresses of the requests, and the user agent field to identify whether the requests really come from search bots. Based on what you observe, you can statically block certain IPs, use dynamic IP restrictions, or configure URL Rewrite to block requests with specific patterns in the request or its headers.
EDIT
This is how you can block search bots - http://moz.com/ugc/blocking-bots-based-on-useragent
You can configure URL Rewrite locally on an IIS server in the way described in the above article and then copy the generated configuration into your web.config, or connect to the Azure website directly using IIS Manager as described in http://azure.microsoft.com/blog/2014/02/28/remote-administration-of-windows-azure-websites-using-iis-manager/ and configure the URL Rewrite rule there.
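Since this question is about an ASP.NET site on Azure Websites, a URL Rewrite rule in web.config is the right place for such a filter; purely to illustrate the idea of rejecting requests by user agent, here is the same check sketched as a tiny Node.js server (the blocked-agent list and port are made-up examples):

    // Illustrative sketch only: refuse requests whose User-Agent matches
    // a (hypothetical) list of unwanted bot patterns.
    var http = require('http');
    var blockedAgents = /AhrefsBot|MJ12bot|python-requests|curl/i; // example patterns

    var server = http.createServer(function (req, res) {
      var ua = req.headers['user-agent'] || '';
      if (blockedAgents.test(ua)) {
        res.writeHead(403);   // reject obviously automated clients
        return res.end();
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello\n');
    });

    server.listen(3000);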

Value of proxying HTTP requests with node.js

I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably being said incorrectly), and while I understand how node.js works on its own I have seen discussions about proxying HTTP requests through another server technology (a la NGinx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question, it seems you are looking for the benefits of implementing a reverse proxy in front of your node.js web server. In summary, a reverse proxy (depending on the implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All of these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. Implementing them in the proxy lets you focus on developing your application code and leaves the web server to do what it's good at: serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration, and if you take a look at its modules reference you should get a feel for what features the proxy can provide.
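To make the division of labour concrete, here is a minimal sketch (the port number is an arbitrary choice) of a Node.js app written to sit behind such a reverse proxy: the app binds only to localhost and reads the client address from the forwarding header the proxy adds, while nginx in front of it would handle SSL, static files, compression, and load balancing:

    // Minimal Node.js app intended to run behind a reverse proxy such as nginx,
    // which would forward requests to http://127.0.0.1:3000.
    var http = require('http');

    var server = http.createServer(function (req, res) {
      // Behind a proxy, the original client address arrives in X-Forwarded-For.
      var clientIp = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.end('Hello from the game server, ' + clientIp + '\n');
    });

    // Listen only on the loopback interface so the app is reachable only through the proxy.
    server.listen(3000, '127.0.0.1');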
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is because in a production environment there are a number of concerns you don't want your application to have to handle on its own: caching, domain (or sub-domain) mapping, perhaps security, SSL, load balancing, and serving static files, to name a few.
The web servers are already built to do all those things for you, and so they can handle them and then pass only the requests on to your app that actually need to be handled by your app. They're also optimized for doing those things and will probably do them as well or better than the average developer can.
Hope that helps.
Another issue that people haven't added in here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.

asp:MediaPlayer (Silverlight) Https / http issue

We have a site (https://oursite.net) in which we display a video stream hosted on http (http://someserver.com). The site needs to be hosted on https, and we don't control the video, so I'm assuming it needs to stay on http. We recently added the option to play the stream through the Silverlight asp:MediaElement, which works perfectly fine in our test environment (on http) but doesn't work in production (https).
The info on the web is somewhat confusing, as I'm having a hard time telling how this worked at different stages of Silverlight's development (it seems to have gone back and forth a bit).
Is this setup possible at all (hosting the player on https but playing a stream on http), perhaps with some sort of policy file?
If so, does that policy file need to be hosted with the Silverlight app (on https) or where the streams are located (on http)?
Thanks for your time
Andreas
Unfortunately, you are running into a cross-scheme violation. The stream would need to use the same scheme (https) as the hosting application, and most streaming isn't available over HTTPS.
Can you check the enableHtmlAccess property on the object tag to make sure it is true? Most media players end up using the HTML DOM bridge to communicate with the web page.
It's also likely that there is a cross-scheme issue: you should try and optimize for all assets being on the same scheme (HTTP or HTTPS).
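As a small diagnostic aid (a hypothetical helper, not part of the Silverlight API), you can compare the page's scheme with the stream URL's scheme in JavaScript before handing the URL to the player, which makes a cross-scheme mismatch obvious:

    // Hypothetical helper: returns true when the stream URL uses the same
    // scheme (http/https) as the page hosting the player.
    function isSameScheme(streamUrl) {
      var pageScheme = document.location.protocol;   // e.g. "https:"
      var streamScheme = streamUrl.split('//')[0];   // e.g. "http:"
      return pageScheme === streamScheme;
    }

    if (!isSameScheme('http://someserver.com/stream')) {
      // The plugin will most likely refuse to load a cross-scheme stream.
      console.log('Stream scheme does not match the page scheme.');
    }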

How do I check the client browser SSL certificate

How do I check the client browser's SSL certificate in my ASP.NET code-behind?
I want to ensure that if an HTTPS proxy like Fiddler is running, my application does not load.
I have done the following so far without any success:
My site is on Https
In IIS I have set:
Require SSL = true
Require 128-bit encryption = true
Accept certificate = true
In my default.aspx Page_Load I am trying to read the value of Request.ClientCertificate, but the collection comes back empty.
There is no way to do what you're trying to do unless you run an ActiveX control on the client.
Internet Explorer and other browsers do not expose the server's SSL certificate information to the JavaScript in the page, meaning that there's no way for your page, running on the client, to know whether or not it was delivered with your certificate or another certificate.
Having said that, even if such a method were offered, it probably wouldn't help you anyway. Presumably you want to do this to prevent viewing/modification of your traffic, but there are other tools that plug into the browser directly (post-HTTPS-decryption, pre-HTTPS-encryption) and can view/modify traffic without re-signing it the way Fiddler and other proxies do.
Furthermore, your code would fail in corporate environments where the edge proxy (e.g. BlueCoat, Forefront) does content-inspection using the same mechanisms that Fiddler uses.
Are you expecting the client to have a certificate installed? Most users do not have client certificates installed.
Nonetheless, I'm not sure how exactly a client certificate is going to protect you in the situation you describe...
