I was reading this article from Microsoft, and in step 5 it says: "The WWW Service uses the configuration information to configure HTTP.sys."
What exactly does the WWW service configure in HTTP.sys?
What is the purpose of the WWW service?
How is it different from the Windows Process Activation Service (WAS)?
Thank you!
In short, the WWW service reads the configuration elements from applicationHost.config and applies the portion related to the Windows HTTP API (for example, which URLs to listen on) to the kernel-mode driver HTTP.sys.
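For example, you can observe what has actually ended up registered in HTTP.sys (registered URLs, server sessions, request queues) using the built-in netsh tool. This is just a way to inspect the result of that configuration step:

    netsh http show servicestate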
The purpose of the WWW service is documented, at a high level, in https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/introduction-to-iis-architecture#how-the-www-service-works-in-iis
Don't try to acquire a deep understanding of such components at the beginning. They are not open source, so the documentation is rather vague.
The same applies to WAS: https://learn.microsoft.com/en-us/iis/get-started/introduction-to-iis/introduction-to-iis-architecture#windows-process-activation-service-was
If you are taking a course, just memorize the facts at this stage. Once you get more familiar with IIS daily operations, you will get more insights.
I have been researching how event logs are collected from cloud-based applications like Dropbox without deploying agents. I haven't found any clear explanation of this; I would be grateful if someone could explain.
This is a very broad topic and can be very confusing because everyone logs differently, so while I cannot answer the question definitively, I can hopefully help you along.
A good heuristic is to check whether the cloud service supports one of the oldest logging standards, Syslog. Typically, if it does, you will not need to deploy an agent; instead you configure log forwarding and listen for messages on a Linux server you control (which already has a logging service running, though it might need additional configuration). Also, if the cloud service has a Syslog service running on its side, you can potentially use that service to forward logs to your own Syslog server.
The transport mechanism should be TLS, because logs can unknowingly contain very sensitive data (Twitter recently put out a security warning concerning this). You can see how to configure a Linux Syslog server with TLS here.
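To make that concrete, here is a minimal sketch of a receiving rsyslog configuration with TLS, assuming rsyslog with the gtls network stream driver is installed and certificates have already been issued. The certificate paths and the port are placeholders:

    # /etc/rsyslog.conf -- hypothetical TLS receiver; paths are placeholders
    $DefaultNetstreamDriver gtls
    $DefaultNetstreamDriverCAFile /etc/ssl/syslog/ca.pem
    $DefaultNetstreamDriverCertFile /etc/ssl/syslog/server-cert.pem
    $DefaultNetstreamDriverKeyFile /etc/ssl/syslog/server-key.pem

    # load the TCP input module and require TLS on the listener
    $ModLoad imtcp
    $InputTCPServerStreamDriverMode 1
    # "anon" accepts any sender; use x509/name to authenticate clients
    $InputTCPServerStreamDriverAuthMode anon
    # conventional syslog-over-TLS port
    $InputTCPServerRun 6514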
I have a website hosted in Azure Websites as a Basic tier website.
I'm currently in the development stage, yet the site is live and accessible by the outside world (at least at a basic level), so I wanted to better understand the monitoring features in the Azure management portal.
When I look at the monitoring tab inside the portal, I see an odd pattern for HTTP successes. Looking at the past 60 minutes (during which I personally have not been active), the HTTP successes are very cyclic: 80 connections, then 0, then 40, then 0, then repeat.
Does anyone have any pointers on how I can figure out what the 80 and 40 connections are? I certainly don't have any timed events in my code, so there shouldn't be any calls being made unless a person is actually hitting the site.
UPDATE:
I set up a staging server and blocked all incoming traffic except from my own IP. So it's the same code running, just without access from the outside world. And the HTTP successes appear only when I hit the server myself (as expected). This suggests that my site is being hit by an outside bot, maybe? Does anyone know how to protect against this? Or at least how to diagnose whether the requests are legitimate?
I'd say it's this setting that causes the traffic:
Always On. By default, websites are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the site loaded all the time. If your site runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
It's just a keep-alive to avoid cold starts every time you or someone else visits your site.
Here's another reference that describes this behavior:
What the always-on feature does is simply ping your site every now and
then, to keep the application pool up and running.
And Scott Gu says:
One of the other useful Web Site features that we are introducing
today is a feature we call “Always On”. When Always On is enabled on a
site, Windows Azure will automatically ping your Web Site regularly to
ensure that the Web Site is always active and in a warm/running state.
This is useful to ensure that a site is always responsive (and that
the app domain or worker process has not paged out due to lack of
external HTTP requests).
About the traffic in general: first of all, the requests can really only come from Microsoft, since any traffic pattern like this would quickly be automatically detected and blocked when using Azure Websites - you cannot set up a keep-alive like this yourself. Second, no modern bot would ping a specific page with that kind of regularity, since it's all too obvious. Any modern datacenter security appliance would catch that kind of traffic and block/ignore/null-route it.
As for your question regarding protection and security: Microsoft cannot protect your code from yourself. However, everything at the perimeter is managed and handled by Microsoft. That's one of the unique selling points of Azure - firewalling, load balancing, anti-spoofing, anti-bot and DDoS protection, etc. There will of course always be security concerns regarding any publicly exposed service, but you can stay focused on your application while Microsoft manages the rest.
When running Azure Websites, you're in Microsoft's hands regarding security outside of your application's scope. That's a great thing, but if you'd really like to apply other security measures, you'll have to set up a virtual machine instead and run your site from there.
You may want to first understand what these requests are. Enable web server logging for the website in the Azure management portal and download the IIS logs for your website after seeing this pattern. Then check them to find the URL, the client IP addresses, and the user-agent field, to identify whether the requests are really from search bots. Based on what you observe, you can either block some IPs statically, use dynamic IP restrictions, or configure URL Rewrite to block requests with specific patterns in the request or request headers.
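If dynamic IP restrictions turn out to be the right tool, a hypothetical web.config fragment looks like the following; the thresholds here are placeholders, not recommendations:

    <!-- web.config: hypothetical dynamic IP restriction settings -->
    <system.webServer>
      <security>
        <dynamicIpSecurity>
          <!-- block clients holding too many concurrent connections -->
          <denyByConcurrentRequests enabled="true" maxConcurrentRequests="10" />
          <!-- block clients exceeding a request rate -->
          <denyByRequestRate enabled="true" maxRequests="30"
                             requestIntervalInMilliseconds="1000" />
        </dynamicIpSecurity>
      </security>
    </system.webServer>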
EDIT
This is how you can block search bots - http://moz.com/ugc/blocking-bots-based-on-useragent
You can configure URL Rewrite locally on an IIS server in the way described in the article above and then copy the generated configuration into your web.config, or connect to the Azure website directly using IIS Manager as described in http://azure.microsoft.com/blog/2014/02/28/remote-administration-of-windows-azure-websites-using-iis-manager/ and configure the URL Rewrite rule there.
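For illustration, a rule of the kind the article describes might look like this in web.config; the user-agent patterns are made-up example names:

    <!-- web.config: hypothetical URL Rewrite rule blocking by User-Agent -->
    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="BlockExampleBots" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <!-- "BadBot" and "AnotherBot" are illustrative placeholders -->
                <add input="{HTTP_USER_AGENT}" pattern="BadBot|AnotherBot" />
              </conditions>
              <action type="CustomResponse" statusCode="403"
                      statusReason="Forbidden"
                      statusDescription="Bots are not allowed" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>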
I have a WCF web service project, built on my local machine, which, when hosted using the test client and invoked, returns values from a remote database in JSON format.
For example, if you request the URL on localhost, you get results back in the format below:
{"Id":3,"Value1":"67.5687","Value2":"126.7125"}
I want to host this project on a remote server with a public URL, which should return the above results from any network. I have 3 questions regarding this:
** What modifications should I make to my current WCF project to host it on a remote server?
** Given the various types of hosting, like:
1) windows process activation services (WAS)
2) IIS
3) Self hosting
4) Hosting in a Windows service,
which type of hosting is best suited for a remote server?
** What changes should I make in my App.config file (including changing my endpoint address from localhost to an IP address) to make the service work?
Thanks.
1) You shouldn't need to make any changes to your project just because you want to host the code on another machine. I find this an odd question.
2) Given your choice of JSON as data format and a browser as test client, I'm guessing you want to make it available over HTTP using simple GET requests. In the Microsoft stack, IIS is the web server, and the natural choice for this scenario.
3) That is quite impossible to answer. I don't know what's in your app.config today. I don't know if you're going to authenticate, and if so, how. And I don't want to know! That said, if everything is supposed to behave as it does on your dev box, the bindings are probably already fine. I don't remember whether a WCF service needs to know the endpoint it is itself at (it's hard to see why it would need to, really); I would have thought it more natural to do such configuration on the host, e.g. IIS. The client, of course, should use a different endpoint pointing to wherever you host the service. (You can put many endpoints in app.config and let the user choose one, btw.)
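To illustrate that last point, the client-side change is typically just the address attribute. A hypothetical app.config fragment (the endpoint name, contract, and address are all placeholders, and webHttpBinding is assumed because of the JSON-over-GET scenario):

    <!-- client app.config: only the address should need to change -->
    <system.serviceModel>
      <client>
        <!-- during development: address="http://localhost:8733/MyService" -->
        <endpoint name="MyServiceHttp"
                  address="http://203.0.113.10/MyService"
                  binding="webHttpBinding"
                  contract="IMyService" />
      </client>
    </system.serviceModel>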
I think most of us sin against the following advice now and then, but it is the best advice I can give: Read a book. Learn as much as possible about the thing you're using, in this case WCF. You'll get the time back later, and your software will be less bad!
I have very recently started development on a multiplayer browser game that will use nowjs to synchronize player states from the server state. I am new to server-side development (so many of the things I'm saying are probably incorrect), and while I understand how node.js works on its own, I have seen discussions about proxying HTTP requests through another server technology (a la NGinx or Apache) for efficiency.
I don't understand why it would be beneficial to do so, even though I've seen plenty of explanations of how to do so. My current plan is to have the game's website and info on the same server as the game itself, so if there is any gain from proxying node I'd love to know why.
In the context of your question, it seems you are looking for an answer about the benefits of implementing a reverse proxy in front of your node.js web server. In summary, a reverse proxy (depending on implementation) can provide the following features out of the box:
Load balancing
Caching of static content
Failover
Compression of responses (e.g. gzip)
SSL support
All these features are cross-cutting concerns that you should not need to accommodate in your application tier/code. Implementing them within the proxy allows you to focus on developing the code for your application and leaves the web server to do what it's good at: serving the HTTP requests for your application.
nginx appears to be a common choice in a reverse proxy/node configuration and if you take a look at the modules reference you should get a feel for what features the proxy can provide.
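As a minimal sketch of such a setup (the server name and the node port are placeholders, and this omits the WebSocket-specific headers a nowjs app would also need):

    # nginx.conf sketch: reverse proxy in front of a node process
    events {}

    http {
        upstream node_app {
            # the node.js process, listening on a local port
            server 127.0.0.1:3000;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://node_app;
                # preserve the original host and client address
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }
    }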
When you say "through another technology" I assume you mean through a dedicated web server such as NGinx or Apache.
The reason you do that is because in a production environment there are a number of considerations you don't want your application to have to handle on its own: caching, domain (or sub-domain) mapping, perhaps security, SSL, load balancing, and serving static files, to name a few.
The web servers are already built to do all those things for you, so they can handle them and pass on to your app only the requests that actually need to be handled by your app. They're also optimized for doing those things and will probably do them as well as, or better than, the average developer can.
Hope that helps.
Another issue that people haven't added in here is that with a front-end proxy, when you need to take your service down for maintenance (or even just restart it), nginx can serve up a pretty "YourCompanyName is currently under maintenance" page, making for a much more pleasant user experience.
I just discovered, quite by accident, that a WCF service hosted in a Windows service will work with an HTTP binding. It seems to implement its own web server, but I have never seen this capability mentioned anywhere, and I can't find any documentation on the capabilities of the HTTP listener (in terms of worker threads, etc.). Anyone have a pointer?
Thanks
If you Google for self-hosting and WCF, you will come up with a wealth of information. The full power of WCF is available in this manner. The service can accept multiple calls, and WCF can do the multithreading for you. You can also check out the WCF REST Starter Kit for more information.
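For reference, a minimal self-hosting sketch looks like the following; the contract and address are made-up placeholders, and in a Windows service you would open the host in OnStart rather than Main:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IHelloService
    {
        [OperationContract]
        string Hello(string name);
    }

    public class HelloService : IHelloService
    {
        public string Hello(string name)
        {
            return "Hello, " + name;
        }
    }

    class Program
    {
        static void Main()
        {
            // Self-hosting: WCF registers the URL with HTTP.sys,
            // so no separate web server is needed.
            var baseAddress = new Uri("http://localhost:8080/hello");
            using (var host = new ServiceHost(typeof(HelloService), baseAddress))
            {
                host.AddServiceEndpoint(typeof(IHelloService),
                                        new BasicHttpBinding(), "");
                host.Open();
                Console.WriteLine("Service is running; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }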
Well, if it is going to support anything using the HTTP protocol, it would by definition have to be a web server.
The capabilities are those of the service host. Whatever you set for the throttles will be the capabilities of the server.
However, if you are going to have large loads on the service, you might want to consider hosting in IIS, as it offers more in the way of app recycling, fault tolerance, and so on.
Is System.ServiceModel.ServiceHost what you mean? The WCF configuration and the ServiceBehavior attribute allow you to set up concurrency settings, etc.
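For example, the throttles mentioned above are configured through the serviceThrottling behavior. A hypothetical fragment (the numbers are placeholders, not recommendations):

    <!-- app.config: hypothetical throttle settings for a self-hosted service -->
    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="ThrottledBehavior">
            <serviceThrottling maxConcurrentCalls="64"
                               maxConcurrentSessions="64"
                               maxConcurrentInstances="64" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
    </system.serviceModel>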