I have a number of websites and web-based applications running on a dedicated web server. The same box currently also runs the database (and this is unlikely to change).
I have noticed that the requirements I am getting across different projects often duplicate each other. So, whilst things are quiet, I am trying to replan the architecture of the various sites.
Ideally I'd like to extract the duplicate functionality (logins, some of the user reports, error reporting and so on) into a core library, which got me thinking.
If I make a core assembly and add it to the bin of each website, then it will communicate with whatever database is in the app config file. But that gives me versioning/maintenance headaches when things change.
I can put this core library in the GAC, which means I need to register/unregister it, but all apps can include and use it as necessary.
Or the third way I can see of doing this is to use WCF web services and add another internal tier to my apps, where they hand off the core work to a separate set of web services. The advantage of this is that if/when we expand, all the interfaces can stay as a set of web services, leaving my apps just making HTTP or TCP calls rather than me having to worry about moving bin files or GACed assemblies.
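A rough sketch of what such a shared service contract could look like (the names are just placeholders, not anything that exists yet):

    using System.Runtime.Serialization;
    using System.ServiceModel;

    // Placeholder contract for the shared "core" functionality (sketch only)
    [ServiceContract]
    public interface IMembershipService
    {
        [OperationContract]
        bool ValidateUser(string userName, string password);

        [OperationContract]
        UserSummary GetUserSummary(string userName);
    }

    [DataContract]
    public class UserSummary
    {
        [DataMember]
        public string UserName { get; set; }

        [DataMember]
        public string DisplayName { get; set; }
    }

Each website would then hold only a service reference (or a shared interface assembly) and an endpoint address in its config, rather than the core assembly itself.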
Basically I am here to see if anyone has any thoughts/comments/criticisms of any of these approaches, as I'd hate to start going down one road then have to redo it all going the other way, as Murphy's law states it will go wrong just as we have a major piece of work come in :)
I suggest referencing the library as you need it (so it resides in the bin of each application).
What if you need a slight change to one of your reports? After you make the changes, do you plan to do regression tests on all existing apps?
Keeping a separate copy of the DLL per application allows you to upgrade it as and when necessary.
Make sure you can find out which app is running which version of the DLL.
From a maintenance point of view, you do not always want to change what works.
"What can go wrong, will go wrong", so limit the area where it can go wrong.
What's the point of Node.js creating its own server and listening on it? Don't IIS/Apache give us all of that? I understand it's based on I/O completion, but we already have web-server technology in place. Can someone explain what can be achieved via Node (apart from JavaScript on the server side, which can also be achieved via SignalR) that can't be done via ASP.NET, and why we should focus so much on Node when we have a ton of technology under the ASP.NET stack?
Any classic example of Node, typically for an enterprise dev shop?
Most web programming is for data display and eCommerce applications, which are mostly database intensive, though lately there have been mash-ups with web services as well. Yes, mobile web is a different game due to hardware sensors, I agree, but what is Node giving us that ASP.NET with SignalR can't give us?
TIA
What I find very interesting with Node is that everything is event based, which is different from programming ASP.NET or PHP, where behavior is more sequential. Not a bad thing, just a different way of doing things.
You can program the server itself (as opposed to programming applications that run on the server) to do more than serving files. The typical example with Node is the chat room application, where you broadcast messages to all participants and each participant can send messages to the server. By programming your own server events (like listen, error, connect, etc.) you have a lot of control over how things go server side.
Then of course npm, the Node package manager, is definitely a plus over having to manually manage dependencies if you want to use third-party libs.
To host an ASP.NET site/app you need IIS, which is a proprietary system, whereas Apache and Node are more open. Granted though, Node hosting is not as widespread as Apache-based hosting.
Hope this answers some of your questions.
Each technology can achieve anything. If you prefer ASP.NET over Node, use it. ASP.NET is extremely powerful and there is no reason to use Node over ASP.NET when you have the expertise and software/money to run your services. Node is just different; it has a different execution model (single-threaded and event-driven rather than a thread per request) and, above all, it is open source and free. It is easy to get started on any OS, and easy to deploy on any OS. But in the end, it comes down to: what do you prefer?
We want a performance testing tool for a distributed scenario.
We want to collect data from clients and from the server (memory usage, CPU usage, response time, .NET calls, etc.).
Most of our applications are using .NET 4.0 or Classic ASP.
We have 4 servers. We want one controller and three agents working together to run tests and collect data.
What's the best tool for this scenario?
PS: We've tried Visual Studio 2012 Ultimate and it seems promising. I don't know of other tools that fit the scenario.
Give Load Tester a try: http://www.webperformance.com/load-testing/ (disclaimer: I work there). It has a monitoring agent that will run on your Windows servers to collect the metrics you mentioned and a lot more. It also collects client-side metrics such as page load time. The LITE version is free and can run simple tests with unlimited users.
Take a look at Rational Performance Tester. I was about to purchase a license for one of our projects but didn't push through for reasons not related to the software. Looked promising back then.
I would split things up to keep it simple.
First I would check what the average requests per second is when using your servers to generate load. For that there is a small tool included in Apache HTTP Server called ab.exe. It's easy to set up to generate requests.
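For example, a run like the following (the URL and the numbers are placeholders; point it at one of your own pages and tune the request count and concurrency) reports the average requests per second and the response-time distribution:

    ab -n 1000 -c 50 http://yourserver/yourpage.aspx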
If you think that you get acceptable response times, all is well.
If not, use something like JetBrains dotTrace (in your app) to collect data while generating load from one server.
I am looking for either a best-practice, supported guide from Microsoft, or a blogger's/developer's guide to the same. Or both.
I am setting up some servers for hosting and I want to configure them with just enough permissions. I have done this before, where I modified the Medium trust level and gave it database permissions etc., but I only skimmed over it.
I want to set up solid machines with the common permissions that people actually use. Is there maybe a resource that explains in detail what each trust level grants by default? That way I could compare and go from there.
To start with the security, I have made a rule on my machines that I only create dedicated application pools per site/user. I know Microsoft say that each website is virtually separate, even in the shared application pool space, but I just don't trust it.
I also know I shouldn't run in Full Trust as I am opening up my server to all kinds of attacks.
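From the little I have done before, locking the trust level at machine level so that individual sites can't raise it looks roughly like this (a sketch only, in the machine-level web.config):

    <!-- machine-level web.config (sketch): lock all sites to Medium trust -->
    <location allowOverride="false">
      <system.web>
        <trust level="Medium" originUrl="" />
      </system.web>
    </location>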
I have a bit of knowledge on this, but not enough, so hopefully you lot can help me. I'm not wanting to be spoon-fed what to do; I have no problem figuring it out, I just can't find the info to start with.
I appreciate your help.
Anthony
I'm running:
Windows Server 2008 R2 64-bit with IIS 7.5 and a combination of 2.0/3.5 and 4.0 application pools.
The strict best practice is "don't let anything do anything to anything" but that is counterproductive in general -- if you aren't taking HTTP requests, you don't have a working HTTP application server.
That said, your question is very general and very nebulous. The first key question is "what sort of hosting scenario is this?" For example, full trust isn't necessarily a bad thing in a dedicated scenario, or even a shared server between "friendly" apps that should trust each other. But it is bad in a hotel server situation where you've got random guests sharing space.
The second question is what sorts of apps you are hosting. You've got completely different threat profiles depending on what you are doing -- spammers don't try as hard as thieves. Spies try even harder.
This is really a two-part question. First of all, I just wanted to know: how common is ASP.NET in the real world?
Secondly, I want to know what the real-world scenarios are for scaling an ASP.NET site. http://highscalability.com/ almost never talks about the ASP.NET stack. Does anyone have any good articles that talk about how to scale an ASP.NET app?
Thanks.
I don't have numbers, but based on the number of .NET questions on SO I'd say it's pretty common. For your second question, see http://highscalability.com/plentyoffish-architecture
MySpace uses ASP.NET (source). A lot of big sites do. I would ignore the Plenty of Fish example though. From my recollection of stories I've read about it, they're just using HttpHandlers for output, skipping the WebForms stuff altogether. You could probably get WebForms to scale though if you absolutely had to. Most popular frameworks can handle high load; it just depends on the code and who's writing it. Anyone can write a site that won't scale in any framework; not everyone can write one that will.
As for how to scale, the biggest thing is caching, caching, caching. All big sites cache extensively. Facebook has thousands of servers just for caching. That's just a start though.
Yes, ASP.NET is used in the real world. I have been following how Stack Overflow was built since I first heard about it over a year ago and have taken away a lot of lessons. Following how Stack Overflow will scale in future as demand grows is pretty interesting, and they are making a lot of their information public. Plus the podcasts are hilarious :)
It's hard to say how widespread ASP.NET is in the world, but I think it is very widespread compared to PHP, Java and other server technologies. And I'm convinced that ASP.NET is as scalable as anything else you'll try.
If you want a starting point for reading about ASP.NET performance, you could take a look at chapter 6 of the patterns & practices book "Improving .NET Application Performance and Scalability". It's from 2004, so it might be a little outdated.
To give a couple of examples of high-traffic sites running ASP.NET, you just have to look at http://www.microsoft.com/ or https://stackoverflow.com/. If your site is smaller than these (and it probably is), scalability won't be your biggest concern. You should probably be more concerned about writing maintainable code.
Plenty of Fish, with about 1.2 billion pageviews/month.
Over 9000.
Realistically, I've run into many high-traffic websites, Stack Overflow being one example, that use ASP.NET.
One thing that is useful for high scalability is the ability to add more servers when needed and still maintain users' sessions, using the various ASP.NET session-state technologies.
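As a rough sketch (the server name and timeout are placeholders), moving session state out of process so that any server in the farm can handle a request is essentially a web.config change:

    <!-- web.config sketch: out-of-process session state for a web farm -->
    <system.web>
      <sessionState mode="SQLServer"
                    sqlConnectionString="Data Source=DBSERVER;Integrated Security=SSPI;"
                    timeout="20" />
    </system.web>

StateServer mode or a custom provider is configured through the same element; which one fits depends on the farm.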
I am currently looking at a distributed cache solution.
If money was not an issue, which would you recommend?
www.scaleoutsoftware.com
NCache
memcacheddotnet
MS Velocity
Out of your selection I've only ever attempted to use memcached, and even then it wasn't the C#/.NET libraries.
However memcached technology is fairly well proven, just look at the sites that use it:
...The system is used by several very large, well-known sites including YouTube, LiveJournal, Slashdot, Wikipedia, SourceForge, ShowClix, GameFAQs, Facebook, Digg, Twitter, Fotolog, BoardGameGeek, NYTimes.com, deviantART, Jamendo, Kayak, VxV, ThePirateBay and Netlog.
I don't really see a reason to look at the other solutions.
Good Luck,
Brian G.
One thing that people typically forget when evaluating solutions is dedicated support.
If you go with memcached then you'll get none, because you're using completely open source software that is not backed by any vendor. Yes, the core platform is well tested by virtue of age, but the C# client libraries are probably much less so. And yes, you'll probably get some help on forums and the like, but there is no guarantee responses will be fast, and no guarantee you'll get any responses at all.
I don't know what the support for NCache or the ScaleOut cache is like, but it's something that's worth finding out before choosing them. I've dealt with many companies for support over the last few years, and the support is often outsourced to people who don't even work at the company (with no chance of getting to the people who do), which means no chance of getting quality or timely support. On the other hand, I've also dealt with companies who'll escalate serious issues to the right people, fix important issues very fast, and ship you a personal patch.
One of those companies is Microsoft, which is one of the reasons that we use their software as our platform. If you have a production issue, then you can rely on their support. So my inclination would be to go with Velocity largely on this basis.
Possibly the most important thing though, whichever cache you choose, is to abstract it behind your own interface (e.g. ICache), which will allow you to evaluate a number of them without holding up the rest of the development process. This means that even if your initial decision turns out not to work for you, you can switch it without breaking much of the application.
(Note: I'm assuming here that all caches have sufficient features to support what you need from them, and that all caches have sufficient and broadly similar performance. This may not be a valid assumption, in which case you'll need to provide more detail in your question as to why it isn't).
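A minimal sketch of that abstraction (the names here are made up for illustration; a memcached, NCache, or Velocity implementation would sit behind the same interface):

    using System;
    using System.Web;
    using System.Web.Caching;

    // Hypothetical abstraction; names are illustrative only.
    public interface ICache
    {
        object Get(string key);
        void Set(string key, object value, TimeSpan expiry);
        void Remove(string key);
    }

    // Example implementation wrapping the built-in ASP.NET cache;
    // a distributed provider would implement the same interface.
    public class AspNetCache : ICache
    {
        public object Get(string key)
        {
            return HttpRuntime.Cache.Get(key);
        }

        public void Set(string key, object value, TimeSpan expiry)
        {
            HttpRuntime.Cache.Insert(key, value, null,
                DateTime.Now.Add(expiry), Cache.NoSlidingExpiration);
        }

        public void Remove(string key)
        {
            HttpRuntime.Cache.Remove(key);
        }
    }

The application code only ever talks to ICache, so swapping the provider is a matter of changing which implementation gets constructed (or injected).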
You could also add Oracle Coherence to your list. It has both .NET and Java APIs.
From Microsoft: AppFabric
Commercial: NCache
Open source: Riak
We tried a couple. In the end we use the SQL session provider for ASP.NET/MVC; yes, there is the overhead of the connection to the DB, but our DB server is very fast and the web farm has loads of capacity, so it's not an issue.
Very interested in Riak: it has a .NET client, is used by Yahoo, and can be scaled to many, many servers.