What's the best way to structure this kind of remote service? - asp.net

I'm not sure if this is technically a web service or not but I have a Flash file that periodically needs to make a round trip to a DB. As it stands, AS3 uses the URLLoader class to exchange XML with an ASP.NET/VB file on the server. The aspx code then goes to the DB and returns whatever information is requested back to the Flash file.
As my program grows and I need to execute a larger variety of tasks on the server, I'm wondering if I should just keep placing functions in that same aspx file and specify in AS3 which function to call for any given task, or whether it's better to break my functionality up into several different aspx files and call the appropriate file for each task.
Are there any obvious pros and cons to either method that I should consider?
(Note: I have put all of my VB functions on the aspx pages rather than in the code-behind files because I was having trouble accessing the I/O stream from the code-behind.)
Thanks.
T
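For illustration, a minimal sketch (in C# rather than the asker's VB, with hypothetical names) of the kind of endpoint described above: the code-behind reads the XML that URLLoader posts from Request.InputStream and writes an XML reply back.

    // Sketch: reading posted XML and returning XML from code-behind (names are hypothetical)
    using System;
    using System.Xml;

    public partial class TaskService : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // The XML sent by AS3's URLLoader is available on the raw request stream,
            // in code-behind just as in inline aspx code.
            var requestXml = new XmlDocument();
            requestXml.Load(Request.InputStream);

            string task = requestXml.DocumentElement.GetAttribute("task");

            Response.ContentType = "text/xml";
            Response.Write("<response task=\"" + task + "\">...</response>");
        }
    }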

Since you say you need to execute a large variety of tasks, you should think about breaking the code down into multiple files. This question cannot be answered in general, and the solution is always specific to the problem, but this might help:
Reasons to keep all code in one file on the server side:
Code for different tasks heavily depends on each other
Effort for separating the tasks into files is too high
The variety/count of different tasks is manageable
You are the only developer
Every task works correctly and is fail-safe (since they are all in one file, I assume one error will break all tasks)
Reasons to separate tasks into different files:
The file is getting too big, unreadable and unmaintainable
Different tasks should not depend on each other (Separation of concerns)
There are multiple developers working on different tasks
Many new tasks will be added
A task could contain errors and should not break every other task
That is all I can think of right now; you will surely find more reasons yourself. As said, I would separate the tasks, as I think the effort is not too high.
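As a rough sketch of the middle ground (C#, inside the same code-behind as above, with hypothetical task names and helpers): even while everything still lives in one file, routing on a task name keeps each task in its own method, which makes a later split into separate aspx/ashx files much cheaper.

    // Sketch: dispatching by task name inside a single endpoint (helpers are hypothetical)
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.ContentType = "text/xml";
        switch (Request.QueryString["task"])
        {
            case "getScores":
                Response.Write(LoadScoresXml());                    // hypothetical DB helper
                break;
            case "saveScore":
                Response.Write(SaveScoreXml(Request.InputStream));  // hypothetical DB helper
                break;
            default:
                Response.StatusCode = 400;
                Response.Write("<error>unknown task</error>");
                break;
        }
    }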


What are the technical reasons to use multiple, smaller files instead of one large JS file? [duplicate]

Is there a difference between having one large Javascript file compared to having many different Javascript files?
I recently learned that a separate application at my office contains two Javascript files for everything it requires. They're almost 2 MB and contain roughly 40K lines of code each.
From a maintainability standpoint, that is obviously awful. I can't imagine dealing with that in SVN. But does it actually make a difference in the performance of the application?
There is a setting for chunked transfer encoding in IIS, but I know little about it beyond what's mentioned in the article there. The "Rationale" section doesn't seem particularly relevant to Javascript. It seems more important for the "actual" pages in the application and communicating back and forth between the client and server.
Tagged with ASP.NET since the setting is under the "ASP" section of IIS... If that's not actually related, please edit and remove the tag, or let me know and I can.
Javascript files are often combined in production environments to cut down on server requests and HTTP overhead. Each time you request a resource, it takes a round trip from the client to the server, which affects page load speed.
Each separate request also incurs HTTP overhead: extra data attached to the request/response headers that must be transferred as well. Some of this will change with the adoption of HTTP/2, and smaller files will become more efficient.
From a maintainability perspective, you'd never want to deal with files that large. Ideally, each JS file should be broken up into a logical module and stored independently in SVN. That makes it easier for the developers to work with and keep track of changes. Those small, modular files would then go through a build process to combine and possibly minify/uglify them to get them ready to be served in a production environment.
There are tons of tools you can use to automate this build process, like Gulp, Grunt, or npm scripts. Some .NET content management systems like DNN have settings that allow you to do this automatically in production.
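Since the question is tagged ASP.NET, one concrete option on that stack is the System.Web.Optimization bundling support, which combines and minifies the files at runtime. A minimal sketch, with hypothetical file paths:

    // Sketch: ASP.NET bundling/minification (paths are hypothetical)
    using System.Web.Optimization;

    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {
            // Many small, maintainable source files in source control...
            bundles.Add(new ScriptBundle("~/bundles/app").Include(
                "~/Scripts/app/module1.js",
                "~/Scripts/app/module2.js",
                "~/Scripts/app/module3.js"));

            // ...served as one combined, minified request in production.
            BundleTable.EnableOptimizations = true;
        }
    }

RegisterBundles is called once from Application_Start, and pages then reference the bundle's virtual path instead of the individual files.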

What are the benefits of deploying binary dll for website rather than source code?

I have a small internal app, and I am arguing with myself about why I should not just copy the entire source folder to production, as opposed to Publish, which compiles the .cs files into .dlls.
But I can't think of any realistic benefits one way or the other, other than reducing the temptation to make direct logic changes in production. What do you think?
It eliminates the temptation to just change that one little thing in production...
Also, it secures the code against malicious changes, it adds extra steps between "build" and "deploy" which can act as a natural QA speed bump, it improves start-up time, and a billion other things.
Two main things:
As antisanity points out, it lets you verify that all the pages on your site actually compile, which goes a long way toward catching a number of bugs before they get very far.
The website will end up compiling these files the first time they get accessed anyway. By precompiling them, you'll save time on the first load, which will make your application feel a little more responsive to a few of your users.
Well, I can agree with you only if you're talking just about views. If you're talking about controllers, I guess you'd need 'em compiled in order to run :).
Okay, joking aside, I'm for a complete binary deployment mainly for:
being sure that my code compiles (at least)
speed up view generation (or first time compile)
simplify management of patches (I deliver just a dll and not the entire webapp)
regards
M.
Well, for one thing... it makes sure that your site compiles.
Apart from that, check out Hanselman's Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong
There are a number of reasons why you should publish your application:
It will perform better;
You know that the code compiles;
It's cleaner (no .cs files cluttering the folder);
Some security benefits by not exposing the source code;
You can package your application for deployment to testing, staging, and production
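For reference, the precompilation that Publish performs can also be run directly with the aspnet_compiler tool (the paths here are hypothetical):

    aspnet_compiler -v / -p "C:\source\MySite" "C:\deploy\MySite"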

ASP.Net: Generating a file to download

I have a file that I need to copy, run a command against the copy that specializes it for the person downloading it, and then provide that copy to a user to download. I'm using ASP.Net MVC2, and I've never done anything like this. I've searched around for the simplest way to do it, but I haven't found much, so I've come up with a plan.
I think what I'll do is generate a GUID, which will become the name of a folder I'll generate at the same level as the source file that the copy is made from. I'll then copy the file to that folder, run my command against it, provide a link to the file, and have some service that runs every now and then and deletes directories that are more than a day old.
Am I overthinking this? Is there an easier, simpler, or at least more formal way to do this? My way seems a bit convoluted and messy.
Can you process in memory and stream it to the client with a handler?
That is how I do things like this.
Basically, your download link points to an HttpHandler, typically async, that performs the processing and then streams the bits with a content-disposition of 'attachment' and a filename.
EDIT: I don't do much MVC, but what Charles is describing sounds like an MVC version of what I describe above.
Either way, processing in memory and streaming it out is probably your best bet. It obviates a lot of headaches, code and workflow you would have to maintain otherwise.
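A minimal sketch of that handler approach (C#; the file name and the Specialize step are hypothetical stand-ins for the real file and command):

    // Sketch: specialize the file in memory and stream it as an attachment
    using System.Web;

    public class DownloadHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            byte[] source = System.IO.File.ReadAllBytes(
                context.Server.MapPath("~/App_Data/template.bin"));               // hypothetical source file
            byte[] personalized = Specialize(source, context.User.Identity.Name); // hypothetical step

            context.Response.ContentType = "application/octet-stream";
            context.Response.AddHeader("Content-Disposition", "attachment; filename=yourcopy.bin");
            context.Response.BinaryWrite(personalized);
        }

        private static byte[] Specialize(byte[] input, string user)
        {
            // Placeholder for the real "run a command against the copy" step.
            return input;
        }
    }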
What sort of command do you have to run against it?
Because it would be ideal to process it in memory in a controller's action using MVC's FileResult to send it to the client.
Charles
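In MVC2 terms, a sketch of what that looks like (hypothetical names again; Specialize stands in for the command):

    // Sketch: an MVC controller action returning the specialized copy as a download
    using System.Web.Mvc;

    public class DownloadsController : Controller
    {
        public ActionResult Personalized()
        {
            byte[] source = System.IO.File.ReadAllBytes(
                Server.MapPath("~/App_Data/template.bin"));               // hypothetical source file
            byte[] personalized = Specialize(source, User.Identity.Name); // hypothetical step

            // FileResult sets the Content-Disposition header for us.
            return File(personalized, "application/octet-stream", "yourcopy.bin");
        }

        private static byte[] Specialize(byte[] input, string user) { return input; }
    }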

Performing bulk processing in ASP.NET page

We need the ability to send out automatic emails when certain dates occur or when some business conditions are met. We are setting up this system to work with an existing ASP.NET website. I've had a chat with one of the other devs here and had a discussion of some of the issues.
Things to note:
All the information we need is already modelled in the ASP.NET website
There is some business-logic that is required for the email generation which is also in the website already
We decided that the ideal solution was to have a separate executable that is scheduled to run overnight and do the processing and emailing. This solution has 2 main problems:
If the website was updated (business logic or model) but the executable was accidentally missed, then the executable could stop sending emails or, worse, be sending them based on outdated logic.
We are hoping to use something like this to template the emails with UserControls, which I don't believe is possible outside of an ASP.NET website
The first problem could have been avoided with build and deployment scripts (which we're looking into at the moment anyway), but I don't think we can get around the second problem.
So the solution we decided on is to have an ASP.NET page that is called regularly by SSIS and to have that do a set amount of processing (say 30 seconds) and then return. I know an ASP.NET page is not the ideal place to be doing this kind of processing but this seems to best meet our requirements. We considered spawning a new thread (not from the worker pool) to do the processing but decided that if we did that we couldn't use the page returned to signify a success or failure. By processing within the page's life-cycle we can use the page content to give an indication of how the processing went.
So the question is:
Are there any technical problems we might have with this set-up?
Obviously if you have tried something like this any reports of success/failure will be appreciated. As will suggestions of alternative set-ups.
Cheers,
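For concreteness, a minimal sketch of the time-boxed page described above (C#, in the page's code-behind; the work-queue and send calls are hypothetical):

    // Sketch: process work for a bounded time and report the outcome in the response
    protected void Page_Load(object sender, EventArgs e)
    {
        TimeSpan budget = TimeSpan.FromSeconds(30);
        DateTime started = DateTime.UtcNow;
        int sent = 0, failed = 0;

        while (DateTime.UtcNow - started < budget)
        {
            PendingEmail next = EmailQueue.TakeNext();   // hypothetical data access
            if (next == null) break;
            if (TrySend(next)) sent++; else failed++;    // hypothetical send logic
        }

        Response.ContentType = "text/plain";
        Response.Write(string.Format("sent={0} failed={1}", sent, failed));
    }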
Don't use the asp.net thread to do this. If the site is generating some information that you need in order to create or trigger the email-send then have the site write some information to a file or database.
Create a Windows service or scheduled process that collects the information it needs from that file or db and runs the email-sending process on a completely separate process/thread.
What you want to avoid is crashing your site or crashing your emailer due to limitations within the process handler. Based on your use of the word "bulk" in the question title, the two need to be independent of each other.
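A rough sketch of that separate process (C#; the table, columns, and SMTP host are hypothetical), run as a scheduled console app or wrapped in a Windows service:

    // Sketch: a standalone emailer that drains a queue written by the website
    using System.Data.SqlClient;
    using System.Net.Mail;

    class EmailerJob
    {
        static void Main()
        {
            var smtp = new SmtpClient("mail.example.com");                        // hypothetical SMTP host
            using (var conn = new SqlConnection("...connection string..."))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT ToAddress, Subject, Body FROM PendingEmails", conn);  // hypothetical table
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        smtp.Send("noreply@example.com",
                                  reader.GetString(0), reader.GetString(1), reader.GetString(2));
                        // Mark the row as sent here (omitted for brevity).
                    }
                }
            }
        }
    }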
I think you should be fine. We have used a similar approach in our company for several years and haven't had a lot of problems. Sometimes it takes over an hour to finish the process. Recently we moved the second thread (as you said) to a separate server.
Having the emailer and the website coupled together can work, but it isn't really a good design and will be more maintenance for you in the long run. You can get around the problems you state by doing a few things.
Move the common business logic to a web service or common library. Both your website and your executable/WCF service can consume it, and it centralizes the logic. If you're copying and pasting code, you know there's something wrong ;)
If you need a template mailer, it is possible to invoke ASP.Net classes to create pages for you dynamically (see the BuildManager class, and blog posts like this one). If the mailer doesn't rely on Page events (which it doesn't seem to), there shouldn't be any problem for your executable to load a Page class from your website assembly, build it dynamically, and fill in the content.
This obviously represents a significant amount of work, but would lead to a more scalable solution for you.
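A sketch of what that can look like (assuming a hypothetical ~/EmailTemplates folder and templates that don't rely on postback events), called from within the web application's context:

    // Sketch: rendering an .aspx template to a string for use as an email body
    using System.IO;
    using System.Web;
    using System.Web.Compilation;
    using System.Web.UI;

    public static class EmailTemplateRenderer
    {
        public static string Render(string virtualPath)
        {
            // Compiles (or reuses) the page class from the website itself,
            // so the same template serves both web and email.
            var page = (Page)BuildManager.CreateInstanceFromVirtualPath(virtualPath, typeof(Page));

            using (var writer = new StringWriter())
            {
                HttpContext.Current.Server.Execute(page, writer, false);
                return writer.ToString();
            }
        }
    }

    // Usage (hypothetical template path):
    // string body = EmailTemplateRenderer.Render("~/EmailTemplates/Reminder.aspx");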
Sounds like you should be creating a worker thread to do that job.
Maybe you should look at something like https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
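The approach in that post boils down to using cache expiration as a timer; roughly (a sketch, with DoWork standing in for your date/condition checks):

    // Sketch: the cache-expiration trick for recurring background work in ASP.NET
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class BackgroundTask
    {
        private const string Key = "background-task";

        public static void Start()   // call from Application_Start
        {
            HttpRuntime.Cache.Insert(Key, DateTime.Now, null,
                DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration,
                CacheItemPriority.NotRemovable, OnRemoved);
        }

        private static void OnRemoved(string key, object value, CacheItemRemovedReason reason)
        {
            DoWork();   // hypothetical: check dates/conditions and queue the emails
            Start();    // re-register so the task keeps firing
        }

        private static void DoWork() { /* ... */ }
    }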
You can and should build your message body (the templated message body) within your domain logic (that is, your ASP.NET application) when the business conditions are met, and then hand it off to an external service whose only job is to send your messages. That way every message will have the proper information.
For "when certain dates occur" scenario you can use simple solution for background tasks (look at Craig answer) and do the same as above: parse template, build message and fast send to specified service.
Of course, you should do this safely so that app pool restarts do not break your tasks.

Concurrency ASP.NET best-practices worst-practices

In which cases do you need to watch out for concurrency problems (and use a lock, for instance) in ASP.NET?
Are there any 'best practices' around on this topic?
Documentation?
Examples?
'worst practices...' or things you've seen that can cause a disaster...?
I'm curious about, for instance, singletons (even though they are considered bad practice - don't start a discussion on this), static functions (do you need to watch out here?), ...?
Since ASP.NET is a web framework and is mainly stateless, there are very few concurrency concerns that need to be addressed.
The only thing that I have ever had to deal with is managing application cache but this is easily done with a cache-management type that wraps the .NET caching mechanisms.
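A sketch of such a wrapper (hypothetical names), using a lock so two requests don't rebuild the same cached item at the same time:

    // Sketch: wrapping HttpRuntime.Cache with a lock around the build-on-miss path
    using System;
    using System.Web;
    using System.Web.Caching;

    public static class CacheManager
    {
        private static readonly object Sync = new object();

        public static T GetOrAdd<T>(string key, Func<T> build, TimeSpan ttl) where T : class
        {
            var cached = HttpRuntime.Cache[key] as T;
            if (cached != null) return cached;

            lock (Sync)
            {
                // Re-check inside the lock: another request may have built it already.
                cached = HttpRuntime.Cache[key] as T;
                if (cached != null) return cached;

                cached = build();
                HttpRuntime.Cache.Insert(key, cached, null,
                    DateTime.UtcNow.Add(ttl), Cache.NoSlidingExpiration);
                return cached;
            }
        }
    }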
One huge problem that caused us a lot of grief was using Modules vs. Classes in our main Web Service. This was before we really knew what we were doing and has since been fixed.
The big problem with using modules is that, by default, any module-level variables are shared across every request handled by the ASP.NET worker process. We pass in multiple datasets, manipulate them, and then return them to the client. Because we were using modules, the variables holding these datasets were getting corrupted by multiple calls occurring at the same time.
This was not caught in testing and was difficult to reproduce until we figured out how to properly load test our web services. It took something like 10-20 requests per second before we could reproduce it accurately.
In the end, we just changed all the modules to classes and used those classes instead of calls to the modules. This cleared up the concurrency issue, as each instantiated class had its own copy of the dataset in memory.
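In C# terms the problem looks roughly like this (a sketch; a VB Module behaves like a static class, so module-level variables are one shared copy for all concurrent requests):

    // Sketch: why module-level (shared) state breaks under concurrent requests
    using System.Data;

    public static class OrderModule                 // analogue of a VB Module
    {
        public static DataSet Current;              // ONE copy shared by every request

        public static void Process(DataSet ds)
        {
            Current = ds;                           // request B can overwrite request A's data here
            // ... work against Current ...
        }
    }

    public class OrderProcessor                     // the fix: one instance per call
    {
        private DataSet current;                    // each instance gets its own copy

        public void Process(DataSet ds)
        {
            current = ds;
            // ... work against current ...
        }
    }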
