Deploying Flex applications - apache-flex

I have a Flex application which has to be deployed on some server. The typical form of access would be invoking the URL. How do I go about it?
Should I have multiple instances of the application running on the same server, or deploy the application on different servers and use a load balancer for routing?
If I must have multiple instances, how do I do that?
On a given day, the application is expected to get around 2000-3000 hits. What are the factors to be kept in mind during deployment?
Any pointers would be helpful.
Thanks.

I'm actually a bit unsure what specifically you're asking, so I'll take your questions one by one.
I have a Flex application which has to be deployed on some server. The typical form of access would be invoking the URL. How do I go about it?
Put your SWF files on a web server. For best results export a release build first. Flash Builder makes this easy.
Should I have multiple instances of the application running on the same server, or deploy the application on different servers and use a load balancer for routing?
Probably not, but it depends. A SWF is just a binary asset. As far as the server is concerned, it is no different from a JPG, GIF, or PNG file. Whether or not you need a load balancer to serve the SWF depends on the size of the SWF, the number of simultaneous hits you expect, other traffic on the server, the bandwidth of your hosting provider, and probably a whole slew of other considerations that escape me at the moment.
If your SWF is making calls to the server (very common in Flex applications), that may also be a consideration.
If I must have multiple instances, how do I do that?
Multiple instances of what? Of the SWF? Why would you need to do that? As I said, as far as the server is concerned a SWF is just a binary asset. In theory you could keep as many copies of the file on your server as you want; in practice most people just use a single one.
On a given day, the application is expected to get around 2000-3000 hits. What are the factors to be kept in mind during deployment? Any pointers would be helpful.
That strikes me as a low-traffic site; however, it depends on what you're doing.
Despite my answer, I have to vote to close, as your question is vague and ambiguous. I'm not sure what you want to know.

I think you're missing a basic piece of information about your application.
As long as you create Flex/Flash applications and put them on your server, they will always be SWF files executed client side.
So I think you don't have to worry about the workload of your server, because it isn't your server that runs the application but the client PC.
As long as your server can manage 2000-3000 hits per day, you can be quite sure it will always run smoothly.
Claudio.

Related

How can I synchronize physical files manually in a Kentico web farm?

We have a non-standard Kentico architecture which Kentico have advised is supported as long as synchronization of physical files between load balanced servers is disabled and handled manually. What is the correct way to manually synchronize web farm server files? I wondered about using a tool like DirSync but assume this would require one server to act as the primary, whereas with Kentico a new media file, for example, may be initially saved to any of the physical servers.
I'm hoping to identify a definitive solution to this issue. Thanks.
A Kentico web farm by default synchronizes physical files automatically if the web farm is working properly. As each request can be served by a different server, Kentico serializes the file binary into the database, which is shared by all servers, and then re-creates the file on the server where it is missing.
I'm not aware of any situation where web farms are supported, but file synchronization isn't. It's either all or nothing, there is no middle solution.
Can you be more specific about why the synchronization of physical files is not working on your end? As long as all servers see the database (which they should, otherwise the web farm is not working at all), the file synchronization will work.
PS: If your files are not synchronized, go to the Web farm -> Tasks application and check how many tasks are there. If there are no tasks (or very few which are being deleted constantly), then your web farm is working; if there are tasks older than a few minutes, then your web farm is not working at all.
I read the thread above and would recommend you take a look at this tool from BizStream: https://devnet.kentico.com/marketplace/modules/compare-for-kentico
I haven't gotten to play with it myself, but they are a top-notch shop so I can bet it's a top-notch product.
Otherwise you are going to have to go the custom sync code route.
We've tried to do moves via the SQL tables and it is "possible", but the amount of interconnected relationships just makes it quite unrealistic to build or support.
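To make the "custom sync code" option concrete, here is a minimal sketch of a one-way mirror between media folders, assuming each node exposes its media folder as a UNC share. The paths, share names, and conflict handling are all hypothetical; a real implementation would need to deal with deletes, renames, and two-way conflicts according to your own rules.

using System;
using System.IO;

// Hypothetical one-way mirror of a media folder to a peer server's share.
// Run it on each node (e.g. from a scheduled task); paths are placeholders.
class MediaFolderMirror
{
    const string LocalMediaRoot = @"C:\inetpub\wwwroot\Kentico\media"; // assumed local path
    const string PeerMediaRoot  = @"\\WEB02\KenticoMedia";             // assumed UNC share on the other node

    static void Main()
    {
        MirrorDirectory(LocalMediaRoot, PeerMediaRoot);
    }

    static void MirrorDirectory(string source, string target)
    {
        Directory.CreateDirectory(target);

        foreach (string sourceFile in Directory.GetFiles(source))
        {
            string targetFile = Path.Combine(target, Path.GetFileName(sourceFile));

            // Copy only files that are missing on the target or newer on the source.
            if (!File.Exists(targetFile) ||
                File.GetLastWriteTimeUtc(sourceFile) > File.GetLastWriteTimeUtc(targetFile))
            {
                File.Copy(sourceFile, targetFile, overwrite: true);
            }
        }

        foreach (string sourceDir in Directory.GetDirectories(source))
        {
            MirrorDirectory(sourceDir, Path.Combine(target, Path.GetFileName(sourceDir)));
        }
    }
}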

Keep track of a folder's activities

I want to monitor the activities of all the folders present at "C:\Inetpub\ftproot\san". Users can work on any type of file, not only text files. Since we have given 1 GB of space (let's say) to each user, a user can do anything to utilize this space.
Now I want to monitor the activities the user performs in his folder, like creating a new file, deleting an existing file, or editing a file. I want to monitor the user's activities because I have to keep track of the space given to the user, so that I can restrict the user to 1 GB of space and no more.
Is there any class I can use other than FileSystemWatcher, as it works only in console applications and not in web applications?
Any help would be highly appreciated.
Many thanks
FileSystemWatcher should work just fine in a web application, but the problem is that the web application isn't always on. If nobody accesses it for a while, it can be shut down to free resources for other things and then started again when next accessed. It can also be re-started easily when things within its own structure change (such as its config file). It's very stateless and transient.
How do you plan to use this information within the web application? Does your web application really need to be constantly watching, or does it just need to generate a report of the current state of the filesystem when requested? If you really need the former, then the aforementioned nature of how web applications run on the server will make things a bit unreliable. Maybe a Windows service running on the web server would be more up to the task?
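If you do go the Windows service route suggested above, a minimal sketch might look like the following. The watched path is taken from the question; the logging is a placeholder and the service name/structure are assumptions, not a finished implementation.

using System.IO;
using System.ServiceProcess;

// Minimal sketch of a Windows service hosting FileSystemWatcher.
// Event handling is reduced to appending to a log file; a real service would
// persist the information somewhere the web application can read it.
public class FolderWatcherService : ServiceBase
{
    private FileSystemWatcher watcher;

    protected override void OnStart(string[] args)
    {
        watcher = new FileSystemWatcher(@"C:\Inetpub\ftproot\san")
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size | NotifyFilters.LastWrite
        };

        watcher.Created += (s, e) => Log("Created: " + e.FullPath);
        watcher.Changed += (s, e) => Log("Changed: " + e.FullPath);
        watcher.Deleted += (s, e) => Log("Deleted: " + e.FullPath);
        watcher.Renamed += (s, e) => Log("Renamed: " + e.OldFullPath + " -> " + e.FullPath);

        watcher.EnableRaisingEvents = true;
    }

    protected override void OnStop()
    {
        if (watcher != null) watcher.Dispose();
    }

    private void Log(string message)
    {
        // Placeholder: replace with a database write, event log entry, etc.
        File.AppendAllText(@"C:\Temp\folder-activity.log", message + "\r\n");
    }

    public static void Main()
    {
        ServiceBase.Run(new FolderWatcherService());
    }
}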
First, I would break this down into:
What are the possible ways users can modify the contents of the folder?
What are you trying to accomplish/present to the user via the Web interface?
One way to do this somewhat simply (in a sense) would be to maintain a service on the machine that periodically monitors the directory for the information you need (size, number of files, whatever) and connect this (via something like WCF) to the actual web application. In effect, you'd have a semi-soft limit, in that for a period users could operate on more than 1 GB; there are obviously corrective measures you could take, but this way you don't actually have to monitor every action of every user in real time.
Off the cuff, I would think that you need some sort of service to use the FileSystemWatcher class. The only way to really watch over the directory using ASP.NET is if you are controlling how all files get added and deleted from the directory. If that is the case, then you can add code to skim through the directory and add up the sizes of everything in there pretty easily.
If they can put files in these folders with other applications (such as an FTP client) then you are going to need a service to watch over the folders.
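For the "skim through the directory and add up the sizes" approach mentioned above, a rough sketch is below. The 1 GB quota comes from the question; the helper name and folder layout are assumptions.

using System.IO;
using System.Linq;

// Totals the size of a user's folder and compares it against the 1 GB quota.
static class QuotaCheck
{
    const long QuotaBytes = 1L * 1024 * 1024 * 1024; // 1 GB per user

    public static bool IsOverQuota(string userFolder)
    {
        long totalBytes = new DirectoryInfo(userFolder)
            .EnumerateFiles("*", SearchOption.AllDirectories)
            .Sum(f => f.Length);

        return totalBytes > QuotaBytes;
    }
}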
A better way of doing this is to let your web app run as a portal to what's happening, but you will need a Windows service running to ensure that someone does not go over the space allotment.
The service would also be able to help give data to your portal.
Remember, a website only runs when someone calls it. So if you don't use your website for 5 days, nothing will monitor it.
Sure you could keep a web page open for X amount of days, but that's overkill.

ASP.NET AJAX JavaScript files served from a static location

I realise that this is going to be a fairly niche requirement and will almost certainly raise a few "WTF's" but here goes...
Within an ASP.NET Webforms application I need to serve static content from a local client machine in order to reduce up-front bandwidth requirements as much as possible (Security policy has disabled all Browser caching). The idea is to serve CSS, images and JavaScript files from a location on the local file system referenced by filesystem links from within the Web application (Yes, I know, WTF's galore but that's how it is). The application itself will effectively be an Intranet app that's hosted externally from a client but restricted by IP range along with standard username/password security. So it's pretty much a hybrid Internet/Intranet application but we can easily roll out packages of files to client machines. I am not suggesting that we expect nor require public clients to download packages of files. We have control to an extent over the client machines in terms of the local filesystem and so on but we cannot change the caching policy.
We're using UpdatePanel controls to perform partial page updates, which obviously means that we need the Microsoft AJAX JavaScript files. Presently these are being served (as standard) by a standard resource handler within IIS/ASP.NET. Ideally I would like to be able to take these JS files and reference them statically from a client machine, and no longer serve them via an AXD.
My questions are: Is this possible? If it is possible, how do we go about doing so?
In order to attempt to pre-empt some WTF's the requirement stems from attempting to service a requirement with as little time and effort as possible whilst a more suitable solution is developed. I'm aware that we can lighten the load, we can switch to jQuery AJAX updates, we can rewrite the front-end in MVC etc. but my question is related to what we can quickly deploy with our existing application architecture.
Many thanks in advance :)
Lorna,
maybe your security team is going overboard. What is the difference between serving dynamic HTML generated by the server and dynamic JS generated by the server?
It does not make any sense. You should try talking them out of it.
What is the average size of your pages and viewstate data? You might need to store viewstate in SQL Server rather than sending it to the client browser every time.
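The answer above mentions keeping viewstate out of the client response. The standard extension point for that is overriding PageStatePersister on a base page; the sketch below uses the built-in SessionPageStatePersister (which keeps state in session rather than SQL Server), and a SQL-backed variant would implement a custom persister following the same pattern. This is an illustrative sketch, not necessarily what the answer had in mind.

using System.Web.UI;

// Base page that keeps most viewstate on the server instead of in the page payload.
public class ServerStatePage : Page
{
    protected override PageStatePersister PageStatePersister
    {
        get { return new SessionPageStatePersister(this); }
    }
}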

Gotchas: Upgrading from single servers to web farms

Our company currently runs two Windows 2003 servers (a web server and a MSSQL 8 database server). We're planning to add another couple of servers for redundancy/availability purposes in a web farm setup. Our web sites are predominantly ASP.NET; we do have a few PHP sites, but these are mainly static with no DB.
Does anyone who has been through this process have any gotchas or other points I should be aware of? And would using Windows Server 2008 offer any additional advantages for this situation (so I can convince my boss to upgrade :) ?
Thanks.
If you have dynamic load balancing (i.e. my first request goes to server X, but my next request may go to server Y or Z), you will find out that in-process sessions do not work. So you will either need sticky sessions (your load balancer will ALWAYS send me (= my session) to server X) or out-of-process sessions (i.e. stored in SQL Server).
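One practical consequence of moving to out-of-process sessions (StateServer or SQLServer mode) is that everything you put into Session must be serializable. A tiny illustrative sketch, with a hypothetical UserProfile class standing in for whatever you store in session:

using System;

[Serializable]
public class UserProfile
{
    public int UserId { get; set; }
    public string DisplayName { get; set; }
}

// Usage inside a page or handler (sketch):
//   Session["profile"] = new UserProfile { UserId = 42, DisplayName = "Jane" };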
Like Michael says, you'll need to take care of your session. Ideally make it lean and store it out of process. You'll have a similar challenge with caching, depending on how you use it, and you might be interested in looking towards a more robust caching technology if you only use ASP.NET caching.
Don't forget things like machine keys and validation in your web.config. The machineKey needs to be consistent across your servers.
Read up on IIS7 and you should be able to pick out several good examples to show off to your boss.
A web farm can give you opportunities and challenges with deployment that should not be overlooked.
I don't have specific experience with the setup above, but speaking to general moves of this kind, I would recommend phasing the approach: move to Windows 2008 first, and then farm.
One additional thing to look at is your deployment plan. Deployment plans seem to be sadly overlooked and/or undervalued. Remember that you are deploying to multiple nodes and you want to take into account how you want to deploy and test in a logical fashion.
For example, assume you have four nodes in your farm. Do you pull two out of the cluster, update and test, then swap out the other two and repeat? Determine whether your current deployment process fits in with the answer you provide. Just because you have X times the number of servers does not mean that you want or need to do X times the amount of work.
Just revisiting the caching part of the conversation for a moment. You should definitely take a look at a distributed caching solution. If you are pre-caching data and using callbacks with cache removals, you can really put a pounding on the database if you are not careful. Also, a lot of the distributed caching solutions offer some level of session state management, as well. I have been very much enjoying Microsoft's Velocity project, although it is just a second CTP release and not ready for production.
In addition to what others have said, you might want to consider looking into Richard Campbell's (of .NET Rocks!) product:
http://www.strangeloopnetworks.com/
We use the ASP.NET State Server for handling our sessions. This comes free with Windows Server 2003/2008.
We then have to make sure the machine keys are the same (a setting in your web.config files).
I then manually take each site offline (using app_offline.htm). Alternatively, you can use IIS and just turn the site off and the offline site on.
That's about it. You could worry about distributed caching, but that's pretty hard-core stuff. You can get a lot of good mileage out of the default output caching with ASP.NET. I'd start there before you delve into the complexity (and cost, for some products) of distributed caching.
Oh, we're using an F5 load balancer that does NOT do sticky sessions, so we need to maintain our sessions ourselves, which is why we're using the ASP.NET state server.
One other gotcha aside from the Session issues described by the other posters is if the apps are writing to the local file system. Scaling out to a web farm would break the apps if they assume the files are on the local PC. For example, uploaded files might be available or not depending on which server is hit. Changing the paths to point to a shared drive should fix this.
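To illustrate the "point the paths at a shared drive" fix above, a small sketch is below. It resolves the upload root from configuration so every node in the farm writes to the same UNC share; the "UploadRoot" appSettings key and the share path are hypothetical, e.g. <add key="UploadRoot" value="\\FILESERVER\Uploads" /> in web.config.

using System.Configuration;
using System.IO;
using System.Web;

// Saves an upload to a configured shared location instead of the local site folder.
public static class UploadStorage
{
    public static string Save(HttpPostedFile file)
    {
        string root = ConfigurationManager.AppSettings["UploadRoot"];
        string path = Path.Combine(root, Path.GetFileName(file.FileName));
        file.SaveAs(path);
        return path;
    }
}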

How do I cluster an upload folder with ASP.Net?

We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment, however we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content, which might include photos and such. These were legacy applications and we did not want to edit them to use a database, and a SAN solution was too expensive for our situation.
We ended up using a file replication service on the two clustered servers. This ran as a service on both machines using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service sync'd the data in the file system folders making it available to be served from either web server.
Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.
In our scenario, we have a separate file server that both of our front-end app servers write to; that way either server has access to the same set of files.
The best solution for this is usually to provide the shared area on some form of SAN, which will be accessible from all servers and contain failover.
This also has the benefit that you don't have to provide sticky load balancing, the upload can be handled by one server, and the edit by another.
A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for windows?
The problem with a simple shared filesystem is the lack of redundancy (what if the file server goes down?).
