Flex writing to its own directory - apache-flex

I have a Flex application I’m writing (a learning exercise) that I’d like to run off a network drive for many users to access. I’d like users to be able to save high scores on the network.
Users have read/write access to the network location it's on.
I don’t want to change anything on the computers that might use it (i.e. install AIR) or touch IE/Firefox settings; they are just at the defaults.
I don’t want to run a server (e.g. PHP).
Is there any way to do it?
Cheers

Nope, not without AIR. And even then with difficulty. Flex runs within the context of the browser, and only has available to it the resources available to the browser (for obvious security reasons).
Flash enjoys a unique position of corporate trust for reliability and safety, and Adobe does everything possible to protect that position. So you're sandboxed.
The best I can think of is to put together something that serves a URL and a common or custom read-write protocol - probably not trivial.

You will have to use a backend to access any of those resources. E.g., if you're using BlazeDS then you can just use Java to write to the network. You will have a server anyway to host your application.

You really want to use a backend technology for this. If you're dead set against it, Flash Player 10 can write files to the local filesystem. You could probably trick it into using a network resource by referencing it as a mapped drive or maybe even a named host.
http://livedocs.adobe.com/flex/3/langref/flash/net/FileReference.html#save()
You can also use the "load()" method of FileReference to read a local file into your Flex application.
I really don't recommend you build an application around this, but it looks like it could be done. The caveat here is that these actions can happen only if the user specifically chooses a location for a file: they need to select the file you want to load or choose the location where a file is saved.

Related

Accessing Environment Variables in Flash

I know I cannot access environment variables directly in Flash.
My project is a local SWF file, run from the Flash Player and not through a browser.
The goal is to prevent the SWF from being played on an unauthorized PC
(this is my client's requirement).
My idea was to embed it into an EXE (made in Delphi, for instance) as an ActiveX control.
I am not sure it is the best solution.
I think AIR would be even more complex to implement.
Besides, how do I forbid direct access to the SWF?
Maybe by embedding the SWF somehow?
Any suggestions or tips are welcome.
regards
I'm going to preface this by saying that I don't think there's a 100% way to stop unauthorised access - if there was, there'd be no such thing as pirated copies of Windows or Flash. The best you can do is make it hard to hack.
Some suggestions:
You can actually access environment variables by calling an external process in AIR, using NativeProcess (this link has a quick writeup: http://www.tikalk.com/js/get-windows-environment-variables-air-application) - but it's trivial to hack the .bat or add the env var
You can implement your own serial key system and give out keys to legitimate users. It would ideally need to be verified by a server call (a sketch of such a check follows these suggestions)
You can code a "phone-home" server call - the app won't work without it. How you identify your users is really up to you; you could try via IP, but it's not perfect
You could disable local execution (check out SecureSWF), and run it online, behind a login wall
You could disable local execution, and run it via an intranet, so people in a company can use it, but not the general public
Depending on your app, on startup, you can download necessary files (content) from the web. This can either necessitate a login, or you can block unauthorised IPs. This is how Ubisoft DRM works on some of their games.
In a similar vein, you can download other SWF files that contain the actual logic of your application. These SWFs would only be stored in memory, never saved to disk
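To make the serial key suggestion concrete, here is a minimal server-side sketch in C#. The key format ("PAYLOAD-SIGNATURE"), the class names, and the secret are all assumptions for illustration, not a standard scheme; the same idea works in any server stack.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical serial key scheme: a key looks like "PAYLOAD-SIGNATURE",
// where SIGNATURE is an HMAC of the payload under a secret only the
// server knows. The payload (e.g. a user id) must not contain dashes.
static class SerialKeys
{
    static readonly byte[] Secret = Encoding.UTF8.GetBytes("replace-with-a-real-secret");

    public static bool IsValid(string key)
    {
        string[] parts = key.Split('-');
        if (parts.Length != 2)
            return false;

        using (var hmac = new HMACSHA256(Secret))
        {
            string expected = BitConverter
                .ToString(hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[0])))
                .Replace("-", "");
            // Note: use a constant-time comparison in real code.
            return string.Equals(expected, parts[1], StringComparison.OrdinalIgnoreCase);
        }
    }
}
```

The point of the HMAC is that keys can be checked without storing them all, but you would still want a revocation list for leaked keys.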
With all of these, the app can eventually be hacked open and modified (e.g. your server-check code could be removed, so the phone-home never happens). At the very least, run your SWF through something like SecureSWF (http://www.kindi.com/) to obfuscate the code before any public release.
It all comes down to how much effort you want to put into tackling the issue. For all of the suggestions that involve the internet, if the network is down, you won't be able to use your app, which understandably will cause frustration. For all of the suggestions that don't involve the internet, you will never know whether the protection actually worked.

Is there a solution for a BitTorrent Uploader?

I have a requirement by my client to be able to upload extremely large files.
I'm talking about 7 GB files. The website they are currently running on is an ASP.NET 4.0 app, so obviously the standard upload scheme for my web app is not going to work.
I'm tossing around multiple options trying to figure out what the best route to go would be.
One option I'm considering would be a BitTorrent uploader. The end users for this app will typically have the same file on hand, so the idea would be that an end user would go to the site and say that they want to upload a file. At that point, they would pick the file, and the server would immediately mark that person as a seed for that file. Then my web app would go to a preconfigured leech on our side and instruct the leech to download the file. I would expect that at some point during or after this process the torrent would do some magic to find other seeders on the client's network, or wherever, but that's the idea.
Is there any technology out there already that does this? Or am I describing something that I'm going to have to build from the ground up?
It doesn't sound like it's going to be easy to do this with BitTorrent. In order for BT to work, you need torrent files. In order to create a torrent file for a particular file, you need that file (the torrent file basically contains a hash of the file). In general for a torrent, you need a tracker. You could rely on a public one, but that could be a risky dependency. You could operate your own, but that has other challenges (for one, you'd have to make sure it's locked down so it doesn't become a free-for-all for all the latest movies, music & TV).
Assuming you have a tracker in place, you then need to coordinate the downloading of torrents. Your users are going to have to create the torrent files, which is an extra complicated step, then presumably upload them via usual HTTP methods. As well as getting the user to upload the torrent, you'd have to remind the user to start seeding the torrent in their client of choice. You'd then want to automatically begin leeching the torrent (again, security issue here - what if a user uploads a completely unrelated torrent for the latest episode of House?). Apart from the security problem, this is probably the easiest part - most torrent clients can be configured to watch a directory and automatically start downloading torrent files in that directory. Once you've started downloading, you have to make sure that the user continues seeding the torrent until you've completed, otherwise you'll be stuck with a useless partial file.
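To make the "the torrent file basically contains a hash of the file" point concrete: a .torrent's "pieces" field is the concatenation of the SHA-1 digest of each fixed-size piece of the payload, which is why you need the whole file before you can create the metadata. A rough C# sketch of that step (the 256 KiB piece length is an assumption; real clients vary it with file size):

```csharp
using System.IO;
using System.Security.Cryptography;

// Sketch of BitTorrent piece hashing: a .torrent's "pieces" field is the
// concatenation of the SHA-1 digest of each fixed-size piece of the file.
static class PieceHasher
{
    const int PieceLength = 256 * 1024;

    public static byte[] HashPieces(string path)
    {
        using (var sha1 = SHA1.Create())
        using (var input = File.OpenRead(path))
        using (var output = new MemoryStream())
        {
            var buffer = new byte[PieceLength];
            int filled;
            while ((filled = Fill(input, buffer)) > 0)
            {
                byte[] digest = sha1.ComputeHash(buffer, 0, filled);
                output.Write(digest, 0, digest.Length); // 20 bytes per piece
            }
            return output.ToArray();
        }
    }

    // Read until the buffer is full or the stream ends.
    static int Fill(Stream s, byte[] buffer)
    {
        int total = 0, read;
        while (total < buffer.Length &&
               (read = s.Read(buffer, total, buffer.Length - total)) > 0)
            total += read;
        return total;
    }
}
```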
It could all work, but without a fair bit of customisation work it's going to be a convoluted process at best for your users, and quite possibly beyond them. Obviously I don't know your specific requirements, but I'd be looking at more traditional file transfer protocols, like FTP.....

keep track of folders' activities

I want to monitor the activities in all the folders present at "C:\Inetpub\ftproot\san". Users can work on any type of file, not only text files. Since we have given 1GB of space (let's say) to each user, a user can do anything to utilize this space.
Now I want to monitor the activities the user performs in his folder, like creating a new file, deleting an existing file, or editing a file. I want to monitor the user's activities because I have to keep track of the space given to the user, so that I can restrict the user to 1GB of space and not more than that.
Is there any class I can use other than FileSystemWatcher, as it seems to work only in console applications and not in web applications?
Any help would be highly appreciated.
Many thanks
FileSystemWatcher should work just fine in a web application, but the problem is that the web application isn't always on. If nobody accesses it for a while, it can be shut down to free resources for other things and then started again when next accessed. It can also be restarted easily when things within its own structure change (such as its config file). It's very stateless and transient.
How do you plan to use this information within the web application? Does your web application really need to be constantly watching, or does it just need to generate a report of the current state of the filesystem when requested? If you really need the former, then the aforementioned nature of how web applications run on the server will make things a bit unreliable. Maybe a Windows service running on the web server would be more up to the task?
First, I would break this down into:
What are the possible ways users can modify the contents of the folder?
What are you trying to accomplish/present to the user via the Web interface?
One way to do this somewhat simply (in a sense) would be to maintain a service on the machine that periodically monitors the directory for the information you need (size, # of files, whatever), and connect this (via something like WCF) to the actual web application. In effect, you'd have a semi-soft limit, in that for a period users could operate on more than 1GB; there are obviously corrective measures you could take, but this way you don't actually have to monitor every action of every user in real time.
Off the cuff, I would think that you need some sort of service to use the FileSystemWatcher class. The only way to really watch over the directory using ASP.NET is if you are controlling how all files get added to and deleted from the directory. If that is the case, then you can add code to skim through the directory and add up the sizes of everything in there pretty easily.
If they can put files in these folders with other applications (such as an FTP client) then you are going to need a service to watch over the folders.
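If you do end up with a Windows service, a minimal sketch of what it could host, combining the watcher and the directory scan described above. The quota figure comes from the question; the corrective action and the folder path are placeholders:

```csharp
using System;
using System.IO;
using System.Linq;

// Sketch of the watcher a Windows service could host. Every change event
// triggers a full rescan of the user's folder - simple, not efficient.
class QuotaWatcher
{
    const long QuotaBytes = 1L * 1024 * 1024 * 1024; // 1GB, per the question
    readonly FileSystemWatcher watcher;
    readonly string root;

    public QuotaWatcher(string userFolder)
    {
        root = userFolder;
        watcher = new FileSystemWatcher(userFolder)
        {
            IncludeSubdirectories = true,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.Size
        };
        watcher.Created += (s, e) => CheckQuota();
        watcher.Changed += (s, e) => CheckQuota();
        watcher.Deleted += (s, e) => CheckQuota();
        watcher.EnableRaisingEvents = true;
    }

    void CheckQuota()
    {
        // "Skim through the directory and add up the sizes of everything."
        long used = new DirectoryInfo(root)
            .GetFiles("*", SearchOption.AllDirectories)
            .Sum(f => f.Length);

        if (used > QuotaBytes)
        {
            // Placeholder: flag the user, block further FTP writes,
            // notify the web app, etc.
            Console.WriteLine("{0} is over quota ({1} bytes used)", root, used);
        }
    }
}
```

You'd create one QuotaWatcher per user folder, e.g. new QuotaWatcher(@"C:\Inetpub\ftproot\san\jsmith") for a hypothetical user.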
A better way of doing this is to let your web app run as a portal to what's happening, but you will need a Windows service running to ensure that someone does not go over the space allotment.
The service would also be able to feed data to your portal.
Remember, a website only runs when someone calls it. So if you don't use your website for 5 days, nothing will monitor it.
Sure you could keep a web page open for X amount of days, but that's overkill.

Is it commonplace/appropriate for third party components to make undocumented use of the filesystem?

I have been utilizing two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e. putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to some OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without changing a line of our code. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path "configurable", which would at least make what is going on under the covers more explicit.
This isn't really a question that I need answered, but more of a kick off for conversation about whether what these component vendors are doing is appropriate, how this should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they make claims that this is something they support? If so, then they should provide documentation on how they use the file system and what permissions they need.
If not, then you're probably using an inappropriate tool set. I've been here and done that. I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests, many simultaneous, and it caused all sorts of hard-to-repro errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses IMHO, as long as they use the proper %TEMP% folder or, even better, the integrated Path.GetTempPath/Path.GetTempFileName functions.
In an ideal world, each third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most ignored features of .NET...
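To illustrate the temp-file pattern being endorsed here, a minimal C# sketch that resolves the location through the framework instead of hard-coding a path the way the PDF components apparently did:

```csharp
using System;
using System.IO;

// Minimal sketch of the "proper" temp-file pattern: ask the framework
// where temp files belong instead of hard-coding a location.
class TempFileDemo
{
    static void Main()
    {
        string tempDir  = Path.GetTempPath();      // honors the %TEMP% variable
        string tempFile = Path.GetTempFileName();  // creates a unique, empty file

        try
        {
            File.WriteAllText(tempFile, "scratch data");
            Console.WriteLine("Wrote {0} under {1}", tempFile, tempDir);
        }
        finally
        {
            File.Delete(tempFile); // GetTempFileName created it, so clean up
        }
    }
}
```

Note that under impersonation the impersonated user may still lack write access to whatever GetTempPath resolves to, which is essentially the problem the original poster ran into.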
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely something they never thought to document rather than went out of their way to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether it is an app designed to run under ASP.NET or a desktop app.

How do I cluster an upload folder with ASP.Net?

We have a situation where users are allowed to upload content, and then separately make some changes, then submit a form based on those changes.
This works fine in a single-server, non-failover environment; however, we would like some sort of solution for sharing the files between servers that supports failover.
Has anyone run into this in the past? And what kind of solutions were you able to develop? Obviously persisting to the database is one option, but we'd prefer to avoid that.
At a former job we had a cluster of web servers with an F5 load balancer in front of them. We had a very similar problem in that our applications allowed users to upload content, which might include photos and such. These were legacy applications and we did not want to edit them to use a database, and a SAN solution was too expensive for our situation.
We ended up using a file replication service on the two clustered servers. This ran as a service on both machines using an account that had network access to paths on the opposite server. When a file was uploaded, this backend service synced the data in the filesystem folders, making it available to be served from either web server.
Two of the products we reviewed were ViceVersa and PeerSync. I think we ended up using PeerSync.
In our scenario, we have a separate file server that both of our front-end app servers write to; that way either server has access to the same set of files.
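A minimal sketch of that arrangement in an ASP.NET handler, where \\fileserver\uploads is a hypothetical share that every web server's app-pool identity can write to:

```csharp
using System.IO;
using System.Web;

// Sketch of the shared-file-server approach: every front-end server writes
// uploads to the same UNC share, so any server can serve the file back.
public class UploadHandler : IHttpHandler
{
    const string SharedRoot = @"\\fileserver\uploads"; // hypothetical share

    public void ProcessRequest(HttpContext context)
    {
        HttpPostedFile file = context.Request.Files["upload"];
        if (file == null)
            return;

        // Never trust the client-supplied path; keep only the file name.
        string safeName = Path.GetFileName(file.FileName);
        file.SaveAs(Path.Combine(SharedRoot, safeName));
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

The catch, as noted below, is that the file server itself becomes the single point of failure.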
The best solution for this is usually to provide the shared area on some form of SAN, which is accessible from all servers and provides failover.
This also has the benefit that you don't have to provide sticky load balancing; the upload can be handled by one server and the edit by another.
A shared SAN with failover is a great solution with a great (high) cost. Are there any similar solutions with failover at a reasonable cost? Perhaps something like DRBD for windows?
The problem with a simple shared filesystem is the lack of redundancy (what if the fileserver goes down?).
