ASP.NET app - Handle file storage on another computer

All,
I have an ASP.NET (ASP.NET 2.0) application that allows users to attach (upload) files to records (not much different from a typical e-mail client) and then view those attached files later.
Typically, we store these files on the web server (i.e. on the same machine, in the app's directory or on another drive on the same computer). But some of our clients would like to store these files on another computer (their designated file server, which doesn't run our ASP.NET app), and here is where the problem lies: since our application typically runs under the Network Service account, it will not have permissions to a directory on another machine.
The million dollar question is: for this type of setup, what are the best options to make it work? I have listed some below that could be elaborated on:
1) Keep the files on the same machine as the app and tell the client to use backup software to copy them to the file server at the end of the day (not really real-time).
2) Set up a share on the file server, but ... how do I deal with security? In other words, the Network Service account is not privileged enough to copy files to the shared folder. Do I:
2a) Run my app pool under a more powerful account?
2b) Use .NET code to switch the current thread's identity to a more powerful account, copy the file, and then switch back to Network Service? Is this possible, and if so, how?
Another issue with #2 is domain networks, where policies are on the domain controller and require additional setup.
3) Other solutions?

I decided to use impersonation for directories that require user logon permissions. Here's some good reference material:
http://support2.microsoft.com/?id=306158
This solution works very well.
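For anyone looking for the shape of that solution, below is a minimal sketch (not the exact code from the KB article) of the LogonUser / WindowsIdentity.Impersonate pattern it describes, written for .NET Framework 2.0. The account name, domain, and share path are placeholders for an account that actually has rights on the file server:

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Security.Principal;

public static class FileServerCopy
{
    // P/Invoke declarations for the Win32 logon API used by the impersonation pattern.
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool LogonUser(string user, string domain, string password,
                                         int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);

    private const int LOGON32_LOGON_INTERACTIVE = 2;
    private const int LOGON32_PROVIDER_DEFAULT = 0;

    // Copies an uploaded file to a share that Network Service cannot reach by
    // temporarily impersonating an account that does have rights on that share.
    public static void CopyToFileServer(string localPath, string remoteUncPath,
                                        string user, string domain, string password)
    {
        IntPtr token;
        if (!LogonUser(user, domain, password,
                       LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out token))
        {
            throw new InvalidOperationException("LogonUser failed: " + Marshal.GetLastWin32Error());
        }

        WindowsImpersonationContext context = null;
        try
        {
            context = new WindowsIdentity(token).Impersonate();
            File.Copy(localPath, remoteUncPath, true);   // e.g. \\fileserver\uploads\doc.pdf (placeholder)
        }
        finally
        {
            if (context != null) context.Undo();         // always revert to the Network Service identity
            CloseHandle(token);
        }
    }
}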

Related

Sharing data files between users in a Universal Windows Platform application

I am about to embark on the development of a line of business application using the Universal Windows Platform (Windows 10). One of the requirements of the application is the synchronisation of data from a server to a local SQLite database; this is required because the application needs to be usable where there is no network connectivity.
It is likely that multiple (windows domain) users will be accessing the application on the same device, sometimes simply by "swapping users", other times by logging off the first user and logging on as a new user.
I realise that UWP applications are installed at a user level; however, I would like to be able to share the SQLite database between these users instead of forcing each user to download their own copy of the data.
Is this possible? I am struggling to find any reference to this kind of sharing within the Microsoft documentation - but of course that documentation is new and far from complete!
I guess, at the end of the day, I am looking for access to a folder that is accessible by any user running the application on the same device, such as the "x:\Users\Public" folders that are available from the desktop, but without having to ask the user to grant access to that folder via a picker control - instead simply being able to "open" it.
Thanks.
In case anyone runs across this, this functionality is now available as described in this blog post:
We introduced a new storage location in Windows 10, ApplicationData.SharedLocalFolder, that allows multiple users of one app to share local data. Obviously this feature is only interesting on devices that will be used by more than one person. For such scenarios, for example in educational uses, it may make sense to place any large downloads in Shared Local. The benefits are two-fold: any user can access these files without the need to re-download them, and there will be storage space savings.
Keep in mind that Shared Local is only available if the machine has the right group policy, otherwise when you call ApplicationData.Current.SharedLocalFolder you will get back a null result.
In order to enable Shared Local the machine administrator should enable the corresponding policy.
Alternatively, the administrator could create a REG_DWORD value called AllowSharedLocalAppData with a value of 1 under HKLM\SOFTWARE\Policies\Microsoft\Windows\CurrentVersion\AppModel\StateManager
Note that data stored in SharedLocal will only be persisted as long as the app is installed on the device and won't be backed up by the system.
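Since SharedLocalFolder comes back null when the policy is missing, a small guard like the following sketch can be useful (the file name is a placeholder and the code is assumed to run inside an async method):

using Windows.Storage;

StorageFolder shared = ApplicationData.Current.SharedLocalFolder;
if (shared == null)
{
    // The AllowSharedLocalAppData policy is not enabled on this machine,
    // so fall back to per-user local data.
    shared = ApplicationData.Current.LocalFolder;
}

// Create the database file once; other users of the device reuse the same file.
StorageFile dbFile = await shared.CreateFileAsync("app.db", CreationCollisionOption.OpenIfExists);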
In Solution Explorer, right-click Package.appxmanifest, then click View Code. At the end of this file, in both projects, add the code below:
<Extensions>
<Extension Category="windows.publisherCacheFolders">
<PublisherCacheFolders>
<Folder Name="FolderName" />
</PublisherCacheFolders>
</Extension>
</Extensions>
After that, you can access this folder in code with the line below:
StorageFolder sharedDownloadsFolder = ApplicationData.Current.GetPublisherCacheFolder("FolderName");
It's important to note that the folder you share between these two apps depends on the apps having the same publisher info in the certificate file [ProjectName]_TemporaryKey.pfx. If the certificate file and publisher info of the app are the same in both projects, then you can access the same shared folder in both applications and use it to create or use a database file (like a SQLite database file) or other files that need to be shared between both applications.
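As a rough usage sketch (the folder and file names are placeholders, and opening the file with a SQLite library such as sqlite-net is an assumption, not something prescribed by the API):

using Windows.Storage;

// "FolderName" must match the <Folder Name="..."/> entry in Package.appxmanifest.
StorageFolder shared = ApplicationData.Current.GetPublisherCacheFolder("FolderName");

// Create (or reuse) the database file; every app from the same publisher sees the same file.
StorageFile dbFile = await shared.CreateFileAsync("shared.db", CreationCollisionOption.OpenIfExists);

// A SQLite connection can then be opened against dbFile.Path, e.g. new SQLiteConnection(dbFile.Path).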

Deploying a Symfony 2 application in AWS Opsworks

I want to deploy a PHP application from a git repository to the AWS OpsWorks service.
I've set up an app and configured Chef cookbooks so it runs the database schema creation, dumping of assets, etc.
But my application has some user-generated files in a subfolder under the web root. The git repository has a .gitignore file in that folder, so an empty folder is there when I run the deploy command.
My problem is: after generating some files (by using the site) in that folder, if I run the 'deploy' command again, OpsWorks adds a new release under the 'site_name/releases/xxxx' folder and symlinks to it from the 'site_name/current' folder.
So it makes my previous 'user generated stuff' inaccessible. What is the best solution for this kind of situation?
Thanks in advance for your kind answers.
You have a few different options. Listed below in order of personal preference:
Use Simple Storage Service (S3) to store the files.
Add an Elastic Block Store (EBS) volume to your server and save files to the volume.
Save files to a database (This is something I would not do myself but the option is there.).
When using OpsWorks think of replicable/disposable servers.
What I mean by this is that if you can create one server (call it server A) and then switch to a different one in the same stack (call it server B), the result of using server A or server B should not impact how your application works.
While it may seem like a good idea to save your user-generated files in a directory that is common between different versions of your app (every time you deploy, a new release directory is generated), when you destroy your server you run the risk of destroying your files.
Benefits and downsides of using S3?
Benefits:
S3 will give you high redundancy and availability for your files.
S3 is external to your application server, so if your server dies or you decide to move it to a different region, you can continue using the same S3 bucket.
Easy to scale: you could add multiple application servers that read and write files to S3.
Downsides:
You need extra code in your application. You will have to use the AWS API in order to store and retrieve the files. Using the S3 API is not hard, but it may require an extra step to get where you need. Take a look at the "Using an Amazon S3 Bucket" walkthrough for reference; it shows the code used to upload the files to the S3 bucket in the example.
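The question is about a PHP/Symfony app, but just to illustrate how little extra code is involved, here is a rough sketch using the AWS SDK for .NET (the PHP SDK follows the same pattern); the bucket name, keys, and file paths are placeholders:

using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

// Credentials are resolved from the environment / instance profile as usual.
var s3 = new AmazonS3Client(RegionEndpoint.EUWest1);
var transfer = new TransferUtility(s3);

// Store a user-generated file in S3 instead of under the web root...
await transfer.UploadAsync("/var/app/uploads/avatar-42.png", "my-app-user-files", "uploads/avatar-42.png");

// ...and fetch it back (or stream it) when a user requests it later.
await transfer.DownloadAsync("/tmp/avatar-42.png", "my-app-user-files", "uploads/avatar-42.png");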
Benefits and downsides of using EBS?
Benefits:
EBS is an "external hard drive" that you can easily mount to your machine using the OpsWorks Resource Manager.
EBS volumes can be backed-up and restored.
It may be the fastest option to implement and integrate to your application.
Downsides:
You need to assign it to an instance before it is running.
It could be time consuming to move from server A to server B (downtime may be required).
You cannot scale your application horizontally. While you can create copies of the EBS volume and assign them to different instances, the volume itself will not be shared.
Downside of using a database?
Just do a google search on "storing files in database"
Take a look at Storing Images in DB - Yea or Nay?
My preferred choice would be to use S3, but ultimately this is your decision.
Good luck!
EDIT:
Take a look at this repository: opsworks-chef-cookbooks. It contains some recipes to deploy a Symfony2 application on OpsWorks. I have been using it for over a year and it works quite well.
Use Chef templates, and use them in a recipe in the opsworks deploy lifecycle event.

How to create a new directory on the application server

I want to make a new directory on the application server in an SAP system, and send my file to it.
For sending a file to an existing directory, I found and use this transaction:
CG3Z: /usr/sap/R3D/exe
But I cannot find a solution for creating a directory, neither with a transaction nor with ABAP code.
I know that we can see directories with AL11, but I want to make my own directory.
I searched in SAP SCN and Stackoverflow but have not been able to find any similar problem.
Usually this is NOT done by application code but by a system administrator - otherwise you would have to add provisions for all supported operating systems. Also, there are a lot of other issues to take care of, like setting the proper file system permissions or making sure that a DFS is available on all application servers (writing stuff to application servers randomly depending on which server the user was logged on to usually won't do you any good). Have your system administrator setup a logical file name for you and use that.

Remote File Read

How can I read a text file that resides on a remote machine? No share exists on that machine, and I am not allowed to create any share or file on the remote machine. Also, I am not allowed to run any client program on the remote machine. My program is ASP.NET in C#, residing on an IIS web server. For Linux machines we used SSH connections and file reads were easy. Is there something available by default in Windows that is similar?
Thanks,
Sreejith
The first question to ask is if there's a good business reason to read that file. If yes, the IT people will have to allow you a reasonable solution to the problem.
I have frequently used SFTP (secure FTP) for this kind of problem. Unfortunately SFTP is not part of Windows, but there are free and low-cost SFTP servers available. Here's a list from Wikipedia
Explain to IT why you need access to that file and discuss options including SFTP. If you have a valid business reason for this and they will "not let you because of policy", it's the job of your project manager or boss to clear out that roadblock. Ask them to help.
Finally, consider whether it's practical for the file on the remote machine to be pushed to you instead of you pulling it. If you can setup a file share on your PC, ask them to setup a job on the remote server that copies the file to your file share every time it is changed.
You could try accessing the admin share of the machine. Windows by default creates a share for each disk (named C$, D$, etc.). But in that case the application you write should run with the credentials of a user with rights to that share ((local) administrators have sufficient rights to do that).
If that doesn't work, you need to create a share or install software to get files from that machine (like FTP). This is all because of security; it's a good thing you are not able to just read a file from any machine...
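If the admin-share route is acceptable, the read itself is just an ordinary UNC path. A minimal sketch (the server name and path are placeholders; it assumes the worker process, or an impersonated account, is a local administrator on the remote machine):

using System.IO;

// C$ is the default administrative share for the C: drive.
string remotePath = @"\\REMOTE-SERVER\C$\Logs\output.txt";
string contents = File.ReadAllText(remotePath);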
I have done this many times with the Remote File port 34.
http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

How to deploy an ASP.NET Application with zero downtime

To deploy a new version of our website we do the following:
Zip up the new code, and upload it to the server.
On the live server, delete all the live code from the IIS website directory.
Extract the new code zipfile into the now empty IIS directory
This process is all scripted, and happens quite quickly, but there can still be a 10-20 second downtime when the old files are being deleted, and the new files being deployed.
Any suggestions on a 0 second downtime method?
You need 2 servers and a load balancer. Here it is in steps:
Direct all traffic to Server 2
Deploy on Server 1
Test Server 1
Direct all traffic to Server 1
Deploy on Server 2
Test Server 2
Direct traffic to both servers
Thing is, even in this case you will still have application restarts and loss of sessions if you are using "sticky sessions". If you have database sessions or a state server, then everything should be fine.
The Microsoft Web Deployment Tool supports this to some degree:
Enables Windows Transactional File System (TxF) support. When TxF support is enabled, file operations are atomic; that is, they either succeed or fail completely. This ensures data integrity and prevents data or files from existing in a "half-way" or corrupted state. In MS Deploy, TxF is disabled by default.
It seems the transaction is for the entire sync. Also, TxF is a feature of Windows Server 2008, so this transaction feature will not work with earlier versions.
I believe it's possible to modify your script for 0-downtime using folders as versions and the IIS metabase:
for an existing path/url:
path: \web\app\v2.0\
url: http://app
Copy new (or modified) website to server under
\web\app\v2.1\
Modify IIS metabase to change the website path
from \web\app\v2.0\
to \web\app\v2.1\
This method offers the following benefits:
In the event the new version has a problem, you can easily roll back to v2.0
To deploy to multiple physical or virtual servers, you could use your script for file deployment. Once all servers have the new version, you can simultaneously change all servers' metabases using the Microsoft Web Deployment Tool.
You can achieve zero downtime deployment on a single server by utilizing Application Request Routing in IIS as a software load balancer between two local IIS sites on different ports. This is known as a blue-green deployment strategy, where only one of the two sites is available in the load balancer at any given time. Deploy to the site that is "down", warm it up, and bring it into the load balancer (usually by passing an Application Request Routing health check), then take the original site that was up out of the "pool" (again by making its health check fail).
A full tutorial can be found here.
I went through this recently and the solution I came up with was to have two sites set up in IIS and to switch between them.
For my configuration, I had a web directory for each A and B site like this:
c:\Intranet\Live A\Interface
c:\Intranet\Live B\Interface
In IIS, I have two identical sites (same ports, authentication, etc.), each with its own application pool. One of the sites is running (A) and the other is stopped (B). The live one also has the live host header.
When it comes to deploying to live, I simply publish to the STOPPED site's location. Because I can access the B site using its port, I can pre-warm the site so the first user doesn't cause an application start. Then, using a batch file, I copy the live host header to B, stop A, and start B.
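If you would rather script the swap in C# than in a batch file, a rough sketch with Microsoft.Web.Administration (the site names and host header are assumptions) could look like this:

using Microsoft.Web.Administration;

using (var iis = new ServerManager())
{
    Site siteA = iis.Sites["Live A"];   // currently running with the live host header
    Site siteB = iis.Sites["Live B"];   // stopped, already holding the new, pre-warmed build

    // Give the stopped site the live host header...
    siteB.Bindings.Add("*:80:www.example.com", "http");
    iis.CommitChanges();

    // ...then swap which site is running.
    siteA.Stop();
    siteB.Start();
}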
OK so since everyone is downvoting the answer I wrote way back in 2008*...
I will tell you how we do it now in 2014. We no longer use Web Sites because we are using ASP.NET MVC now.
We certainly do not need a load balancer and two servers to do it; that's fine if you have three servers for every website you maintain, but it's total overkill for most websites.
Also, we don't rely on the latest wizard from Microsoft - too slow, and too much hidden magic, and too prone to changing its name.
Here's how we do it:
We have a post build step that copies generated DLLs into a 'bin-pub' folder.
We use Beyond Compare (which is excellent**) to verify and sync changed files (over FTP because that is widely supported) up to the production server
We have a secure URL on the website containing a button which copies everything in 'bin-pub' to 'bin' (taking a backup first to enable quick rollback). At this point the app restarts itself. Then our ORM checks if there are any tables or columns that need to be added and creates them.
That is only milliseconds downtime. The app restart can take a second or two but during the restart requests are buffered so there is effectively zero downtime.
The whole deployment process takes anywhere from 5 seconds to 30 minutes, depending how many files are changed and how many changes to review.
This way you do not have to copy an entire website to a different directory but just the bin folder. You also have complete control over the process and know exactly what is changing.
**We always do a quick eyeball of the changes we are deploying - as a last-minute double check, so we know what to test and if anything breaks we're ready. We use Beyond Compare because it lets you easily diff files over FTP. I would never do this without BC; you have no idea what you are overwriting.
*Scroll to the bottom to see it :( BTW I would no longer recommend Web Sites because they are slower to build and can crash badly with half compiled temp files. We used them in the past because they allowed more agile file-by-file deployment. Very quick to fix a minor issue and you can see exactly what you are deploying (if using Beyond Compare of course - otherwise forget it).
Using Microsoft.Web.Administration's ServerManager class you can develop your own deployment agent.
The trick is to change the PhysicalPath of the VirtualDirectory, which results in an online atomic switch between old and new web apps.
Be aware that this can result in old and new AppDomains executing in parallel!
The problem is how to synchronize changes to databases etc.
By polling for the existence of AppDomains with old or new PhysicalPaths it is possible to detect when the old AppDomain(s) have terminated, and if the new AppDomain(s) have started up.
To force an AppDomain to start you must make an HTTP request (IIS 7.5 supports Autostart feature)
Now you need a way to block requests for the new AppDomain.
I use a named mutex - which is created and owned by the deployment agent, waited on by the Application_Start of the new web app, and then released by the deployment agent once the database updates have been made.
(I use a marker file in the web app to enable the mutex wait behaviour)
Once the new web app is running I delete the marker file.
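The core of the switch, as a minimal sketch with Microsoft.Web.Administration (the site name and paths are assumptions; the marker file / mutex handshake described above is omitted):

using Microsoft.Web.Administration;

using (var iis = new ServerManager())
{
    Application app = iis.Sites["Default Web Site"].Applications["/"];
    VirtualDirectory root = app.VirtualDirectories["/"];

    // Point the application at the new version's folder. IIS spins up a new
    // AppDomain against the new path while the old one drains its requests.
    root.PhysicalPath = @"C:\web\app\v2.1";
    iis.CommitChanges();
}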
The only zero downtime methods I can think of involve hosting on at least 2 servers.
I would refine George's answer a bit, as follows, for a single server:
Use a Web Deployment Project to pre-compile the site into a single DLL
Zip up the new site, and upload it to the server
Unzip it to a new folder located in a folder with the right permissions for the site, so the unzipped files inherit the permissions correctly (perhaps e:\web, with subfolders v20090901, v20090916, etc)
Use IIS Manager to point the site at the new folder
Keep the old folder around for a while, so you can fallback to it in the event of problems
Step 4 will cause the IIS worker process to recycle.
This is only zero downtime if you're not using InProc sessions; use SQL mode instead if you can (even better, avoid session state entirely).
Of course, it's a little more involved when there are multiple servers and/or database changes....
To expand on sklivvz's answer, which relied on having some kind of load balancer (or just a standby copy on the same server)
Direct all traffic to Site/Server 2
Optionally wait a bit, to ensure that as few users as possible have pending workflows on the deployed version
Deploy to Site/Server 1 and warm it up as much as possible
Execute database migrations transactionally (strive to make this possible)
Immediately direct all traffic to Site/Server 1
Deploy to Site/Server 2
Direct traffic to both sites/servers
It is possible to introduce a bit of smoke testing, by creating a database snapshot/copy, but that's not always feasible.
If possible and needed, use "routing differences", such as different tenant URLs (customerX.myapp.net) or different users, to deploy to an unknowing group of guinea pigs first. If nothing fails, release to everyone.
Since database migrations are involved, rolling back to a previous version is often impossible.
There are ways to make applications play nicer in these scenarios, such as using event queues and playback mechanisms, but since we're talking about deploying changes to something that is in use, there's really no fool proof way.
This is how I do it:
Absolute minimum system requirements:
1 server with
1 load balancer/reverse proxy (e.g. nginx) running on port 80
2 ASP.NET-Core/mono reverse-proxy/fastcgi chroot-jails or docker-containers listening on 2 different TCP ports
(or even just two reverse-proxy applications on 2 different TCP ports without any sandbox)
Workflow:
start transaction myupdate
try
Web-Service: Tell all applications on all web-servers to go into primary read-only mode
Application switch to primary read-only mode, and responds
Web sockets begin notifying all clients
Wait for all applications to respond
wait (custom short interval)
Web-Service: Tell all applications on all web-servers to go into secondary read-only mode
Application switch to secondary read-only mode (data-entry fuse)
Updatedb - secondary read-only mode (switches database to read-only)
Web-Service: Create backup of database
Web-Service: Restore backup to new database
Web-Service: Update new database with new schema
Deploy new application to apt-repository
(for windows, you will have to write your own custom deployment web-service)
ssh into every machine in array_of_new_webapps
run apt-get update
then either
apt-get dist-upgrade
OR
apt-get install <packagename>
OR
apt-get install --only-upgrade <packagename>
depending on what you need
-- This deploys the new application to all new chroots (or servers/VMs)
Test: Test new application under test.domain.xxx
-- everything that fails should throw an exception here
commit myupdate;
Web-Service: Tell all applications to send web-socket request to reload the pages to all clients at time x (+/- random number)
#client: notify of reload and that this causes loss of unsaved data, with option to abort
# time x: Switch load balancer from array_of_old_webapps to array_of_new_webapps
Decommission/Recycle array_of_old_webapps, etc.
catch
rollback myupdate
switch to read-write mode
Web-Service: Tell all applications to send web-socket request to unblock read-only mode
end try
A workaround with no downtime that I am regularly using is:
Rename running .NET core application dll to filename.dll.backup
Upload the new .dll (web application is available and serving the requests while file is being uploaded)
Once the upload is complete, recycle the application pool. This requires either RDP access to the server or a function to recycle the application pool in your hosting control panel.
IIS overlaps the app pool when recycling, so there usually isn't any downtime during a recycle. Requests still come in without ever knowing the app pool has been recycled, and they are served seamlessly with no downtime.
I am still searching for a better method than this! :)
IIS/Windows
After trying every possible solution we use this very simple technique:
IIS application points to a folder /app that is a symlink (!) to /app_green
We deploy the app to /app_blue
We change the symlink to point to /app_blue (the app keeps working)
We recycle the application pool
Zero downtime, but the app does choke for 3-5 seconds (JIT compilation and other initialization tasks)
Someone called it a "poor man's blue-green deployment" without a load balancer.
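A rough sketch of that swap in C# (assuming .NET 6+ for Directory.CreateSymbolicLink, sufficient rights to manage symlinks, and hypothetical folder and app-pool names; the same thing can be done with mklink in a script):

using System.IO;
using Microsoft.Web.Administration;

const string link = @"C:\inetpub\app";            // the path the IIS site points to (a symlink)
const string newTarget = @"C:\inetpub\app_blue";  // the freshly deployed version

Directory.Delete(link);                           // removes only the link, not its target
Directory.CreateSymbolicLink(link, newTarget);

using (var iis = new ServerManager())
{
    iis.ApplicationPools["MyAppPool"].Recycle();  // overlapped recycle picks up the new target
}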
Nginx/linux
On nginx/linux we use "proper" blue-green deployment:
nginx reverse proxy points to localhost:3000
we deploy to localhost:3001
warmup the localhost:3001
switch the reverse proxy
shut down localhost:3000
(or use docker)
Both Windows and Linux solutions can be easily automated with PowerShell/bash scripts and invoked via GitHub Actions or a similar CI/CD engine.
I would suggest keeping the old files there and simply overwriting them. That way the downtime is limited to single-file overwrite times and there is only ever one file missing at a time.
Not sure this helps in a "web application" though (I think you are saying that's what you're using), which is why we always use "web sites". Also, with "web sites", deploying doesn't restart your site and drop all the user sessions.
