Can you write code to create multiple batch files in ASP.NET? Can you create them and also have the code write batch file commands into them? What I want to do is create .bat files in a certain location on the computer (or have it create a folder to put the .bat files into) on a button click, and have it write all the commands into the .bat file. For example, the user inputs data into a web form and pushes a button, and a new batch file is created every time the button is clicked. Is this possible?
Yes, it is, but it could have a lot of security consequences, so be careful.
You create a file using methods in the File class, like File.CreateText.
http://msdn.microsoft.com/en-us/library/system.io.file.createtext.aspx
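A minimal Web Forms sketch of that idea might look like this (the folder, the file naming, and the CommandDropDown control are placeholders, not anything from your project; the commands written are kept predefined on purpose, for the reasons given below):

    using System;
    using System.IO;

    public partial class BatchPage : System.Web.UI.Page
    {
        protected void CreateBatch_Click(object sender, EventArgs e)
        {
            // App_Data is writable by the app pool in many setups; adjust as needed.
            string folder = Server.MapPath("~/App_Data/Batches");
            Directory.CreateDirectory(folder);   // no-op if it already exists

            string fileName = Path.Combine(
                folder, "job_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".bat");

            using (StreamWriter writer = File.CreateText(fileName))
            {
                writer.WriteLine("@echo off");
                // Write only predefined commands, never raw user input.
                writer.WriteLine(CommandDropDown.SelectedValue);
                writer.WriteLine("pause");
            }
        }
    }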
Yes, this is possible, but in general you will only be able to create these files inside your application's folder.
If you need to create the files outside your application's folder, you need to make sure that the App Pool your app runs under has permissions to write to that folder.
Since batch files are basically executables, I'd be very careful about using them, and I would never, under any circumstances, take free user input to be placed into those files. You may give the user a set of predefined "commands" to choose from to mitigate your exposure, for example.
I am writing a PL/SQL procedure that takes an Excel file as input through the front end; using that input, the procedure inserts, updates, or deletes the records in an existing table. Can anyone show me an approach for this?
If that "Excel" file has to be really in native XLS(X) format, a simple option - if you want to stay within Oracle boundaries - is an Apex application which offers a data loading wizard. Takes 4 pages to create it (don't worry, Apex Wizard creates almost everything for you). Once the loading is over, a (stored) procedure can do the rest of processing (you'd call it by pushing a button).
Alternatively, if you save the contents of that file as a CSV file, you can load it with SQL*Loader, a utility run at the operating system command prompt. You'd have to create a control file (no wizard for that, I'm afraid). This approach probably isn't convenient for end users (who's going to type anything at the command prompt?), so you'd have to create some kind of application to do it.
Or, CSV again, but this time used as an external table. This approach requires the file to be located in a directory accessible to the database server (most frequently the directory is on the database server itself, and you usually don't want to let just anyone access it). Its advantage is that you can access the CSV file directly from (PL/)SQL, fetch data from it, perform various adjustments, etc.
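For the external table route, a rough sketch might look like this (the directory path, column list and target table are placeholders, not anything from your schema):

    -- Directory object pointing to where the CSV lives (needs the CREATE ANY DIRECTORY privilege)
    CREATE OR REPLACE DIRECTORY ext_dir AS '/u01/app/loader_files';

    -- External table that reads the CSV in place
    CREATE TABLE emp_ext (
      empno  NUMBER,
      ename  VARCHAR2(30),
      sal    NUMBER
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.csv')
    );

    -- The stored procedure can then MERGE from it into the existing table
    MERGE INTO emp t
    USING (SELECT empno, ename, sal FROM emp_ext) s
       ON (t.empno = s.empno)
     WHEN MATCHED     THEN UPDATE SET t.ename = s.ename, t.sal = s.sal
     WHEN NOT MATCHED THEN INSERT (empno, ename, sal) VALUES (s.empno, s.ename, s.sal);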
If you're capable of writing programs that aren't part of the Oracle niche (I'm not), go for it (but I can't suggest anything; someone else might).
I am using the R shiny package to build a web interface for my executable program. The web interface provides user input and shows output.
On the server side, the R script formats the user inputs and saves them to a local input file. Then R calls a system command to run the executable program.
My concern is that if multiple users run the web app at the same time, it is possible that the input file generated by the first user will be overwritten by the second user's input before it is read by the executable program.
One way to solve the conflict is to have R create a temporary folder for each user and generate/run the input file under that folder. But I'd like to know whether there is a better or automatic way to resolve this potential conflict with Shiny. For example, if I use Shiny's fileInput, the uploaded files are automatically stored in a temporary folder.
Update
Thanks for the advice, @Symbolix and @Mike Wise.
I had read the persistent data storage article before, but I don't think it is exactly what I wanted; maybe my understanding is not correct. I ended up creating a temporary folder and running my executable from there.
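For reference, a minimal sketch of that per-session temporary folder approach (the executable name "my_solver" and the input format are placeholders):

    library(shiny)

    ui <- fluidPage(
      numericInput("value", "Input value", 1),
      actionButton("run", "Run"),
      verbatimTextOutput("out")
    )

    server <- function(input, output, session) {
      # One scratch directory per session, so concurrent users never share files
      session_dir <- tempfile("job_")
      dir.create(session_dir)

      result <- eventReactive(input$run, {
        input_file <- file.path(session_dir, "input.txt")
        writeLines(as.character(input$value), input_file)
        system2("my_solver", args = input_file, stdout = TRUE)
      })

      output$out <- renderText(paste(result(), collapse = "\n"))

      # Remove the scratch directory when the session ends
      session$onSessionEnded(function() unlink(session_dir, recursive = TRUE))
    }

    shinyApp(ui, server)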
I need to iterate through infopath templates (xsn files) and change the URL of data connections, and then save the changes to the templates.
The data connections I want to change point to lists in a SharePoint environment.
So, how could I accomplish this task?
I was thinking doing this with a console application.
InfoPath definitely doesn't make it easy to deploy to different servers. I have used a PowerShell script, but you can use any console app or scripting language.
Steps to follow:
1. Extract the files from the XSN (either use extrac32 util from MS or rename to zip and use any zip library)
2. Change the data connection (string replace) in manifest.xsf, template.xml, and sampledata.xml
3. Repackage the files into the XSN (either use cabarc util from MS or zip and rename)
It is a pain to have to do all that, but the entire script is less than a page long and runs pretty fast. One caveat I ran into was that I needed a delay between steps 1 and 2 - the files weren't actually finished extracting and my script was already trying to change them.
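A rough PowerShell sketch of those steps (paths, URLs and file names are placeholders; extrac32 and cabarc must be available on the PATH):

    $xsn     = "C:\forms\MyForm.xsn"
    $workDir = "C:\forms\extracted"
    $oldUrl  = "http://oldserver/sites/teamsite"
    $newUrl  = "http://newserver/sites/teamsite"

    # 1. Extract the files from the XSN
    New-Item -ItemType Directory -Path $workDir -Force | Out-Null
    extrac32 /Y /E /L $workDir $xsn

    # Give the extraction a moment to finish (see the caveat above)
    Start-Sleep -Seconds 2

    # 2. Change the data connection URL in the files that reference it
    foreach ($name in "manifest.xsf", "template.xml", "sampledata.xml") {
        $file = Join-Path $workDir $name
        (Get-Content $file) -replace [regex]::Escape($oldUrl), $newUrl | Set-Content $file
    }

    # 3. Repackage the files into a new XSN
    Push-Location $workDir
    cabarc N "C:\forms\MyForm.updated.xsn" *.*
    Pop-Location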
I have tons of files dumped into a few different folders. I've tried organizing them several times; unfortunately, there is no organizational structure that consistently makes sense for all of them.
I finally decided to write myself an application that I can add tags to files with, so the organization can be tailored to the structure I actually need.
I want to prevent orphaned data. If I move or rename a file, my tag application should be told about it so it can update the name in the database. I don't want it tagging files that no longer exist, or to have to re-add tags for files that still exist under a new name.
Is there a way I can write a callback that will hook into the mv command so that if I rename or move my files, they will invoke the script, which will notify my app, which can update its database?
My app is written in Ruby, but I am willing to play with C if necessary.
If you use Linux you can use inotify (manpage) to monitor directories for file events. It seems there is a Ruby interface for inotify.
From Wikipedia:
Some of the events that can be monitored for are:
IN_ACCESS - read of the file
IN_MODIFY - last modification
IN_ATTRIB - attributes of file change
IN_OPEN and IN_CLOSE - open or close of file
IN_MOVED_FROM and IN_MOVED_TO - when the file is moved or renamed
IN_DELETE - a file/directory deleted
IN_CREATE - a file in a watched directory is created
IN_DELETE_SELF - file monitored is deleted
This does not work on Windows (and I think not on other Unices besides Linux either), as inotify does not exist there.
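A minimal sketch using the rb-inotify gem (the Ruby interface mentioned above); the watched path and the database-update call are placeholders:

    require "rb-inotify"   # gem install rb-inotify

    notifier = INotify::Notifier.new

    # Watch for moves, renames and deletions in the tagged folder
    notifier.watch("/path/to/tagged/files", :moved_from, :moved_to, :delete) do |event|
      # MOVED_FROM / MOVED_TO pairs share a cookie, so you can match the old
      # name to the new one and rename the entry instead of re-adding it.
      puts "#{event.flags.inspect} #{event.absolute_name}"
      # update_tag_database(event)   # placeholder for the call into your app
    end

    notifier.run   # blocks and dispatches events as they arrive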
Can you control your users' PATH? Place a script or exe in a directory that appears in their PATH ahead of the directory containing the standard mv command. Have this script do what you require and then call the standard mv to perform the move.
Alternatively, an alias in each user's profile. Have the alias call your replacement mv command.
Or rename the existing mv command and place a replacement in the same directory; call it mv and have it call your newly renamed mv command after doing what you want.
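A wrapper along those lines might look like this (notify_tag_app is a placeholder for whatever tells your tagging application about the move):

    #!/bin/sh
    # Installed earlier in PATH than /bin/mv, or called via an alias
    notify_tag_app "$@"    # e.g. a small Ruby client that updates the tag database
    exec /bin/mv "$@"      # then hand off to the real mv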
I'm creating a script to process files provided to us by our users. Everything happens within the same UNIX system (running Solaris 10).
Right now our design is this:
1. User places file into upload directory
2. Script placed on cron to run every 10 minutes
3. Script looks for files in upload directory, processes them, deletes them immediately afterward
For historical/legacy reasons, #1 can't change. Also, deleting the file after processing is a requirement.
My primary concern is concurrency. It is very likely that the analysis script will run while an input file is still being written to. In that case, data will be lost, and this is (obviously) unacceptable.
Since we have no control over the users' chosen means of placing the input file, we cannot require them to obtain a file lock. As I understand it, file locks are advisory only on UNIX, so a user must choose to adhere to them.
I am looking for advice on best practices for handling this problem. Thanks
Obviously all the best solutions involve the client providing some kind of trigger indicating that it has finished uploading. That could be a second file, an atomic move of the file to a processing directory after writing it to a stage directory, or a REST web service. I will assume you have no control over your clients and are unable or unwilling to change anything about them.
In that case, you still have a few options:
You can use a pretty simple heuristic: check the file size, wait 5 seconds, check the file size again. If it didn't change, it's probably good to go (a sketch is below this list).
If you have super-user privileges, you can use lsof to determine if anyone has this file open for writing.
If you have access to the thing that handles upload (HTTP, FTP, a setuid script that copies files?) you can put triggers in there of course.
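A sketch of that size-stability heuristic as a Bourne shell script (the upload directory, the 5-second wait and process_file are placeholders to adjust):

    #!/bin/sh
    UPLOAD_DIR=/path/to/upload

    for f in "$UPLOAD_DIR"/*; do
        [ -f "$f" ] || continue
        size1=`ls -l "$f" | awk '{print $5}'`
        sleep 5
        size2=`ls -l "$f" | awk '{print $5}'`
        if [ "$size1" = "$size2" ]; then
            process_file "$f" && rm "$f"   # process, then delete as required
        fi
    done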