So I have a third-party application that exposes a Python API, and I have a Python script that does what I need it to do with that application.
The Python script takes only command-line arguments, no interactive user input, and writes to stdout.
I'm wiring this up to be launched at runtime from an ASP.NET website. Since Python is so powerful, I'm wondering if it would make sense, from a security standpoint, to create a new executable that launches the Python process with a hard-coded script path (or one read locally from App.config) and copies its own command-line arguments through to the Python process.
This new executable would be launched from the .NET application as a way to prevent attackers from uploading and launching arbitrary Python code. Does this close any attack vectors, or is it just extra complexity?
Also, I can't use IronPython, as the third-party extensions are written for CPython only.
Idea of the code:
Process p = new Process();
p.StartInfo.FileName = "python.exe";
p.StartInfo.Arguments = "goodscript.py \"" + argument1.Replace("\"", "\\\"") + "\" \""
                      + argument2.Replace("\"", "\\\"") + "\"";
p.Start();
I'm worried that a command-line switch like -i could be injected, or that the file path to goodscript.py could somehow be changed to point to another script. I guess if the latter is possible, then the file path to python.exe could just as well be changed to another executable.
The sample code definitely has security concerns. I'm not sure how Process.StartInfo works there, whether it splits up the arguments itself before executing the subprocess or hands the string to a shell to parse, but either way there's at least room for an attacker to put backslashes in the argument strings and defeat your double-quote escaping.
If you have some other method of conveying the arguments individually, instead of packing them into a single string and letting something else parse them apart, that would be much preferable.
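For example, if you can target .NET Core 2.1 or later, ProcessStartInfo.ArgumentList lets the runtime quote each argument for you. A minimal sketch, reusing the names from the question:

// Sketch: pass each argument separately so the runtime handles quoting.
// ProcessStartInfo.ArgumentList requires .NET Core 2.1+ / .NET 5+.
var psi = new ProcessStartInfo { FileName = "python.exe", UseShellExecute = false };
psi.ArgumentList.Add("goodscript.py");
psi.ArgumentList.Add(argument1);   // no manual escaping needed
psi.ArgumentList.Add(argument2);
using (var p = Process.Start(psi))
{
    p.WaitForExit();
}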
Your Python script is going to be invoked by the website. If someone can use the website to cause the Python script to be invoked with different arguments, or to execute a different script, or whatever, then that's the security hole. It has nothing to do with Python "being so powerful"; this would be a security issue when calling out to any language.
How exactly do you think this would be affected by wrapping the Python script in something else and invoking that instead? Either someone can cause a different executable to be invoked, or cause it to be invoked with different arguments or on different data, in which case you're screwed no matter what language you're calling; or they can't, in which case you're fine no matter what language you're calling.
As it currently stands, your script is not just added complexity; it is actually opening up a security hole, namely that you're passing the arguments as a single string that has to be parsed, instead of as a pre-tokenized list of strings.
Related
I sometimes need to execute a built-in shell command like type, or to run an arbitrary shell command given as a single string (rather than as an array of command plus arguments). While I prefer the API of Deno.run for the vast majority of cases, it doesn't let me do either of these.
Is there any clean way to do these in Deno? It doesn't need to be the same solution for both, but it could be. I'm currently using a solution like what's seen here, but it doesn't feel very elegant, and I'd love to find a better way. Another alternative I considered was writing the command to a temp file with a .sh/.ps1 extension and then running that.
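For reference, the usual workaround is to hand the string to a shell explicitly. A sketch against the older Deno.run API (newer Deno versions provide Deno.Command instead; the command string here is just an example):

// Delegate an arbitrary command string to the platform shell.
const cmdString = "type README.md"; // example command string
const cmd = Deno.build.os === "windows"
  ? ["cmd", "/c", cmdString]  // cmd.exe resolves builtins like `type`
  : ["sh", "-c", cmdString];
const p = Deno.run({ cmd, stdout: "piped" });
const output = new TextDecoder().decode(await p.output());
p.close();
console.log(output);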
I want to make a protected Lua script [for a game] that can be run via an external program. This means I don't want anyone else to see the source code. The external program is a Lua wrapper:
Seraph is a ROBLOX Lua script execution exploit. It uses a wrapper to emulate a real ROBLOX scripting environment. It can run scripts in an elevated level 7 thread, allowing you to access API functions and change properties that are normally not accessible. It also contains re-implementations of missing functions such as loadstring and GetObjects, and bypasses various security checks, such as the URL trust check in the HttpGet/HttpPost functions.
They recently implemented LuaJIT, and I thought this might help. If it can only be run by LuaJIT wrappers, that would be awesome!
PS: I know basic Lua but can't write anything cool.
PPS: It needs to be able to pull a password from an online database.
Since I'm not familiar with ROBLOX, I'm just going to focus on this part of your question:
This means I don't want anyone else to see the source code.
For this, you will want to use Lua's bytecode dumping facilities. When executing a script, Lua first compiles it to bytecode, then executes said bytecode in the VM. LuaJIT does the same thing, except that it has a completely different VM and execution model. The important thing is that LuaJIT offers the same bytecode dumping facilities as standard Lua (string.dump and friends), though the two bytecode formats are not interchangeable: bytecode dumped by LuaJIT can only be loaded by LuaJIT, and standard Lua bytecode only by standard Lua.
So, the best 'protection' you can have for your code is to compile it on your end, then send and execute only the compiled, binary version of it on the external machine.
Here's how you can do it. First, you use this code on your machine to produce a compiled binary that contains your game's bytecode:
-- Compile myGame.lua and write the resulting bytecode to myGame.bin
local file = io.open('myGame.bin', 'wb')
file:write(string.dump(loadfile('myGame.lua')))
file:close()
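If you are compiling with LuaJIT specifically, the same dump can also be produced from the command line, assuming the luajit binary is on your PATH:

luajit -b myGame.lua myGame.bin

Given the bytecode incompatibility noted above, a file produced this way can only be run by LuaJIT-based wrappers, which matches your requirement.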
You now have a compiled version of your code in 'myGame.bin'. This is essentially as 'safe' as you're going to get.
Now, on your remote environment where you want to run the game, you transfer 'myGame.bin' to it, and run the compiled binary like so:
-- Read the bytecode back and execute it; loadstring accepts
-- binary chunks produced by string.dump
local file = io.open('myGame.bin', 'rb')
local bytecode = file:read('*all')
file:close()
loadstring(bytecode)()
That will effectively run whatever was in 'myGame.lua' to begin with.
Forget about passwords / encryption. Luke Park's comment was on point: when you don't want someone to have your source, you give them compiled code :)
I know barely more than zero about R: until yesterday I didn't know how to spell it. But I like to live dangerously: for my web site, I'm thinking about letting a visitor type in an R "program" (is it even called a "program"?) and then, at submit time, blindly calling the R interpreter from my CGI. I'd then return the interpreter's output to the visitor.
Does this make sense? Or does it amount to useless noise?
If it's workable, what are the pitfalls of this approach? For example, what are the security issues, if any? Is it possible to make R crash, killing my CGI program? Do I have to sanitize the R code before calling the interpreter? And the like.
You could take a look at Rserve, which allows you to execute R scripts over a TCP/IP interface; if I'm not mistaken, client bindings are available for PHP, for example.
It's just asking for trouble to let people run arbitrary R code on your server. You could try running it in a chroot jail, but these things can be broken out of. Even inside a chroot, the R process could delete or alter files, spawn a long-running process, download a file to your server, and all manner of nastiness.
You might look at Rweb, which has exactly this behavior: http://www.math.montana.edu/Rweb/
Since you can read and write files from R, it would not be safe to let people run arbitrary R code on your server. I would check whether R has something like PHP's safe mode. If it doesn't, and if you are root, you can try to run R as the user nobody in a chroot (you must also place the packages and libraries there, with read-only access, plus a temporary directory with read-write access).
I have a program that monitors certain files for changes. As soon as a file gets updated, it is processed. So far I've come up with the following general approach for handling "real-time analysis" in R. I was hoping you have other approaches; maybe we can discuss their advantages and disadvantages.
monitor <- TRUE
sleep.time <- 5                                # seconds between checks
watched.file <- "data.csv"                     # placeholder: file to monitor
start.state <- file.info(watched.file)$mtime   # modification time when initiating

while (monitor) {
  change.state <- file.info(watched.file)$mtime
  if (start.state < change.state) {
    # process the file, then use the new state as the baseline
    start.state <- change.state
  } else {
    print("Nothing new.")
  }
  Sys.sleep(sleep.time)
}
Similar to the suggestion to use a system API, this can also be done using qtbase, which provides a cross-platform way to do it from within R:
library(qtbase)

dir_to_watch <- "/tmp"

# Create a filesystem watcher and register the directory with it
fsw <- Qt$QFileSystemWatcher()
fsw$addPath(dir_to_watch)

# Attach a handler to the directoryChanged signal
id <- qconnect(fsw, "directoryChanged", function(path) {
  message(sprintf("directory %s has changed", path))
})

# Touch a file in the watched directory to trigger the handler
cat("abc", file = "/tmp/deleteme.txt")
If your system provides an API for monitoring filesystem changes, then you should use that. I believe Macs come with this; I'm not sure about other platforms, though.
Edit:
A quick Google search gave me:
Linux - http://wiki.linuxquestions.org/wiki/FAM
Win32 - http://msdn.microsoft.com/en-us/library/aa364417(VS.85).aspx
Obviously, these APIs will eliminate any polling that you require. On the other hand, they may not always be available.
Java has this: http://jnotify.sourceforge.net/ and http://java.sun.com/developer/technicalArticles/javase/nio/#6
I have a hack in mind: you can set up a cron job / scheduled task to run an R script every n seconds (or whatever). The R script checks the file hash, and if the hashes don't match, it runs the analysis. You can use the digest::digest function; just check out the manual.
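A minimal sketch of that hash check, assuming a placeholder file name and a small state file that stores the previous hash:

library(digest)

watched.file <- "data.csv"      # placeholder: the file to monitor
hash.file    <- "data.csv.md5"  # placeholder: stores the previous hash

new.hash <- digest(watched.file, algo = "md5", file = TRUE)
old.hash <- if (file.exists(hash.file)) readLines(hash.file) else ""

if (!identical(new.hash, old.hash)) {
  writeLines(new.hash, hash.file)
  # hashes differ: run the analysis here
}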
If you have lots of files that you want to monitor, R may be too slow for this purpose. Go to your c:\ or / directory and see how long it takes to run file.info(dir(recursive = TRUE)). A DOS or bash script may be quicker.
Otherwise, the code looks fine.
You could use the tclTaskSchedule function in the tcltk2 package to set up a function that checks for updates and runs your code. It would then run on a regular schedule (you set the timing) while still leaving your R session usable.
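A sketch of what that could look like; the check function body and the 5-second interval are placeholders:

library(tcltk2)

check_for_update <- function() {
  # placeholder: reuse the mtime or hash check from above
  message("checking for updates at ", Sys.time())
}

# run check_for_update() every 5000 ms, rescheduling indefinitely
tclTaskSchedule(5000, check_for_update(), id = "fileCheck", redo = TRUE)

# stop it later with:
# tclTaskDelete("fileCheck")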
I'll offer another solution for Windows that I have been using in a production environment, and that works perfectly. I find it very easy to set up; under the hood it accesses the system API for monitoring folder changes, as others have mentioned, but all the "hard work" is taken care of for you. I use a freely available piece of software called Folder Monitor by Nodesoft, well described here. Once you run the program, it appears in your system tray, and from there you can specify a directory to monitor. When files are written to the directory (or changed or modified; there are a few options from which you can choose), the program executes any program you like. I simply point it at a Windows batch file that calls my R script. For example, I have Folder Monitor set up to watch a \\myservername\DropOff UNC path for any new data files written to it. When Folder Monitor detects new files, it executes a RunBatch.bat file that simply runs an R script (see here for information on setting that up), which validates the format of the expected file based on a naming convention, then unzips and processes the data, creates a data frame, and ultimately loads it into a SQL Server database. It just doesn't get any easier.
One note if you decide to use this solution: take a look at the optional execution-delay parameter, which can matter if files take a while to copy into the target directory from the source location.
Is there a way to pass commands (from a shell) to an already running R runtime / R GUI, without copy and paste?
So far I only know how to call R via the shell with the -f or -e options, but in both cases a new R runtime processes the R script or R command I pass to it.
I would rather have an open R runtime waiting for commands passed to it via whatever connection is possible.
What you ask for cannot be done. R is single-threaded and has a single REPL, aka read-eval-print loop, which is attached to a single input, e.g. the console in the GUI, or stdin if you pipe into R. But never two.
Unless you use something else, e.g. the most excellent Rserve, which (when hosted on an OS other than Windoze) can handle multiple concurrent requests over TCP/IP. You may, however, have to write your own client connection. Examples for Java, C++ and R exist in the Rserve documentation.
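A sketch of driving a running Rserve from a second R session via the RSclient package; this assumes Rserve has already been started on its default port 6311:

# in the long-running session (or from a shell): library(Rserve); Rserve()

library(RSclient)

con <- RS.connect(host = "localhost", port = 6311)
RS.eval(con, x <- rnorm(10))   # evaluated in the Rserve process
RS.eval(con, mean(x))
RS.close(con)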
You can use Rterm (under C:\Program Files\R\R-2.10.1\bin on Windows, for R version 2.10.1). Or you can start R from the shell by typing "R" (if the shell does not recognize the command, you need to add it to your path).
You could try simply saving the workspace from one session and manually loading it into the other (or any kind of variation on this theme, like saving only the objects the two sessions share, with saveRDS or similar). That requires some extra load and save commands, but you could automate it further by adding some lines to your .Rprofile file, which is executed at the beginning of every R session. Here is some more detailed information about R on startup. But I guess it all depends heavily on what you are doing inside the R sessions. HTH
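A minimal sketch of handing one object between two sessions this way (the file path is a placeholder):

# session A: serialize an object to disk
results <- data.frame(x = 1:3, y = rnorm(3))
saveRDS(results, "/tmp/shared_results.rds")   # placeholder path

# session B: read it back under any name
results <- readRDS("/tmp/shared_results.rds")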