I am using the following in OpenSSH/telnet code to set the user's environment:
setenv("TEST_ENV", "testing", 1);
But the user can modify this after login. Is there any way to make it a read-only environment variable?
No, I'm not aware of any way of making a process's environment read-only.
You are aware, I trust, that a process can't change its parent's environment, and that a process has complete freedom to set the initial environment of any processes it in turn creates. It might be worth being a little more detailed about what you want to do, or what you want to stop a program being able to do.
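To make the parent/child relationship concrete, here's a minimal sketch (Python and a POSIX shell child, purely for illustration):

import os
import subprocess

# The parent decides the child's *initial* environment; the child can
# overwrite its own copy later, but never the parent's.
child_env = dict(os.environ)
child_env["TEST_ENV"] = "testing"

# The child sees TEST_ENV=testing at startup...
subprocess.run(["sh", "-c", "echo $TEST_ENV"], env=child_env)

# ...but nothing the child sets ever propagates back up here.
print(os.environ.get("TEST_ENV"))  # still unset in the parent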
Some OSs have fairly elaborate sandboxing support in the kernel (I know OS X has, for example, but it won't be the only one), and these might be able to control access to getenv. But that's obviously platform-specific.
I'm developing a program that uses an SQLite database, which I access via QSqlDatabase. I'd like to handle the (hopefully rare) case where the database is changed by something other than my program while it's running (e.g. the user could remove write access, move or delete the file, or modify it manually).
I tried using a QFileSystemWatcher. I let it watch the database file, and in every function that writes to it I blocked the watcher's signals, so that only "external" changes would trigger the changed signal.
The problem is that the QFileSystemWatcher's check and/or the actual write to disk by QSqlDatabase::commit() don't seem to happen at the exact moment I call commit(). In practice, the watcher's signals are blocked first, then I change some data, then I unblock the signals, and only then does it report the file as changed.
I then tried setting a bool member (m_writeInProgress) to true each time a function requests a change. The "changed" slot then checks whether a write has been requested; if so, it resets m_writeInProgress to false and returns. This way, only "external" changes would be handled.
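In sketch form (PyQt5 syntax for brevity; class and member names are just illustrative), the idea is:

from PyQt5.QtCore import QFileSystemWatcher, QObject

class DbWatcher(QObject):
    def __init__(self, db_path):
        super().__init__()
        self.m_writeInProgress = False
        self._watcher = QFileSystemWatcher([db_path], self)
        self._watcher.fileChanged.connect(self.on_file_changed)

    def begin_own_write(self):
        # Every function that is about to commit a change sets the flag first.
        self.m_writeInProgress = True

    def on_file_changed(self, path):
        if self.m_writeInProgress:
            # Assume this notification was caused by our own commit.
            self.m_writeInProgress = False
            return
        # Anything else must have come from outside the program.
        print("external change detected:", path)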
The problem remains that if an external change happens at the exact moment my own write is in progress, it isn't caught.
So possibly, using a QFileSystemWatcher is the wrong way to implement this.
How could this be done in a safe way?
Thanks for any help!
Edit:
I found a way to solve part of the problem. Taking an exclusive lock on the database file prevents other connections from changing it. It's quite simple; I just have to execute
PRAGMA locking_mode = EXCLUSIVE
BEGIN EXCLUSIVE
COMMIT
and handle the error that emerges if another instance of my program tries to access the database.
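For reference, here's the same recipe using Python's stdlib sqlite3 (with QSqlDatabase you'd run the identical statements through QSqlQuery):

import sqlite3

# isolation_level=None lets us issue BEGIN/COMMIT ourselves.
conn = sqlite3.connect("app.db", isolation_level=None)
try:
    conn.execute("PRAGMA locking_mode = EXCLUSIVE")
    conn.execute("BEGIN EXCLUSIVE")  # takes the write lock...
    conn.execute("COMMIT")           # ...which is now held until the connection closes
except sqlite3.OperationalError as e:
    # Another instance already holds the lock ("database is locked").
    print("database is in use by another instance:", e)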
What's left is to know if the user (accidentally) deleted the file during runtime ...
First of all, there's no SQLite support for this: SQLite can only report changes made over a database connection under your direct control. Whatever happens in a separate process concurrently with yours, or while your process isn't running, is by design completely out of your control.
The canonical solution to this problem is to encrypt the database with a key specific to your application (and perhaps the user, etc.). Then no third-party process can modify the database through SQLite. Of course any process can still corrupt your database or get rid of it; that's too bad. You can detect corruption trivially by using cryptographic signatures, perhaps even error-correcting codes so as to be able to restore the data should a certain amount of corruption happen. You don't need notifications of someone moving or deleting the database file: you will know when you attempt to open the database and get a "file not found" error back.
Of course all of the above requires a custom VFS implementation. That's very much par for the course.
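As a file-level sketch of the signature idea (a custom VFS would do this per database page; the key handling below is deliberately hand-waved):

import hashlib
import hmac

APP_KEY = b"application-specific secret"  # illustrative; derive and store this properly

def db_signature(path):
    # HMAC over the raw file: any outside modification changes the tag.
    with open(path, "rb") as f:
        return hmac.new(APP_KEY, f.read(), hashlib.sha256).hexdigest()

# Store db_signature("app.db") whenever you close the database cleanly; on
# startup, recompute it and compare with hmac.compare_digest(). A mismatch
# means the file changed behind your back, and a vanished file simply shows
# up as the usual "file not found" error.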
I'm running a script by pasting it in a console like this:
bin/client2 debug
... my script ...
The script normalizes the titles of the files. Since there are more than 20k files, it really takes too much time. So I need users to still be able to use the site, but in a read-only fashion.
But I assume that setting read-only to true in zeo.conf would also prevent my normalization script from writing, wouldn't it?
How can I solve this?
Best regards,
Manuel.
There isn't, I'm afraid.
If your users alter the site when logged in, disable logging in for them until you are done.
Generally, for tasks like these, I run the changes in batches to minimize conflicts and let end-users continue to use the site as normal. Break your work up into chunks, and commit after every n items processed.
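For instance, as a sketch (documents and normalize_title stand in for your own iteration and fix-up code):

import transaction

BATCH_SIZE = 100  # tune to taste

for i, doc in enumerate(documents, start=1):
    normalize_title(doc)
    if i % BATCH_SIZE == 0:
        transaction.commit()  # publish this batch; keeps conflicts small

transaction.commit()  # commit the final partial batch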
You can add another ZEO client that is not read-only; the ZEO server doesn't have to be read-only for its clients to be.
So make all the clients your users hit read-only, then add one extra read/write client that is used by nothing but your script, and leave the ZEO server itself read/write.
I need to store settings that update from time to time about my website and was wondering what is the most efficient way to do this. Note that once these settings are changed, I need them to be accessible even after an IIS reboot, so a simple application variable is not the answer. I also know I can store application settings in my config file, but as you change them, the actual XML doesn't seem to update, so I would have to re-write the XML behind the scenes each time it changes.
As for writing to a file such as an INI, my concern is that while I'm writing, something else may be trying to read; I've had I/O locking errors doing that before. I could also use database storage, but I'm trying to keep database calls low.
Right now I'm leaning towards an INI that I write to on change and read once at application load, so that afterwards I can always access a local variable. That would make I/O issues pretty unlikely, since I'm only reading once. I'm basically just looking for input on the most efficient way of doing this.
Thanks in advance,
Anthony F Greco
If you want to minimize database calls, I'd say put the values in the database, then cache them in your application. Depending on your needs, maybe expire the cache every 30 minutes, so when you change the DB value, it will be applied in no more than 30 minutes, and you will only make a DB call every 30 minutes (or when needed, like when your app restarts). Or you can use a SQL Cache Dependency, but I've never done that, so I don't know the pitfalls of that technique.
We moved all of our application settings from the web.config to the database as can be seen HERE.
Ours was done to get around the problems with promoting code from Dev to Test to QA to Prod, but it can be used for other reasons.
Ours is cached on application startup and then every 5 minutes it checks to see if a single counter in a table was updated. If so, it refreshes all of the settings.
I'd go with @Joe Enos, except that I'd use an XML file or an INI, as you'd like to limit your DB calls.
Let's presuppose that you have R running with root/admin privileges. What R calls do you consider harmful, apart from system() and file.*()?
This is a platform-specific question: I'm running Linux, so I'm interested in Linux-specific security holes. I'll understand if discussion gets shut down, since this post could easily turn into "How do I mess up a system with R?"
Do not run R with root privs. There is no effective way to secure R in this way, since the language includes eval and reflection, which means I can construct invocations to system even if you don't want me to.
Far better is to run R in a way that cannot affect the system or user data, no matter what it tries to do.
Anything that calls external code could also be making system changes, so you would need to block certain packages and things like .Call(), .C(), .jcall(), etc.
Suffice it to say that it will end up being a virtually impossible task, and you are better off running it in a virtualized environment, etc. if you need root access.
You can't. You should just change the question: "How do I run user-supplied R code so as not to harm the user or other users of the system?" That's actually a very interesting question and one that can be solved with a little bit of cloud computing, apparmor, chroot magic, etc.
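To give a flavor of that direction, here's a minimal launcher that applies POSIX resource limits before exec'ing R (Python for illustration; a real deployment would layer chroot/AppArmor/containers on top, and untrusted.R is a placeholder):

import resource
import subprocess

def _limits():
    # Runs in the child just before exec. Blunt instruments, but they stop
    # fork bombs and runaway CPU/memory use.
    resource.setrlimit(resource.RLIMIT_NPROC, (50, 50))
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))
    resource.setrlimit(resource.RLIMIT_AS, (512 * 2**20, 512 * 2**20))

subprocess.run(["Rscript", "untrusted.R"], preexec_fn=_limits, timeout=30)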
There are tons of commands you could use to harm the system. A handful of examples: Sys.chmod, Sys.umask, unlink, any command that allows you to read/write to a connection (there are many), .Internal, .External, etc.
And if you blocked users from those commands, there's nothing stopping them from implementing something in a package that you wouldn't know to block.
As noted by just about every response to this thread, removing the "potentially harmful" calls in the R language would:
Be potentially impossible to do completely.
Be difficult to do without spending significant time writing complicated (i.e. ugly) hacks.
Kneecap the language by removing a ton of functionality that makes R so flexible.
A safer solution that doesn't require modifying/rewriting large parts of the R language would be to run R inside a jail using something like BSD Jails, Jailkit or Solaris Zones.
Many of these solutions allow the jailed process to exercise root-like privileges but restrict the areas of the computer that the process can operate on.
A disposable virtual machine is another option. If a privileged user thrashes the virtual environment, just delete it and boot another copy.
One of my all time favorites. You don't even have to be r00t.
# Each call forks a child that recurses, exhausting the process table.
# multicore::parallel() forks the running R process; the multicore
# package has since been superseded by the base 'parallel' package.
library(multicore)

forkbomb <- function() {
  repeat {
    parallel(forkbomb())
  }
}

forkbomb()
To adapt a cliche from gun rights people, "system() isn't harmful - people who call system() are harmful".
No function calls are intrinsically harmful, but if you allow people to use them freely then those people may cause harm.
Also, the definition of harm will depend on what you consider harmful.
In general, R is so complex that you can assume there is a way to trick it into executing data via seemingly harmless functions, for instance through a buffer overflow.
I'm planning to use some session variables on an intranet ASP/VB.NET page, and want to make sure that I'm not missing out on anything important I should know, or that I have my information mixed up. So here's what I (think) I know about session variables.
They:
are stored on the server, so if I have a lot of users then they'll each be using some more memory which could lead to a slowdown.
are inaccessible by the user unless I expose access.
hang around/persist across user requests (i.e. each time the user makes a request to the page, the data will still be there, until the session times out). This also means I need to make sure a variable doesn't contain "left over" data.
Is there anything I've got totally wrong, or anything I'm missing? I'd prefer not to get bitten by a bug down the road because I only thought I understood what was going on.
Thanks!
Unless it's a trivial app, I'd advise using out-of-proc session state, either StateServer or SQL Server, with a preference for SQL Server. To configure it, you just make a small change to web.config and run a SQL script (see http://msdn.microsoft.com/en-us/library/ms972429.aspx).
It will save you a lot of headaches with IIS recycles and allow you to scale your app to multiple load balanced servers if the need should ever arise.