Several applications need to be run as the superuser (su/root). What are the Unix rules about when a user needs this privilege? Is it whenever we need to modify something outside our home directory?
This question arose from a more specific one: why do we need to be root to insert a kernel module with insmod?
insmod requires superuser privileges because inserting a kernel module modifies the running operating system kernel. Once loaded, the module can read and write any memory in the system, read or modify any file on disk, change the permissions or ownership of any resource, and so on.
Generally, these are the sorts of things that the superuser privilege protects: the ability to bypass access controls on memory, files, and other resources, and to perform the various administrative tasks needed to operate the system (tasks that might render the system insecure or unusable if a malicious user were allowed to perform them).
In Linux, it is possible to configure the system so that there is not, in fact, a single superuser, but rather a set of granular capabilities that can each be granted individually. Indeed, this is how it is modeled in the kernel source code: one does not check whether the current UID is zero, one checks whether the current process has a specific capability, such as the "change-ownership" capability (CAP_CHOWN). In the vast majority of deployed Linux systems, however, the system is configured with a single all-or-nothing superuser privilege (i.e. whether the calling user has an effective user ID of 0).
The single superuser privilege (EUID == 0) was the traditional model used in Unix from the earliest days, though there have been a number of implementations that provided more granular privileges.
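As a concrete illustration (a minimal sketch using the libcap userspace API, not something from the original answer), a process can check whether it holds a particular capability, such as CAP_SYS_MODULE, the capability that loading a kernel module actually requires, independently of whether its effective UID is 0:

    // Sketch: compare the traditional EUID check with a granular capability check.
    // Requires libcap (compile with -lcap); assumes <sys/capability.h> is available.
    #include <sys/capability.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        // Traditional all-or-nothing check: are we effectively root?
        std::printf("effective UID is %d\n", static_cast<int>(geteuid()));

        // Granular check: do we hold CAP_SYS_MODULE (needed to load kernel modules)?
        cap_t caps = cap_get_proc();
        if (caps == nullptr) {
            std::perror("cap_get_proc");
            return 1;
        }
        cap_flag_value_t has_sys_module = CAP_CLEAR;
        cap_get_flag(caps, CAP_SYS_MODULE, CAP_EFFECTIVE, &has_sys_module);
        std::printf("CAP_SYS_MODULE: %s\n",
                    has_sys_module == CAP_SET ? "present" : "absent");
        cap_free(caps);
        return 0;
    }

Inside the kernel, the check guarding module loading is the analogous capable(CAP_SYS_MODULE), not a comparison of the UID against zero.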
Modifying files outside your home directory is one common case, but location is not really the rule: it is possible to own files in places other than your home directory, and it is possible for other users to own files within your home directory. What matters is ownership and permissions, not where the file lives.
This may be a super simple question, but for some reason I'm hesitating on the obvious solution. Should I be storing a user's local device filesystem path (for an image or video) in my remote database?
In my mobile application, users can take photos and videos, and those are stored locally on their devices until they decide to upload them at a later time. I am currently just tracking the local filesystem locations on their device in the local app state and not saving that information in my database; I only update my database with an image URL after it is uploaded. This works well as long as it works! In the event of a crash, some users lose their images because the local app state gets lost and there is no remote state to recover from.
The easy, obvious solution is to just save the user's local file path to the images in my database - then even if the local state is lost they can still recover the path to those images.
Example:
{ "file_path": "/users/user1/.my_app_data/media/img.jpg" }
However, it just feels improper to store internal, device-specific information like that in my remote database, since it's meaningless without that specific device (even for the same user). In my mind, data in the server should be as device-agnostic as possible.
What is considered best practice, if there is such a thing, for this situation?
There is no singular best practice here. It all depends on your exact use-case, your tolerance for problems, and the amount of effort you're willing to spend on things.
There is no harm in storing these local paths, as long as you recognize them for what they are and do not try to use them on another user's device.
But if the local state of the device is lost, aren't the local files themselves likely to be affected too? If not, the cloud backup of the paths may be a good aid in restoring the state. But if the files are likely to also be affected, storing the paths in the cloud probably isn't very helpful and I'd skip the effort.
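If you do decide to store the path, one low-effort refinement (a sketch only; the type and field names here are made up for illustration) is to record which device the path belongs to, so a client can tell whether a stored path is usable on the device it is currently running on:

    #include <optional>
    #include <string>

    // Illustrative shape of a media record kept in the remote database.
    struct MediaRecord {
        std::string device_id;                  // stable identifier of the device holding the file
        std::string local_path;                 // e.g. "/users/user1/.my_app_data/media/img.jpg"
        std::optional<std::string> remote_url;  // set only once the upload has completed
    };

    // A stored local path is only meaningful on the device that recorded it.
    bool canRecoverLocally(const MediaRecord &record, const std::string &current_device_id) {
        return record.device_id == current_device_id;
    }

That way the server is explicit about the path being device-specific, rather than pretending it is globally meaningful.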
To be more specific: I am aware that an admin can see your browser history and the like, but can they see what you do in cmd, and whether you run cmd at all?
This question is rather vague. Do you have a specific question here? As a general rule, an administrator account exists to keep tabs on all actions performed on the host in question. The administrator would have access to whatever histories, file systems, and commands you may have executed, added, deleted, etc. In some cases the logging level may be turned down, but I would never assume that your actions are invisible to an administrator account.
Is there a way to protect the database from deletion? I mean, it's very easy to click on the "x" next to the root node. This would destroy the whole app and cause an enormous mess to deal with.
How to deal with this fragility?
EDIT:
Let's assume I have two Firebase accounts: one for testing and one for the launched app. I regularly log in and out to switch between them. On the test account I delete whole nodes on a regular basis. Password protection on destructive actions would prevent a very expensive mix-up between the two accounts.
If you give a user edit access to the Firebase Console of your project, the user is assumed to be an administrator of the database. This means they can perform any write operation to the database they want and are not tied to your security rules.
As a developer you probably often use this fact to make changes to your data structure while developing the app. For application administrators, you should probably create a custom administrative dashboard, where they can only perform the actions that your code allows.
There is no way to remove specific permissions, such as limiting the amount of data they can remove. It could be a useful feature request, so I suggest posting it here. But at the moment: if you don't trust users to be careful enough with your data, you should not give them access to the console.
As Travis said: setting up backups may be a good way to counter some of this anxiety.
I'd like to write a Qt application whose main purpose is to warn the user that there are things to do before he shuts down the computer. I think this should be possible, since a lot of applications ask the user to save before quitting when the computer is about to be shut down. I also want the user to be able to interrupt the shutdown process, just as those applications let the user say "Cancel".
Is there a way to do this in Qt?
If not, how can I do this at least in a GNOME session? (Support for more desktop environments would be nice, but currently this application is for me and my friends only, and we all use GNOME.)
I read about the signal QCoreApplication::aboutToQuit(), but the documentation says that it doesn't allow user interaction. My application doesn't use any widget (maybe only a system tray icon), so the QWidget::closeEvent isn't of any use either.
Would handling the appropriate POSIX signal help? As far as I know, such a signal handler may only contain trivial (async-signal-safe) statements, and asking the user whether to really shut down isn't trivial.
Here are some details, if it helps: when the user wants to shut down the computer, my application will check whether the repository he has configured to watch is "clean", i.e. there is nothing to commit. If there is something to be committed, the application should warn the user and let him choose to either ignore the uncommitted files or abort the shutdown process (in order to commit the changes).
You should implement session management. When the session is shutting down, QApplication::commitData() is called, and you can ask the session manager to allow user interaction:
Within this function, no user interaction is possible, unless you ask the manager for explicit permission.
There is also an example covering exactly your use case here.
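To make that concrete, here is a minimal sketch. It assumes the virtual QApplication::commitData() hook (Qt 4 and early Qt 5; newer Qt versions expose the same mechanism as the commitDataRequest() signal), and repositoryIsDirty() is a stand-in for your actual repository check:

    #include <QApplication>
    #include <QMessageBox>
    #include <QSessionManager>

    // Stand-in for the real check of the watched repository.
    bool repositoryIsDirty();

    class App : public QApplication {
    public:
        App(int &argc, char **argv) : QApplication(argc, argv) {}

        // Called by the session manager when the desktop session is shutting down.
        void commitData(QSessionManager &manager) override {
            if (!repositoryIsDirty())
                return; // nothing to warn about, let the shutdown proceed

            if (manager.allowsInteraction()) {
                const int choice = QMessageBox::warning(
                    nullptr, tr("Uncommitted changes"),
                    tr("The watched repository has uncommitted changes.\n"
                       "Shut down anyway?"),
                    QMessageBox::Yes | QMessageBox::Cancel);
                manager.release(); // hand interaction back to the session manager
                if (choice == QMessageBox::Cancel)
                    manager.cancel(); // ask the session manager to abort the shutdown
            } else {
                // No interaction allowed: err on the side of not losing work.
                manager.cancel();
            }
        }
    };

Note that cancel() only asks the session manager to abort the shutdown; whether that request is honoured is up to the session manager.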
I have been asked by my (paranoid!) boss to do two things:
1. Detect when a user uploads files to the net using HTTP. For example, how can I detect if a user uploads files to a free webserver somewhere and can hence steal company data?
2. Detect that a user is copying files to a USB device, and what the names of those files are. Also, if they copy a zip file, log the contents of the zip, in case someone just zips up some company files and takes them that way.
Firstly, is number 1 possible? And for number 2, can I detect the names of the files that are copied?
Secondly, any links to software that does this?
Note that I am the network admin and everyone who I will monitor has local admin rights on their computer, and we do not want to further restrict users' access.
Thanks a lot
"Note that I am the network admin and
everyone who I will monitor has local
admin rights on their computer and we
do not want to further restrict users
access."
You can have liberty or security, but not both. The number of paths to get data out of an unlocked box is too large to enumerate. Someone zipped up the files and put them on a thumb drive? What if they used tar or shar instead, or pasted the files into a Word document, or printed them to a PDF file and sent it out via e-mail, steganographically embedded in pornography?
Yeah, a former coworker was stupid enough to send a huge set of huge, logged e-mails to his future employer a couple of days prior to leaving, but you can't count on people being quite that stupid.
What your boss wants isn't possible given a moderately motivated thief and not wanting to "further restrict" access.
Given that freely available, cryptographically secure tools like OpenSSH (ssh, scp) are usable by almost anyone, what he's asking for is not possible.
I agree with all of you: Websense, a DLP solution, a proxy, or network monitoring can help you identify and stop activities not permitted by your policies. That said, the technology should be backed by an information security policy and an awareness program, so you have two fronts to build up: first, people must be warned by the information security policy and kept constantly informed by the awareness program; then, if someone breaks the policy, the technology has to do its work and warn you.
There's basically no way to prevent a malicious employee from stealing and exporting data, short of strip searches when entering and leaving the building and no outside network access whatsoever.
Your boss should be more concerned with accidental data leakage (i.e., a mistyped email address or a mistaken reply-all) and breach containment. The class of technologies dedicated to the former is called Data Leakage Prevention (DLP). I'm not hip to all their jive, but I bet more than a few companies would be willing to promise you the world if you showed interest.
The latter is mostly handled by closely following the "least privilege" mindset. A guy from sales should not be able to use CVS to check out the source code of the product, and a developer shouldn't be able to access the payroll database. Always grant only the minimum access someone needs to do their job.
Short answer: No. Not unless you're willing to "further restrict access".
The access restriction for http uploads would be a filtering internet proxy. Make everyone go through Websense or something, and you have a log of everything they did online.
For the USB devices, no. Your option there, and how companies with security needs of that magnitude deal with the issue, is to tightly lock down the clients and disable USB key use (as well as CD burners, floppy drives if you still have those, etc.). Again, that's going to require intrusive software, something like LANDesk, plus removing local admin rights so users can't take the software off.