I know that the sticky bit on a directory restricts deletion of the files inside it to each file's owner (or the directory's owner, or root).
But I can also chmod 1777 a regular file, and ls -l shows that the file indeed has the sticky bit set. So what does the sticky bit do when set on a file?
Nothing.
From the docs: https://www.freebsd.org/cgi/man.cgi?query=sticky&sektion=7&apropos=0&manpath=FreeBSD+10.3-RELEASE+and+Ports
A special file mode, called the sticky bit (mode S_ISTXT), is used to indicate special treatment for directories. It is ignored for regular files. See chmod(2) or the file <sys/stat.h> for an explanation of file modes.
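A quick way to see both cases for yourself (the directory and file names here are just illustrative):

mkdir shared && chmod 1777 shared
ls -ld shared      # drwxrwxrwt ...  anyone can create files here, but only a
                   # file's owner (or the dir owner, or root) may delete them
touch somefile && chmod 1777 somefile
ls -l somefile     # -rwxrwxrwt ...  the bit is stored and displayed, but has
                   # no effect on a regular file on modern systems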
Erik Bennett linked the Wikipedia article on the history of the sticky bit: its original purpose was as a hint to the kernel to keep a program's text segment in swap, for faster re-execution. Modern operating systems have abandoned this behaviour, though.
When set, it instructed the operating system to retain the text segment of the program in swap space after the process exited. This speeds up subsequent executions by allowing the kernel to make a single operation of moving the program from swap to real memory.
One notable problem with "stickied" programs was replacing the executable (for instance, during patching); to do so required removing the sticky bit from the executable, executing the program and exiting to flush the cache, replacing the binary executable, and then restoring the sticky bit.
Currently, this behavior is only operative in HP-UX and UnixWare. Solaris appears to have abandoned this in 2005. The 4.4-Lite release of BSD retained the old sticky bit behavior, but it has been subsequently dropped from OpenBSD (as of release 3.7) and FreeBSD (as of release 2.2.1). No version of Linux has ever supported this traditional behavior.
In RStudio, if you are dealing with a directory that contains a large number of files, and you want to commit and push the recent changes (that you made on all of them) to your repository, the GUI Git component gets super slow and practically doesn't work. Any idea?
Of course you can ignore the GUI and stick to command-line Git forever, but if you don't want to, a quick jump to the command line will solve this problem for now.
The temporary solution that I found is as follows:
Click on the blue gear icon in the Git panel inside RStudio.
Select Shell (a terminal window will pop up).
Type the add and commit command in the terminal:
{ATTENTION: The following command will commit changes to ALL files! You may want to use whatever is appropriate for your situation!}
git add -A && git commit -m 'staging all files'
Now you can go back to the Git GUI and click the Push button. All files that you staged in the terminal window will be pushed up to your repository.
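If you'd rather skip the GUI for the push as well, the whole sequence from that same shell is just (commit message illustrative, and the same all-files caveat applies):

git add -A && git commit -m 'staging all files' && git push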
My workaround today (sketched in shell after the list) was to:
1. create a man1/ directory in my RStudio project (after force-closing and relaunching RStudio a few times, after I had done something that caused it to hang again),
2. include man1/ in .gitignore,
3. move just about everything in man/ to man1/,
4. delete the .git/index.lock file in the repository,
5. futz around with RStudio until it was responsive enough to make the (small) commit of files from man/,
6. pull and push, so that the remote main was once again fully synced,
7. copy some files from man1/ to man/ and commit these,
8. rinse and repeat steps 6 and 7 until there's nothing left in man1/,
9. delete man1/ and its entry in .gitignore.
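A rough shell sketch of steps 1-7, doing the commits from the shell rather than RStudio; the commit messages and the file name man1/somefile.Rd are purely illustrative, and the plain mv assumes you want Git to see the moved files only as deletions under man/:

mkdir man1
echo 'man1/' >> .gitignore
mv man/* man1/                      # step 3: man1/ is ignored, so Git only
                                    # sees deletions under man/
rm -f .git/index.lock               # step 4: only if a stale lock was left
git add -A && git commit -m 'temporarily move most of man/ aside'   # step 5
git pull && git push                # step 6
cp man1/somefile.Rd man/            # step 7: restore a small batch...
git add man/ && git commit -m 'restore a few docs'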
My recipe above isn't one-size-fits-all. For example, you may have run into a "diff is too large" problem with RStudio because of a single oversized file rather than (as I had) "too many" small files. If you're trying to commit a monstrously big set of diffs from a single file, you should probably be adding that file to .gitignore rather than expecting Git to version-control it without difficulty. Also, if you're locking up RStudio's Git interface because "too many" files are being committed at once, your first port of call should, I think, be to commit directories one at a time (but do be sure to push and pull after each commit).
And... I'm not going to complain about this defect in how RStudio interacts with git!
Instead I'll close with some kudos. After just a few days of futzing around with RStudio, I'm finding it so much easier than what I remember of hacking on S via Emacs in the early 1990s. RStudio handles just about everything (especially the Roxygen documentation workflows) far better than the Emacs/ESS setup that I had been struggling to get fully operational earlier this month.
I'm also impressed with how R has developed since the last time I looked at it -- about 20 years ago! The semantics lurking in corner cases are still very "surprising", to put it mildly ;-) But I do appreciate how it has maintained compatibility with the truly bizarre and primitive semantics of S while allowing its power users (which will surely never include me!) to write expressively, elegantly, and with an appropriate balance between concision and the write-only alphabet soup of APL and its ilk (https://en.wikipedia.org/wiki/Write-only_language).
This is mostly a stupid question, since UPX (a tool that wrings extra bytes out of your executable files) saves only a tiny amount of space over the built-in compression in the buildapp tool.
A very small demo application creates a 42 megabyte file. Understandable, since the SBCL environment isn't tiny.
Passing the --compress-core option to buildapp shrinks that down to 9.2MB.
I thought I'd try throwing UPX at the resulting binary, and the savings are modest: 9994288 -> 9871360 bytes (roughly 120 KB).
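For reference, roughly the commands in play; the file and entry-point names are hypothetical, and the flags follow buildapp's and UPX's usual usage:

buildapp --load demo.lisp --entry demo:main --compress-core --output demo
upx demo    # repack the resulting binary in place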
However, the resulting file no longer runs: it just drops into the SBCL REPL (with no errors, as if I had just run sbcl by hand), and some poking around there reveals that the functions making up my test program no longer exist.
What did UPX do to the binary that resulted in this breakage?
This may not be the answer, but it may serve as a clue: I've found that if you add anything, even a single byte, to the end of an SBCL executable created with sb-ext:save-lisp-and-die, then all the definitions disappear, just as you described.
Perhaps SBCL creates executables by appending the core (which contains your definitions) to a copy of the SBCL ELF (or PE on Windows) binary, plus some metadata at the end so that SBCL can still find the beginning of the core even though it's appended to an executable.
If you hex-edit an executable created with save-lisp-and-die, you'll find that it ends with the string "LCBS" (SBCL backwards), which seems to support my theory. "LCBS" probably serves as a magic number, letting SBCL know that yes, this executable contains its own core.
UPX compresses executables, probably including that magic number at the end. When SBCL opens its UPX-compressed self on disk, it won't find "LCBS" at the end, so it assumes that there is no core appended to the executable.
I can't explain why the standard library seems to still be there if this is the case. It could be that SBCL loads /usr/lib/sbcl/sbcl.core (or its equivalent on Windows) in that case. This could be tested by moving the executable to a machine where SBCL is not installed and seeing if it works at all, and if so, whether you still have car, cdr, list, etc.
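One way to poke at this theory yourself (the binary name ./myapp is hypothetical): dump the tail of the executable and look for the trailer.

# an untouched save-lisp-and-die binary should show "LCBS" at the very end;
# after UPX repacking (or appending even one byte) the trailer is no longer
# the last thing in the file, so SBCL can't find the embedded core
tail -c 16 ./myapp | xxd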
Is this possible to do?
Conceptually, a solution should apply across a lot of possible configurations, ranging from two vim instances running in separate panes of a tmux window, to instances in separate terminals on separate machines in different geographical regions, one or both reached over the network (in other words, the vims are hosted by two separate shell processes, which they would already be under tmux anyhow).
The case that prompted me to ponder this:
I have two tmux panes, both with Vim open, and I want to use Vim's yank/paste to copy text from one file to the other.
But it only works if I've got them both running in the same instance of Vim, so I am forced to either:
1. use tmux's copy/paste feature to get the content over (which is somewhat tedious and finicky), or
2. use the terminal's (PuTTY, iTerm2) copy/paste feature to get the content over (similarly tedious, and while not subject to network latency, it only works up to a certain size of text, at which point it fails entirely because the terminal doesn't know the contents of the parts of the file that aren't currently visible), or
3. lose Vim buffer history/context and possibly shell history/context by reopening the file manually in one of the Vim instances, in either a split or a tab, and then closing the other terminal context (much less tedious than option 1 for large payloads, but more so for small ones).
This is a bit of a PITA and could all be avoided if I had the foresight to switch to an appropriate terminal already running Vim before opening my files, but the destiny of workflow and habit rarely matches up with what would have been convenient.
So the question is: does there exist a command, or the possibility of a straightforwardly constructed (shell) script, that allows me to join buffers across independently running Vim instances? I am having a hard time getting Google to answer that adequately.
In the absence of an adequate answer (or if it is determined with reasonable certainty that Vim does not possess the features to accomplish the transfer of buffers across its instances), a good implementation (bindable to keys) for approach 3 above is acceptable.
Meanwhile I'll go back to customizing my vim config further and forcing myself to use as few instances of vim as possible.
No, Vim can't share a session between multiple instances. This is how it's designed and it doesn't provide any session-sharing facility. Registers, on-the-fly mappings/settings, command history, etc. are local to a Vim session and you can't realistically do anything about that.
But your title is a bit misleading: you wrote "buffer" but it looks like you are only after copying/pasting (which involves "register", not "buffers") from one Vim instance to another. Is that right? If so, why don't you simply get yourself a proper build with clipboard support?
Copying/yanking across instances is as easy as "+y in one and "+p in another.
Obviously, this won't work if your Vim instances are on different systems. In such a situation, "+y in the source Vim and system-provided paste in the destination Vim (possibly with :set paste) is the most common solution.
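A quick way to check whether your build already has the feature (on a Unix-like system) is to look for +clipboard (or +xterm_clipboard) rather than -clipboard in the output of:

vim --version | grep clipboard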
If you are on a Mac, install MacVim and move the accompanying mvim shell script somewhere in your path. You can use the MacVim executable in your terminal with mvim -v.
If you are on Linux, install the vim-gnome package from your package manager.
If you are on Windows, install the latest "Vim without Cream".
But the whole thing looks like an XY problem to me. Using Vim's built-in :e[dit] command efficiently is probably the best solution to what appears to be your underlying problem: editing many files from many different shells.
After a lot of trial and error (mostly due to lack of documentation and examples) I have managed to create MSI installers that install custom DLLs to WinSxS as side-by-side assemblies. There is only one problem: uninstalling leaves all files (DLLs, manifests and catalogs) in the WinSxS directory. How can or should I best clean that up? I know for sure that nothing else references it.
I have read somewhere that WinSxS has a self-scavenging process that cleans up over time but I could not find more information about that. Can you manually invoke this to clean up stuff?
The only other way I see is manually deleting those bits. First you have to change the owner of all files (assembly, catalog, manifest and their respective directory) from SYSTEM to an administrator account, adjust the permissions and delete them. There are also pieces left in the registry (I think HKLM\COMPONENTS\DerivedData\Components may be one place), but since WinSxS should be treated as opaque it is hard to find any information.
Scavenging isn't exposed anywhere that I know of. I'm not even sure when it is kicked off automatically. Maybe on uninstall of a service pack? Maybe some tool admins can run? I really forget.
Anyway, my suggestion is don't fight it. There are so many twisty turns down there that it just isn't worth trying to get the disk space back. Once uninstalled the bits still in the SxS cache will not be activated so they are just wasting space.
It's a dumb design but blame Microsoft and don't try to overcompensate.
Here is an article that is a fairly complete guide to WinSxS.
In short, you can only uninstall some components (all of their versions live in this folder), and you can run the Service Pack "bridge-burning" utility (in Vista it is named VSP1CLN.EXE and ships with SP1). Note that after running it, you will no longer be able to uninstall the service pack or roll components back to a state prior to the SP release.
No one is convinced you can: short of a complete reinstall, your bloated WinSxS directory is there to stay.
There's been a long "discussion" of the problem on TechNet.
There is no documentation of the format, nor any instructions on how to remove files that are no longer needed; MS seems to think that disk space is cheap. There is a self-scavenging feature, but no one's convinced it works, or if it does, it is very conservative (as you'd hope, since you don't want it to break your OS).
You can tell if the scavenger is working by checking the C:\Windows\winsxs\Temp\PendingDeletes folder, as this is where Windows Update or an installer moves files to; the scavenger just deletes the files in there.
You'll notice that after you uninstall your assembly, while the files are still there, they can no longer be bound to - so they are just "staged", or cached, but not really installed.
Rob & gbjbaanb are correct - you cannot manually invoke a scavenge yourself. Don't try to delete the files yourself - there are multiple places in the registry where they are registered, DerivedData\Components being only one of the many references.
I think the rule for Vista is scavenging is kicked off by the TrustedInstaller service after 10 minutes of machine inactivity, after the last servicing operation (service pack, hotfix, etc). But it's very fickle, so it doesn't run as often as it should. So just be patient, and the files will disappear on their own.
Well, I was having some issues: I have an 80 GB SSD for my Windows install, and the WinSxS folder was about 12 GB.
I was searching the net and I found this command:
DISM.exe /online /Cleanup-Image /spsuperseded
And now my WinSxS is 7 GB, which was wonderful news.
There are a few updates regarding the cleanup method that apply to newer versions of Windows. Check http://www.karafilis.net/winsxs-cleanup
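For what it's worth, on Windows 8 / Server 2012 and later the superseded-component cleanup moved to newer DISM switches; treat these as something to verify against your exact OS version (/AnalyzeComponentStore needs 8.1 or later):

DISM.exe /Online /Cleanup-Image /AnalyzeComponentStore
DISM.exe /Online /Cleanup-Image /StartComponentCleanup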
Assume that I set up an automatic nightly build. What artifacts of the build should I save?
For example:
Input source code
Output binaries
Also, how long should I save them, and where?
Do your answers change if I do Continuous Integration?
You shouldn't save anything just for the sake of saving it; you should save it because you need it (e.g., QA uses nightly builds to test). At that point, "how long to save it" becomes however long QA wants them.
I wouldn't "save" source code so much as tag/label it. I don't know what source control you're using, but tagging is trivial (in performance and disk space) for any quality source control system. Once your build is tagged, unless you need the binaries, there really isn't any benefit to keeping them around, because you can simply re-compile from source when necessary.
Most CI tools let you tag on each successful build. This can become problematic for some systems as you can easily have 100+ tags a day. For such cases I recommend still running a nightly build and only tagging that.
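For example, with Git (assuming that's your SCM; the tag name is purely illustrative), tagging the nightly that just built is one cheap command:

git tag -a nightly-20160514 -m 'nightly build' && git push origin nightly-20160514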
Here are some artifacts and pieces of information that I'm used to keeping for each build:
The tag name of the snapshot you are building (tag and do a clean checkout before you build)
The build scripts themselves, or their version number (if you treat them as a separate project with its own version control)
The output of the build script: logs and final product
A snapshot of your environment:
compiler version
build tool version
libraries and dll/libs versions
database version (client & server)
IDE version
script interpreter version
OS version
source control version (client and server)
versions of other tools used in the process and everything else that might influence the content of your build products. I usually do this with a script that queries all this information and logs it to a text file that should be stored with the other build artifacts.
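A minimal sketch of such a script, assuming a Unix-like build host and that the tools listed are the ones your build actually uses (swap in your own):

#!/bin/sh
# Log the build environment next to the other build artifacts.
{
  echo "=== build environment snapshot: $(date -u) ==="
  uname -a                      # OS version
  gcc --version | head -n 1     # compiler version
  make --version | head -n 1    # build tool version
  git --version                 # source control client version
} > build-environment.txt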
Ask yourself this question: "if something destroys entirely my build/development environment what information would I need to create a new one so I can redo my build #6547 and end up with the exact same result I got the first time?"
Your answer is what you should keep at each build and it will be a subset or superset of the things I already mentioned.
You can store everything in your SCM (I'd recommend a separate repository), but in that case your question of how long you should keep the items no longer makes sense. Or you can store it all in zipped folders, or burn a CD/DVD with the build result and artifacts. Whatever you choose, keep a backup copy.
You should store them as long as you might need them. How long, will depend on your development team pace and your release cycle.
And no, I don't think it changes if you do continuous integration.
This isn't a direct answer to your question, but don't forget to version control the nightly build setup itself. When the project structure changes, you may have to change the build process, which will break older builds from that point on.
In addition to the binaries, as everyone else has mentioned, I would recommend setting up a symbol server and a source server and making sure you get the correct information out and into those. It will aid in debugging tremendously.
We save the binaries, stripped and unstripped (so we have exactly the same binary, once with and once without debug symbols). Furthermore, we build everything twice, once with debug output enabled and once without (again stripped and unstripped, so every build results in four binaries). The build is stored in a directory named after the SVN revision number. That way we can always retrieve the source from the SVN repository by simply checking out that very revision (so the source is effectively archived as well).
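A sketch of that layout, assuming an SVN working copy; the paths and file names are illustrative:

# archive each build's outputs under the SVN revision it was built from
REV=$(svnversion -n .)
mkdir -p /archive/builds/"$REV"
cp myapp myapp.stripped build.log /archive/builds/"$REV"/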
A surprising one I learned about recently: If you're in an environment that might be audited you'll want to save all the output of your build, the script output, the compiler output, etc.
That's the only way you can verify your compiler settings, build steps, etc.
Also, how long to save them for, and where to save them?
Save them until you know that build won't be going to production; in other words, as long as you have the compiled bits around.
One logical place to save them is your SCM system. Another option is to use a tool that will automatically save them for you, like AnthillPro and its ilk.
We're doing something close to "embedded" development here, and I can tell you what we save:
the SVN revision number and timestamp, as well as the machine it was built on and by whom (also burned into the build binaries)
a full build log, showing whether it was a full/incremental build, any interesting (STDERR) output the data baking tools produced, a list of files compiled and any compiler warnings (this compresses very well, being text)
the actual binaries (for anywhere from 1-8 build configurations)
files produced as a side effect of linking: a linker command file, address map, and a sort of "manifest" file indicating what was burned into the final binaries (CRC and size for each), as well as the debugging database (.pdb equivalent)
We also mail out the result of running some tools over the "side-effect" files to interested users. We don't actually archive these since we can reproduce them later, but these reports include:
total and delta of filesystem size, broken down by file type and/or directory
total and delta of code section sizes (.text, .data, .rodata, .bss, .sinit, etc)
When we have unit tests or functional tests (e.g. smoke tests) running, those results show up in the build log.
We've not thrown anything out yet; granted, our target builds usually end up at ~16 or 32 MiB per configuration, and they're fairly compressible.
We do keep uncompressed copies of the binaries around for 1 week for ease of access; after that we keep only the lightly compressed version. About once a month we have a script that extracts each .zip that the build process produces and 7-zips a whole month of build outputs together (which takes advantage of only having small differences per build).
An average day might have a dozen or two builds per project... The buildserver wakes up about every 5 minutes to check for relevant differences and builds. A full .7z on a large very active project for one month might be 7-10GiB, but it's certainly affordable.
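A rough sketch of that monthly rollup, assuming the 7z command-line tool and an illustrative builds/2016-04/ layout:

# unpack every per-build .zip, then pack the whole month into one solid .7z
# so near-identical builds compress against each other
for z in builds/2016-04/*.zip; do
    unzip -q "$z" -d "builds/2016-04/unpacked/$(basename "$z" .zip)"
done
7z a -mx=9 -ms=on builds/2016-04.7z builds/2016-04/unpacked/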
For the most part, we've been able to diagnose everything this way. Occasionally there's a hiccup on the build system and a file isn't actually at the revision it's supposed to be when a build happens, but there's usually enough evidence of this in the logs. Sometimes we have to dig out a tool that understands the debugging database format and feed it a few addresses to diagnose a crash (we have automatic stack dumps built into the product). But usually all the information needed is there.
We haven't had to crack open the .7z archives yet, I should mention. But we have the info there, and I have some interesting ideas on how to mine bits of useful data from them.
Save what can't be reproduced easily. I work on FPGAs, where only the FPGA team has the tools and some cores (libraries) of the design are licensed to compile on only one machine. So we save the output bitstreams. But try to check them in over one another rather than keeping each under a date/time/version stamp.
Save as in check in to source code control, or just save on disk? Save nothing to source code control. All derived files should be visible in the file system and available to developers. Don't check in binaries, code generated from XML files, message digests, etc. A separate packaging step will make these end products available. As long as you have the change number, you can always reproduce the build if necessary, assuming of course that everything you need to do a build is completely in the tree and available to all builds by syncing.
I would save your built binaries for exactly as long as they have a chance to go into production or be used by some other team (like a QA group). Once something has left production, what you do with it can vary a lot. For a lot of teams, they'll keep just their most recent prior build around (for rollback) and otherwise discard their builds.
Others have regulatory requirements to keep anything that went into production around for as long as seven years (banks). If you are a product company, I'd keep around any binary a customer might have installed in case a tech support guy wants to install the same version.