When the QSharedMemory example is running, a mapped file is created in the /tmp directory.
$ ls -s /tmp/*Example*
0 /tmp/qipc_sharedmemory_QSharedMemoryExampleff77196351dd7c1c8f79461ad32f21726fe31f5b
0 /tmp/qipc_systemsem_QSharedMemoryExampleff77196351dd7c1c8f79461ad32f21726fe31f5b
Everything works fine, but the file always has zero length. I tried using setNativeKey(), but the result is the same.
When I use the shm_open/ftruncate/mmap functions from C, the mapped file has the size given in the ftruncate() parameter.
Is this the expected behavior?
realpath <<<'foo' fails with "realpath: missing operand". I don't know what that means.
realpath <(<<<'foo') returns /proc/3443695/fd/pipe:[26244650], which I guess means it's creating a temporary pipe that will contain the string "foo".
Or maybe printf makes it clearer:
❯ printf "%q" <<<'foo' # no output
❯ printf "%q" <(<<<'foo')
/proc/self/fd/11%
The actual program I'm trying to call doesn't like either of those. I think I need an actual file.
I can do this in multiple commands by creating a file with mktemp, writing to it, and then passing it as the argument, but does zsh have any convenient syntax for doing this in place? A one-liner?
It looks like the =(list) process substitution should do what you want.
From the zshexpn man page:
If =(...) is used instead of <(...), then the file passed as an
argument will be the name of a temporary file containing the output
of the list process. This may be used instead of the < form for a
program that expects to lseek on the input file.
...
The temporary file created by the process substitution will be deleted when the function exits.
On my system, realpath =(<<<'foo') returns something like /private/tmp/zsh3YAdDx, i.e. the name of a temporary file that does indeed appear to be deleted after executing the command.
As a bonus, the documentation notes that in some cases the =(<<<...) form is optimized to execute completely in the current shell.
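For instance, passing the substitution to a program that needs a real, seekable path behaves as expected; wc here just confirms that an actual 4-byte file ("foo" plus a newline) is created:
wc -c =(<<<'foo')
# prints something like: 4 /tmp/zshAbC123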
I am using a gtar command that creates a 10 GB file on a pen drive. I researched a little and learned that the FAT32 file system supports a maximum file size of 4 GB. How can I make the running gtar command split its output into multiple files, each smaller than 4 GB?
gtar should detect when the file it is creating is about to exceed 4 GB, stop writing to it, and continue with the next one.
I know that we could create the 10 GB file in one location and split that static file afterwards, but we do not want this.
You can use the split command like this:
cd /path/to/flash
tar cvf - /path/of/source/files | split -b 4095m -
The -b 4095m limit keeps each piece just under FAT32's 4 GB maximum; split writes the pieces as xaa, xab, and so on in the current directory.
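To restore the original files later, concatenate the pieces back into tar (assuming the default xaa, xab, ... piece names that split produces):
cat x?? | tar xvf -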
Check the man page for these options.
[--tape-length=NUMBER] [--multi-volume]
Device selection and switching:

  -f, --file=ARCHIVE
        use archive file or device ARCHIVE
  --force-local
        archive file is local even if it has a colon
  -F, --info-script=NAME, --new-volume-script=NAME
        run script at end of each tape (implies -M)
  -L, --tape-length=NUMBER
        change tape after writing NUMBER x 1024 bytes
  -M, --multi-volume
        create/list/extract multi-volume archive
  --rmt-command=COMMAND
        use given rmt COMMAND instead of rmt
  --rsh-command=COMMAND
        use remote COMMAND instead of rsh
  --volno-file=FILE
        use/update the volume number in FILE

Device blocking:

  -b, --blocking-factor=BLOCKS
        BLOCKS x 512 bytes per record
  -B, --read-full-records
        reblock as we read (for 4.2BSD pipes)
  -i, --ignore-zeros
        ignore zeroed blocks in archive (means EOF)
  --record-size=NUMBER
        NUMBER of bytes per record, multiple of 512
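Putting those options together, a multi-volume archive with volumes that fit on FAT32 might look like the sketch below. -L counts units of 1024 bytes, so 4193280 x 1024 bytes = 4095 MiB per volume; tar pauses at each volume boundary, where you can enter n followed by a new file name (or automate that with an --info-script):
tar -c -M -L 4193280 -f /path/to/flash/backup.tar /path/of/source/files
Note that multi-volume archives cannot be compressed, so -z or -j cannot be combined with -M here.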
I'm trying to come up with a unix pipeline of commands that will allow me to log only the most recent n lines of a program's output to a text file.
The text file should never be more than n lines long (it may be shorter while the file is first filling up).
It will be run on a device with limited memory/resources, so keeping the filesize small is a priority.
I've tried stuff like this (n=500):
program_spitting_out_text > output.txt
cat output.txt | tail -500 > recent_output.txt
rm output.txt
or
program_spitting_out_text | tee output.txt | tail -500 > recent_output.txt
Obviously neither works for my purposes...
Anyone have a good way to do this in a one-liner? Or will I have to write a script/utility?
Note: I don't want anything to do with dmesg and must use standard BSD unix commands. The "program_spitting_out_text" prints about 60 lines per second, continuously.
Thanks in advance!
If program_spitting_out_text runs continuously and keeps its file open, there's not a lot you can do.
Even deleting the file won't help, since the program will keep writing to the now "hidden" file (the data still exists but there is no directory entry for it) until it closes the handle, at which point the file is really removed.
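If lsof is available, you can actually see this state; it can list files that have been unlinked but are still held open by a process:
lsof +L1
# shows open files with a link count below 1, i.e. deleted but still open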
If it closes and reopens the log file periodically (every line or every ten seconds or whatever), then you have a relatively easy option.
Simply monitor the file until it reaches a certain size, then roll the file over, something like:
while true; do
    sleep 5
    lines=$(wc -l <file.log)
    if [[ $lines -ge 5000 ]]; then
        rm -f file2.log
        mv file.log file2.log
        touch file.log
    fi
done
This script will check the file every five seconds and, if it's 5000 lines or more, will move it to a backup file. The program writing to it will continue to write to that backup file (since it has the open handle to it) until it closes it, then it will re-open the new file.
This means you will always have (roughly) between five and ten thousand lines in the log file set, and you can search them with commands that combine the two:
grep ERROR file2.log file.log
Another possibility is restarting the program periodically, if that doesn't affect its function. For example, a program which checks for the existence of a file once a second and reports on it can probably be restarted without a problem; one calculating pi to a hundred billion significant digits probably cannot.
If it is restartable, then you can use basically the same trick as above: when the log file reaches a certain size, kill off the current program (which you will have started as a background task from your script), do whatever magic you need in rolling over the log files, then restart the program.
For example, consider the following (restartable) program prog.sh which just continuously outputs the current date and time:
#!/usr/bin/bash
while true; do
    date
done
Then, the following script will be responsible for starting and stopping the other script as needed, by checking the log file every five seconds to see if it has exceeded its limits:
#!/usr/bin/bash
exe=./prog.sh
log1=prog.log
maxsz=500
pid=-1
touch ${log1}
log2=${log1}-prev
while true; do
    if [[ ${pid} -eq -1 ]]; then
        lines=${maxsz}
    else
        lines=$(wc -l <${log1})
    fi
    if [[ ${lines} -ge ${maxsz} ]]; then
        if [[ $pid -ge 0 ]]; then
            kill $pid >/dev/null 2>&1
        fi
        sleep 1
        rm -f ${log2}
        mv ${log1} ${log2}
        touch ${log1}
        ${exe} >> ${log1} &
        pid=$!
    fi
    sleep 5
done
And this output (from an every-second wc -l on the two log files) shows what happens at the time of switchover, noting that it's approximate only, due to the delays involved in switching:
474 prog.log 0 prog.log-prev
496 prog.log 0 prog.log-prev
518 prog.log 0 prog.log-prev
539 prog.log 0 prog.log-prev
542 prog.log 0 prog.log-prev
21 prog.log 542 prog.log-prev
Now keep in mind that's a sample script. It's relatively intelligent but probably needs some error handling so that it doesn't leave the executable running if you shut down the monitor.
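As a minimal sketch of that error handling, a trap near the top of the monitor script (reusing the pid variable it already maintains) would stop the child whenever the monitor itself terminates:
# kill the background program, if any, when this script exits
trap '[[ ${pid} -ge 0 ]] && kill ${pid} 2>/dev/null' EXIT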
And, finally, if none of that suffices, there's nothing stopping you from writing your own filter program which takes standard input and continuously outputs that to a real ring buffer file.
Then you would simply do:
program_spitting_out_text | ringbuffer 4096 last4k.log
That program could be a true ring buffer in that it treats the 4k file as a circular character buffer but, of course, you'll need a special marker in the file to indicate the write-point, along with a program that can turn it back into a real stream.
Or, it could do much the same as the scripts above, rewriting the file so that it's always below the size desired.
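A rough sketch of that simpler variant in plain shell is below; the script name is illustrative, and running wc -l on every input line is wasteful at 60 lines/second, so a real version would check the size only every so many lines:
#!/usr/bin/env bash
# usage: program_spitting_out_text | ./ringbuffer.sh 500 recent_output.log
maxlines=$1
logfile=$2
while IFS= read -r line; do
    printf '%s\n' "$line" >> "$logfile"
    if (( $(wc -l < "$logfile") > maxlines )); then
        # rewrite the file in place, keeping only the newest lines
        tail -n "$maxlines" "$logfile" > "$logfile.tmp" &&
            mv "$logfile.tmp" "$logfile"
    fi
done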
Since apparently this basic feature (a circular file) does not exist on GNU/Linux, and because I needed it to track logs on my Raspberry Pi with limited storage, I just wrote the code as suggested above!
Behold: circFS
Unlike the other tools mentioned in this post and similar ones, the maximum size is arbitrary and limited only by the actual available storage.
It does not rotate across several files; everything is kept in a single file, which is rewritten on "release".
You can have as many log files as needed in the virtual directory.
It is a single C file (~600 lines including comments), and it builds with a single compile command once the FUSE development dependencies are installed.
This first version is very basic (see the README); if you want to improve it with some of the TODOs (see the TODO), feel free to submit pull requests.
As a joke, this is my first "write only" FUSE driver! :-)
I am using the shell() command to generate PDF documents from .tex files within a function. This function sometimes gets run multiple times with adjusted data, and so it overwrites the documents. Of course, if the PDF file is open when the .tex file is run, an error is generated saying the .tex file can't be run. So I want to know whether there are any R or Windows cmd commands that will check whether a file is open or not.
I'm not claiming this is a great solution: it is hacky, but maybe it will do. You can make a copy of the file and try to overwrite your original file with it. If it fails, no harm is done. If it succeeds, you'll have modified the file's metadata (not its contents), but since your end goal is to overwrite it anyway, I doubt that will be a huge problem. Either way, you'll know whether or not the file can be rewritten.
is.writeable <- function(f) {
  tmp <- tempfile()
  on.exit(unlink(tmp))  # clean up the temporary copy afterwards
  file.copy(f, tmp)
  # overwrite = TRUE is required: without it, file.copy() refuses to
  # replace the existing file and always returns FALSE
  success <- file.copy(tmp, f, overwrite = TRUE)
  return(success)
}
openfiles /query /v|(findstr /i /c:"C:\Users\David Candy\Documents\Super.xls"&&echo File is open||echo File isn't opened)
Output
592 David Candy 1756 EXCEL.EXE C:\Users\David Candy\Documents\Super.xls
File is open
findstr returns errorlevel 0 if the string is found and 1 or higher if it is not found or an error occurs.
& separates commands on a line.
&& executes this command only if previous command's errorlevel is 0.
|| (not used above) executes this command only if previous command's errorlevel is NOT 0
> output to a file
>> append output to a file
< input from a file
| output of one command into the input of another command
^ escapes any of the above, including itself, if needed to be passed to a program
" parameters with spaces must be enclosed in quotes
+ used with copy to concatenate files. E.g. copy file1+file2 newfile
, used with copy to indicate missing parameters. This updates the file's modified date. E.g. copy /b file1,,
%variablename% an inbuilt or user-set environmental variable
!variablename! a user-set environmental variable expanded at execution time, turned on with the Setlocal EnableDelayedExpansion command
%<number> (%1) the nth command line parameter passed to a batch file. %0 is the batchfile's name.
%* (%*) the entire command line.
%<a letter> or %%<a letter> (%A or %%A) the variable in a for loop. Single % sign at command prompt and double % sign in a batch file.
I am trying to check for files which are larger than a given threshold. I know that the du command gives me the output for each file/folder, but how do I put that into a single line on the shell (using awk with an if clause?)?
Use find with the -size parameter. Prepending + yields all files of equal or greater size. For example, to find all files of at least 10 MB in the current directory:
find . -size +10M
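If you specifically want the du-based approach mentioned in the question, a find/du/awk combination is a workable sketch: du -k reports each file's size in KiB, and the awk condition keeps files above 10 MiB (note that printing $2 breaks on filenames containing whitespace):
find . -type f -exec du -k {} + | awk '$1 > 10*1024 { print $2 }'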