How to make ltrace -S show all arguments of system calls?

I was using ltrace -S to see what system calls dlopen was making, but then I noticed that SYS_mmap was shown with only 4 arguments:
SYS_mmap(0x7f1c325fe000, 8192, 3, 2066)
while it takes a total of 6 arguments. In particular, the file descriptor, which is the fifth argument, is not shown, and it is crucial for my analysis.
Is there a way to make ltrace show all of the arguments?
Tested in ltrace 0.7.3, Ubuntu 16.04.

As mentioned by Mark Plotnick:
sed 's/;addr SYS_mmap/addr SYS_mmap/' /etc/ltrace.conf > ltrace.conf
ltrace -S -F ltrace.conf ./dlopen.out
and now the mmaps look just right:
SYS_mmap(0, 285983, 1, 2, 3, 0) = 0x7f7db3ea6000
Tested on Ubuntu 18.04, ltrace 0.7.3.
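If you want to confirm which prototype your ltrace is picking up, and what the edit actually changed, a quick sanity check (assuming the stock configuration really lives in /etc/ltrace.conf, as in the commands above) is:
grep -n 'SYS_mmap' /etc/ltrace.conf
diff /etc/ltrace.conf ltrace.conf   # shows exactly what the sed rewrote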

Related

R + HPC + git question: submitting multiple jobs with different values for a parameter list [duplicate]

I am running R on a multi-node Linux cluster. I would like to run my analysis in R using scripts or batch mode, without using parallel computing software such as MPI or snow.
I know this can be done by dividing the input data so that each node runs a different part of the data.
My question is how do I go about this exactly? I am not sure how I should code my scripts. An example would be very helpful!
I have been running my scripts so far using PBS, but it only seems to run on one node, as R is a single-threaded program. Hence, I need to figure out how to adjust my code so it distributes the work across all of the nodes.
Here is what I have been doing so far:
1) command line:
qsub myjobs.pbs
2) myjobs.pbs:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64

pbsdsh -v $PBS_O_WORKDIR/myscript.sh
3) myscript.sh:
#!/bin/sh
cd $PBS_O_WORKDIR
R CMD BATCH --no-save my_script.R
4) my_script.R:
library(survival)
...
write.table(test, "TESTER.csv",
    sep=",", row.names=F, quote=F)
Any suggestions will be appreciated! Thank you!
-CC
This is really a PBS question. I usually write an R script (with the Rscript path after #!) and have it read a parameter (via the commandArgs function) that controls which "part of the job" the current instance should handle. Because I use multicore a lot, I usually only need 3-4 nodes, so I just submit a few jobs calling this R script, each with one of the possible values of the control argument.
On the other hand, your use of pbsdsh should do the job; the value of PBS_TASKNUM can then be used as the control parameter, as sketched below.
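A minimal sketch of that idea, reusing the myscript.sh / my_script.R names from the question and assuming that pbsdsh exports PBS_TASKNUM to each task it starts and that Rscript is on the PATH:
#!/bin/sh
# myscript.sh - started once per task by pbsdsh
cd $PBS_O_WORKDIR
# hand the task number to R as the control argument;
# my_script.R can read it with commandArgs(trailingOnly = TRUE)
Rscript my_script.R $PBS_TASKNUM > my_script.$PBS_TASKNUM.log 2>&1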
This was an answer to a related question, but it answers the comment above as well.
For most of our work, we run multiple R sessions in parallel using qsub instead.
If it is for multiple files I normally do:
while read infile rest
do
qsub -v infile=$infile call_r.pbs
done < list_of_infiles.txt
call_r.pbs:
...
R --vanilla -f analyse_file.R $infile
...
analyse_file.R:
args <- commandArgs()
infile <- args[5]
outfile <- paste(infile, ".out", sep="")
...
Then I combine all the output afterwards...
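The combining step depends on what each run writes. If, for example, every .out file is a CSV with a single header row (an assumption, not something stated above), the merge can be as simple as:
set -- *.out                         # all per-file outputs
head -n 1 "$1" > combined.csv        # keep the header from the first file
tail -q -n +2 "$@" >> combined.csv   # append the data rows from every file (GNU tail's -q suppresses file-name headers)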
This problem seems very well suited to GNU parallel, which has an excellent tutorial. I'm not familiar with pbsdsh, and I'm new to HPC, but it looks to me like pbsdsh serves a similar purpose to GNU parallel. I'm also not familiar with launching R from the command line with arguments, but here is my guess at how your PBS file would look:
#!/bin/sh
#PBS -l nodes=6:ppn=2
#PBS -l walltime=00:05:00
#PBS -l arch=x86_64
...
parallel -j2 --env PBS_O_WORKDIR --sshloginfile $PBS_NODEFILE \
Rscript myscript.R {} :::: infilelist.txt
where infilelist.txt lists the data files you want to process, e.g.:
inputdata01.dat
inputdata02.dat
...
inputdata12.dat
Your myscript.R would access the command line argument to load and process the specified input file.
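One way to sanity-check the wiring before submitting the job is GNU parallel's --dry-run option, which only prints the command lines it would have run (shown with the same hypothetical myscript.R and infilelist.txt names as above):
# prints one "Rscript myscript.R inputdataNN.dat" line per entry in infilelist.txt
parallel --dry-run Rscript myscript.R {} :::: infilelist.txt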
My main purpose with this answer is to point out the availability of GNU parallel, which came about after the original question was posted. Hopefully someone else can provide a more tangible example. Also, I am still wobbly with my usage of parallel; for example, I'm unsure of the -j2 option. (See my related question.)

How to execute linux commands from R via bash under the Windows Subsystem for Linux (WSL)?

The WSL on Windows 10 allows execution of Linux commands and command-line tools via bash.exe. Very usefully, a Linux tool/command can be called from the Windows command-line (cmd.exe) by passing it as an argument to bash.exe like so:
bash.exe -c <linux command>
This is very useful because it should allow Windows-based scripts to combine Windows and Linux tools seamlessly.
Unfortunately, I have failed to call Linux commands from an R script (see below).
0) System
Win10 x64 + Anniversary Update + WSL installed
1) Comparison cases where calling Linux commands work
The following all work for me; shown here just with an example call to ls.
From the Windows command line (cmd.exe prompt):
bash -c "ls /mnt/a"
bash -c "ls /mnt/a > /mnt/a/test.txt"
The same works if started from WinKey + R.
The same works from within a .bat file.
It can also be called from compiled code; I tried it with Delphi XE2, 32-bit and 64-bit, using ShellExecute.
For example, these work (32- and 64-bit):
ShellExecute (0, PChar('open'), PChar('cmd.exe'), PChar('/c c:\windows\system32\bash.exe -c "ls /mnt/a > /mnt/a/test.txt"'), nil, SW_SHOWNORMAL);
Or (32-bit code):
ShellExecute (0, PChar('open'), PChar('c:\windows\sysnative\bash.exe'), PChar('-c "ls /mnt/a > /mnt/a/test.txt"'), nil, SW_SHOWNORMAL);
Or (64-bit code):
ShellExecute (0, PChar('open'), PChar('c:\windows\system32\bash.exe'), PChar('-c "ls /mnt/a > /mnt/a/test.txt"'), nil, SW_SHOWNORMAL);
All of these seem to work (and ShellExecute returns 42).
2) Failure to call Linux commands from R
using R 3.3.1 x64
All of the below (and several similar things I've tried) fail with status 65535:
shell('c:/windows/system32/bash.exe -c "ls /mnt/a"', shell="cmd.exe", flag = "/c")
shell("ls", shell="c:/windows/system32/bash.exe", flag = "-c")
system('cmd /c c:/windows/system32/bash.exe -c "ls /mnt/a > /mnt/a/test.txt"')
system('bash -c "ls /mnt/a"')
system('c:/windows/system32/bash.exe -c "ls /mnt/a > /mnt/a/test.txt"')
3) Question
Given that examples under 1) work, I find 2) very puzzling. Am I missing anything obvious here?
I would be very grateful for a simple example where running a Linux command via bash.exe under WSL works.
Your failing examples should now work correctly in Windows 10 Insider builds >= 14951, which introduced many "interop" improvements and new capabilities:
> system('bash -c "ls /"')
Generates:
bin cache dev home lib media opt root sbin srv tmp var
boot data etc init lib64 mnt proc run snap sys usr

Multithreaded program only runs on a single processor after compiling, how do I troubleshoot?

I am trying to run a compiled program that is supposed to run on multiple processors. With the same data, it sometimes runs in parallel and sometimes it doesn't (with an identical PBS script file!). I suspect that something is wrong with some of the compute nodes that prevents it from running in parallel (I don't get to choose which compute node I get). How can I troubleshoot whether this is a bug in the program or a problem with the compute node?
As per the sysadmin's advice, I am using ulimit -s 100000, but this doesn't change anything. Also, this is not an MPI program (it runs only on a single node, with multiple processors).
The code that I run is as follows:
quorum_error_correct_reads -q 68 \
--contaminant=/data004/software/GIF/packages/masurca/2.3.0rc1/bin/../share/adapter.jf \
-m 1 -s 1 -g 1 -a 3 --thread=32 -w 10 -e 3 \
quorum_mer_db.jf aa.renamed.fastq ab.renamed.fastq ac.renamed.fastq ad.renamed.fastq ae.renamed.fastq af.renamed.fastq ag.renamed.fastq \
--no-discard -o pe.cor --verbose
Thanks for any advice you can offer. I will greatly appreciate your help!
PS: I don't have sudo access.
EDIT: I know it is supposed to use multiple processors because, when I SSH into the node and run top -c, I can see the above command sometimes running at around 3200% CPU the whole time and sometimes at only 100% CPU the whole time. This is the only step involved and there are no other sub-processes within this program. Also, I am using an HPC cluster, where I submit the job to a compute node, each with 32 processors and 512 GB RAM.
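For what it's worth, the check described in the edit can be scripted rather than done interactively in top (a sketch; $NODE is a placeholder for the compute node the job landed on, and it assumes ps and pgrep are available there):
# NLWP = thread count; %CPU well above 100 means it really is running in parallel
ssh $NODE 'ps -o pid,nlwp,pcpu,args -p $(pgrep -f quorum_error_correct_reads)'
# the CPUs the process is allowed to run on; a list with only one CPU would explain a 100% ceiling
ssh $NODE 'grep Cpus_allowed_list /proc/$(pgrep -of quorum_error_correct_reads)/status'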

Redirect not working correctly, 2> /dev/null becomes 2 > /dev/null and stderr doesn't get redirected

I am hoping someone can help me figure out what setting I might need to change. I am working on a Unix terminal server, running a shell in an xterm on Linux. Every time I use a command like grep "blah" 2> /dev/null at the shell prompt, the command is run as grep "blah" 2 > /dev/null and, needless to say, the redirection fails.
xterm version is X.Org 6.8.99.903(238)
I can not update or install anything, this is a locked down production server.
Thanks for any help and illumination on the topic; this is making my grep useless at high directory levels with recursion.
That's Bourne shell syntax, and it doesn't work in the C shell (csh).
The best you can do is
( command >stdout_file ) >&stderr_file
where stdout goes to one file and stderr goes to another. Redirecting just stderr is not possible in csh.
In a comment, you say "A minor note, this is csh". That's not a minor note, that's the cause of the problem. xterm is just a terminal emulator, not a shell; all it does is set up a window that provides textual input and output. csh (or bash, or ...) is the shell, the program that interprets the commands you type.
csh has different syntax for redirection, and doesn't let you redirect just stderr. command > file redirects stdout; command >& file redirects both stdout and stderr.
You say the system doesn't have bash, but it does have ksh. I suggest just using ksh; it will be a lot more familiar to you. Both bash and ksh are derived from the old Bourne shell.
All (?) Unix-like systems will have a Bourne-like shell installed as /bin/sh. Even if you're using csh (or tcsh?) as your interactive shell, you can still invoke sh, even in a one-liner. For example:
sh -c 'command 2>/dev/null'
will invoke sh, which in turn will invoke command and redirect just its stderr to /dev/null.
The purpose of an interactive shell is (mostly) to let you use other commands that are available on the system. sh, or any shell, can be used as just another command.
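Applied to the grep case from the question (a sketch; the -r flag and the search path are assumptions, since the original command isn't shown in full):
# run the recursive grep under sh so the Bourne-style 2> redirection works,
# while your interactive shell stays csh
sh -c 'grep -r "blah" . 2>/dev/null'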

strange behavior of fc -l command

I have two Unix machines, both running AIX 5.3.
My $HOME is mounted on machine1.
Using NFS, logging in to machine2 goes to the same $HOME.
I log in to machine2 first, then machine1, both using telnet.
The two sessions share the same .sh_history file.
I found that the behavior of fc -l is very strange.
On machine2, I issue these commands in telnet:
fc -l
ksh fc -l
Both give the same output.
On machine1,
fc -l
ksh fc -l
give DIFFERENT results.
The result of ksh fc -l is the same as that of /usr/bin/fc -l.
Also, when I run a script like this:
#!/usr/bin/ksh
fc -l
The result is the same as /usr/bin/fc -l.
Could anyone tell me what is happening?
Alvin SIU
Ah, wisdom of the ancients... (Since this post is over a year old.)
Anyway, I just encountered this problem in Solaris 10. The issue seems to be this: when you define a function in /etc/profile, or in any file called by /etc/profile, your HISTFILE variable gets ignored by the Korn shell, and the shell instead uses ".sh_history" when accessing its history. I am not sure why this is.
The result is that you see other root shells' commands. You can test it with:
lsof -p $$
or
cat /proc/$$/fd/63
It's possible that the login shell is not ksh or that $HISTFILE is being reset. One thing you can do is echo $HISTFILE in the various situations and see if it differs. Another thing to check is which shell you're actually in, using ps.
Bash (default $HOME/.bash_history), for example, will have a different $HISTFILE than ksh (default $HOME/.sh_history).
Another possible reason for the difference is that the builtin fc may be able to see in-memory history that hasn't been written to disk yet (which the external /usr/bin/fc wouldn't be able to see). If this is true, it may be version dependent. Bash, for example, doesn't write history to the file until the shell exits. Ksh (at least the version I'm using) writes it immediately.
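A quick way to run the checks suggested above on both machines (a sketch; whence is the ksh equivalent of type, and $$ is the PID of the current shell):
echo $HISTFILE     # which history file this shell thinks it is using
ps -p $$           # which shell is actually running
whence -v fc       # whether fc is a ksh builtin, an alias, or /usr/bin/fc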
