I have a function in a script but I can't find who is calling it - unix

I have a function in a script, but I can't find who is calling it.
I tried using grep but couldn't find the caller; maybe it is in a different path.
How can I get it from Tcl?
For example, in csh there is an option to use "echo $0".
I'm using Linux.

You could try using info frame -1:
proc foo {} {
    bar
}
proc bar {} {
    puts [info frame -1]
}
foo
# => type proc line 1 cmd bar proc ::foo level 1
So from inside bar you could read that result as a dict, e.g. dict get [info frame -1] proc, and you'll get ::foo, meaning the proc which called bar is foo (in the global namespace).
EDIT: Something you may try to get all the commands executed along the way:
proc a {} {b}
proc b {} {c}
proc c {} {
    for {set i 1} {$i < [info level]} {incr i} {
        puts [info frame -$i]
    }
}
a
# => type proc line 1 cmd c proc ::b level 1
# type proc line 1 cmd b proc ::a level 2

You can use grep to walk through all the files:
grep -l -R name_of_the_function /
This will print the names of the files in which the function is called or defined.
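For a faster, more focused search, grep can be limited to a subtree and to script files; here is a small self-contained sketch (the /tmp paths, file names, and .tcl extension are made up for illustration):

```shell
# Set up a tiny example tree: one file defines the function, one calls it.
mkdir -p /tmp/grepdemo
printf 'proc my_func {} {}\n' > /tmp/grepdemo/lib.tcl
printf 'my_func\n' > /tmp/grepdemo/caller.tcl

# -l prints only file names, -R recurses, --include limits the search
# to files matching the given glob.
grep -l -R --include='*.tcl' 'my_func' /tmp/grepdemo | sort
# → /tmp/grepdemo/caller.tcl
# → /tmp/grepdemo/lib.tcl
```

Searching / recursively works but can take a very long time; narrowing the starting directory and the file glob usually finds the caller much faster.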
Of course there is another way: activate the audit daemon, set read operations on that particular file (the one where the function is defined) to be monitored, and wait. From time to time check the audit logs and you will see the process, user, and file that opened the file in question for reading.
If you want to use audit, you can check these articles 1, 2 in my personal blog.


path not being detected by Nextflow

I'm new to nf-core/Nextflow and, needless to say, the documentation does not reflect what might actually be implemented. But I'm defining the basic pipeline below:
nextflow.enable.dsl=2

process RUNBLAST {
    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I execute the pipeline with
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
and I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried with "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline-building process.
Shouldn't Nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
    query = file(params.query)
    BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'

db_name = file(params.db).name
db_path = file(params.db).parent

process BLAST {

    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    script:
    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {
    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}

How to expand variable value to build a

Sorry if this is covered somewhere; I looked for a good 30-60 minutes for something along these lines. I am sure I just missed something! Total jq noob!
Basically I am trying to do a pick operation that is dynamic. My thought process was to do something like this:
pickJSON() {
    getSomeJSON | jq -r --arg PICK "$1" '{ $PICK }'
}
pickJSON "foo, bar"
but this produces
{ "PICK": "foo, bar" }
Is there a way to essentially ask it to expand shell-style?
Desired Result:
pickJSON() {
    getSomeJSON | jq -r --arg PICK "$1" '{ $PICK }'
    # perhaps something like...
    # getSomeJSON | jq -r --arg PICK "$1" '{ ...$PICK }'
}
pickJSON "foo, bar"
{ "foo": "foovalue", "bar": "barvalue" }
Note that I am new to jq and I just simplified what I am doing - if the syntax is broken, that is why :-D My actual implementation has a few pipes in there, and it does work if I don't try to pick the values out of it.
After a fairly long experimentation phase trying to make this work, I finally came up with what seems like a feasible and reliable solution, without the extremely unsettling flaws that come from using eval.
To better highlight the overall final solution, I am providing a bit more of the handling that I am currently working with below:
Goal
Grab a secret from AWS Secrets Manager
Parse the returned JSON, which looks like this:
{
    "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyTestDatabaseSecret-a1b2c3",
    "Name": "MyTestDatabaseSecret",
    "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
    "SecretString": "{\n \"username\":\"david\",\n \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
    "VersionStages": [
        "AWSPREVIOUS"
    ],
    "CreatedDate": 1523477145.713
}
Run some modifications on the JSON string received and pick only the statically requested keys from the secret
Set and export those values as environment variables
Script
# Capture an AWS secret from Secrets Manager, parse the JSON, and pick the
# given keys from the secret, returning only the requested portion of it.
# #note similar to _.pick(obj, ["foo", "bar"])
getKeysFromSecret() {
    aws secretsmanager get-secret-value --secret-id "$1" \
        | jq -r '.SecretString | fromjson' \
        | jq -r "{ $2 }"
}
# Uses `getKeysFromSecret` to capture the requested keys from the secret
# then transforms the JSON into a string that we can read and loop through
# to set each resulting value as an exported environment variable.
#
## Transformation Flow:
# { "foo": "bar", "baz": "qux" }
# -->
# foo=bar
# baz=qux
# -->
# export foo=bar
# export baz=qux
exportVariablesInSecret() {
    while IFS== read -r key value; do
        if [ -n "$value" ]; then
            export "${key}"="${value}"
        fi
    done < <(getKeysFromSecret "$1" "$2" | jq -r 'to_entries | .[] | .key + "=" + .value')
}
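The read/export half of the function can be exercised on its own, with a hard-coded key=value stream standing in for the AWS/jq pipeline (the function name and values below are made up):

```shell
#!/bin/bash
# Same loop shape as exportVariablesInSecret: IFS== splits each line on "=",
# empty values are skipped, and each pair is exported. Process substitution
# (rather than a pipe) keeps the exports in the current shell.
exportLines() {
    while IFS== read -r key value; do
        if [ -n "$value" ]; then
            export "${key}"="${value}"
        fi
    done < <(printf 'foo=bar\nbaz=qux\n')
}
exportLines
echo "$foo $baz"
# → bar qux
```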
Example JSON
{
    ...othervalues
    "SecretString": "{\"foo\": \"bar\", \"baz\": \"qux\"}"
}
Example Usage
exportVariablesInSecret MY_SECRET "foo, bar"
echo $foo
# bar
Some Notes / Context
This is meant to set a given set of values as variables, so that we aren't setting an entire arbitrary JSON object as variables, which could cause issues/shadowing if someone adds a value like "path" to a secret.
A critical goal was to absolutely never use eval, to prevent possible injection situations; it is far too easy to inject things otherwise.
Happy to see if anyone has a nicer way of accomplishing this. I saw many people recommending declare, but that sets the variable in the local function scope only, so it's essentially useless here.
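For what it's worth, the dynamic pick can also be done in jq alone, without interpolating "$2" into the program text (which quietly reopens the injection door that avoiding eval was meant to close). On jq 1.6+, --args exposes trailing arguments as $ARGS.positional; the function name and sample JSON below are illustrative:

```shell
# Pick only the named keys from a JSON object, passing the key names as
# jq arguments instead of splicing them into the program string.
pickJSON() {
    local json=$1; shift
    printf '%s' "$json" \
        | jq -c '. as $in | reduce $ARGS.positional[] as $k ({}; .[$k] = $in[$k])' --args "$@"
}
pickJSON '{"foo":"foovalue","bar":"barvalue","baz":"quux"}' foo bar
# → {"foo":"foovalue","bar":"barvalue"}
```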
Thanks to @cas https://unix.stackexchange.com/a/413917/308550 for getting me on the right track!

Default Representation/Drawing method in VMD

In VMD I want to load every new file with the drawing method CPK. This doesn't seem to be an option in the .vmdrc file, for some technical reasons.
How can I do this from the VMD command line (so that I can make a script)?
Or is there some other solution/workaround/hack to make this work?
There are several ways to achieve what you want:
(1) put the following line in the correct location of your .vmdrc
mol default style CPK
(2) use the VMD Preferences Panel (last item in the Extensions menu of the main window) to generate a .vmdrc file that meets your expectations. The setting you're looking for is in the Representations tab.
(3) for more advanced settings (i.e. default settings applied to molecules already loaded when VMD reads the startup .vmdrc file), you can use the following (works for me on VMD 1.9.2):
proc reset_viz {molid} {
    # operate only on existing molecules
    if {[lsearch [molinfo list] $molid] >= 0} {
        # delete all representations; always delete index 0,
        # since the remaining indices shift down after each deletion
        set numrep [molinfo $molid get numreps]
        for {set i 0} {$i < $numrep} {incr i} {
            mol delrep 0 $molid
        }
        # add new representations
        mol representation CPK
        # add other representation stuff you want here
        mol addrep $molid
    }
}

proc reset_viz_proxy {args} {
    foreach {fname molid rw} $args {}
    eval "after idle {reset_viz $molid}"
}

## put a trace on vmd_initialize_structure
trace variable vmd_initialize_structure w reset_viz_proxy

after idle {
    if { 1 } {
        foreach molid [molinfo list] {
            reset_viz $molid
        }
    }
}
This piece of code is adapted from this Axel Kohlmeyer website.
HTH,
I found a convenient solution.
In .bashrc add:
vmda () {
    echo -e "
mol default style CPK
user add key Control-w quit
" > /tmp/vmdstartup
    echo "mol new $1" > /tmp/vmdcommand
    vmd -e /tmp/vmdcommand -startup /tmp/vmdstartup
}
Look at a structure with
vmda file.pdb
and close the window (quit the application) with Ctrl+w, like other windows.

How to log data of a call

I want to log data from the Asterisk command line. But the criterion is that I want to log the data for calls separately, i.e. log the data for each call in a separate file.
Is there a way to do that?
In case there is no built-in feature in Asterisk to do this, here is a bash solution:
#!/bin/bash
pathToLogFile=/path/to/log/file
echo "0" > /tmp/numberoflines
while true
do
    NUMBER=$(cat /tmp/numberoflines)
    LINECOUNT=$(wc -l < "$pathToLogFile")
    DIFFERENCE=$((LINECOUNT - NUMBER))
    if [ "$DIFFERENCE" != 0 ]; then
        mapfile -t lines < <(tail -n "$DIFFERENCE" "$pathToLogFile")
        for line in "${lines[@]}"; do
            callID=$(expr "$line" : 'CALLID_REGEX (see below)')
            echo "$line" >> "/path/to/log/directory/$callID"
        done
    fi
    sleep 5
    echo "$LINECOUNT" > /tmp/numberoflines
done
Untested - it should be used to get an idea of how to solve this problem.
The regular expression would normally be /\[(C\d{8})\]/. Sadly I don't know the equivalent syntax in bash, I'm sorry - you have to convert it into bash syntax yourself.
The idea is: remember the last line in the logfile that was processed by the script. Check the line count of the log file. If there are more lines than remembered, walk through the new lines and extract the call ID at the beginning of each line (format: C********, i.e. a C followed by an 8-digit number). Then append the whole line to the end of a log file whose name is the extracted call ID.
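The PCRE-style /\[(C\d{8})\]/ translates to bash's [[ =~ ]] operator with POSIX character classes (\d becomes [0-9]); a sketch with a made-up log line:

```shell
#!/bin/bash
# Extract a call ID of the form [Cxxxxxxxx] (a C followed by 8 digits) from
# a log line; the captured group lands in BASH_REMATCH[1].
line='[C00000123] Answered SIP call'
if [[ $line =~ \[(C[0-9]{8})\] ]]; then
    callID=${BASH_REMATCH[1]}
    echo "$callID"
fi
# → C00000123
```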
EDIT: Information about the call ID (don't confuse it with the caller ID): https://wiki.asterisk.org/wiki/display/AST/Unique+Call-ID+Logging

Invoke custom function in zsh completion?

I can't seem to figure out a way to write a zsh completion function that I can tap into to provide, as a return result, the available items. For example, I'd like to be able to call out to a web service and return an array of potentials.
I've tried something like this:
#compdef test
local arguments
_run(){
    reply=(1 2 3)
}
arguments=(
    '--test[foo]:bar:_run'
)
_arguments -s $arguments
If I put an echo in the _run function I can see it getting executed, but zsh always says there are no matches.
It took me a while to figure this out (and only because I stole it from the brew zsh completions file):
#compdef test
local arguments
_run(){
    val=(1 2 3)
    _wanted val expl "Items" compadd -a val
}
_biz(){
    val=(4 5 6)
    _wanted val expl "Biz" compadd -a val
}
local expl
local -a val
arguments=(
    '--test[foo]:bar:_run'
    '--biz[boo]:boo:_biz'
)
_arguments $arguments
Now you can do
$ test --test
-- Items --
1 2 3
and
$ test --test 2 --biz 4
-- Biz --
4 5 6
