I have a Python script that needs to be called from within an ExtendScript script. Is there any library function available which can do this? I tried finding a solution in the documentation and many other online resources, but nothing has worked for me so far. Any help is appreciated.
ExtendScript File objects have an execute() method, as already shown by Fabian.
I did not know about .term files and would have used a .command file instead, which is a plain shell script rather than XML.
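For reference, here is a minimal sketch of that .command variant, assuming a hypothetical Python script at /path/to/my_script.py (adjust the path; macOS may also require the execute bit to be set on the .command file before Terminal will run it):
var scriptFolder = File($.fileName).path;               // folder the ExtendScript file lives in
var cmdFile = new File(scriptFolder + "/run_python.command");
cmdFile.open("w");
cmdFile.writeln("#!/bin/sh");
cmdFile.writeln("python /path/to/my_script.py");        // hypothetical script location
cmdFile.close();
cmdFile.execute();                                      // opens the .command file, i.e. runs it in Terminal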
If running within InDesign, you can bypass Terminal.app and use AppleScript:
myScript = 'do shell script "diff f1 f2 > o" ';
app.doScript(myScript, ScriptLanguage.applescriptLanguage);
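The same doScript route can call Python directly; this is only a sketch, with a hypothetical script path and output file:
var myScript = 'do shell script "python /path/to/my_script.py > /tmp/py_output.txt"';  // hypothetical paths
app.doScript(myScript, ScriptLanguage.applescriptLanguage);
You can then read /tmp/py_output.txt back with a File object if you need the result inside ExtendScript.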
Take a look at this example.
It creates a .term file next to the script file and executes it.
This is a cleaned version:
main();

function main() {
    var script_file = File($.fileName); // get the full path of the script
    var script_folder = script_file.path; // get the folder it lives in
    var new_termfile = createTermFile("execute_something", script_folder);
    new_termfile.execute(); // now execute the .term file
}
/**
 * Creates a .term file.
 * @param {String} term_file_name --> the name for the .term file
 * @param {String} path --> the path to the script file
 * @return {File} the created termfile
 */
function createTermFile(term_file_name, path) {
    /*
    http://docstore.mik.ua/orelly/unix3/mac/ch01_03.htm
    1.3.1.1. .term files
    You can launch a customized Terminal window from the command line by saving some prototypical Terminal settings to a .term file,
    then using the open command to launch the .term file (see "open" in Section 1.5.4, later in this chapter).
    You should save the .term file someplace where you can find it later, such as ~/bin or ~/Documents.
    If you save it in ~/Library/Application Support/Terminal, the .term file will show up in Terminal's File -> Library menu.
    To create a .term file, open a new Terminal window, and then open the Inspector (File -> Show Info, or Cmd-I)
    and set the desired attributes, such as window size, fonts, and colors. When the Terminal's attributes
    have been set, save the Terminal session (File -> Save, or Cmd-S) to a .term file (for example, ~/Documents/proto.term).
    Now, any time you want to launch a Terminal window from the command line, you can issue the following command:
    */
    var termfile = new File(path + "/" + term_file_name + ".term");
    termfile.open("w");
    termfile.writeln(
        "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" +
        "<!DOCTYPE plist PUBLIC \"-//Apple Computer//DTD PLIST 1.0//EN\" " +
        "\"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n" +
        "<plist version=\"1.0\">\n" +
        "<dict>\n" +
        "<key>WindowSettings</key>\n" +
        "<array>\n" +
        "<dict>\n" +
        "<key>CustomTitle</key>\n" +
        "<string>My first termfile</string>\n" +
        "<key>ExecutionString</key>\n" +
        "<string>python\nprint \"Hello World\"\nexit()</string>\n" +
        "</dict>\n" +
        "</array>\n" +
        "</dict>\n" +
        "</plist>\n");
    termfile.close();
    return termfile;
}
I'm new to nf-core/Nextflow and, needless to say, the documentation does not always reflect what is actually implemented. I'm defining the basic pipeline below:
nextflow.enable.dsl=2
process RUNBLAST {
    input:
    val thr
    path query
    path db
    path output

    output:
    path output

    script:
    """
    blastn -query ${query} -db ${db} -out ${output} -num_threads ${thr}
    """
}

workflow {
    //println "I want to BLAST $params.query to $params.dbDir/$params.dbName using $params.threads CPUs and output it to $params.outdir"
    RUNBLAST(params.threads, params.query, params.dbDir, params.output)
}
Then I'm executing the pipeline with
nextflow run main.nf --query test2.fa --dbDir blast/blastDB
Then I get the following error:
N E X T F L O W ~ version 22.10.6
Launching `main.nf` [dreamy_hugle] DSL2 - revision: c388cf8f31
Error executing process > 'RUNBLAST'
Caused by:
Not a valid path value: 'test2.fa'
Tip: you can replicate the issue by changing to the process work dir and entering the command bash .command.run
I know test2.fa exists in the current directory:
(nfcore) MN:nf-core-basicblast jraygozagaray$ ls
CHANGELOG.md conf other.nf
CITATIONS.md docs pyproject.toml
CODE_OF_CONDUCT.md lib subworkflows
LICENSE main.nf test.fa
README.md modules test2.fa
assets modules.json work
bin nextflow.config workflows
blast nextflow_schema.json
I also tried with "file" instead of path, but that is deprecated and raises other kinds of errors.
It would be helpful to know how to fix this so I can get started with the pipeline-building process.
Shouldn't nextflow copy the file to the execution path?
Thanks
You get the above error because params.query is not actually a path value. It's probably just a simple String or GString. The solution is to instead supply a file object, for example:
workflow {
    query = file(params.query)
    BLAST( query, ... )
}
Note that a value channel is implicitly created by a process when it is invoked with a simple value, like the above file object. If you need to be able to BLAST multiple query files, you'll instead need a queue channel, which can be created using the fromPath factory method, for example:
params.query = "${baseDir}/data/*.fa"
params.db = "${baseDir}/blastdb/nt"
params.outdir = './results'

db_name = file(params.db).name
db_path = file(params.db).parent

process BLAST {

    publishDir(
        path: "${params.outdir}/blast",
        mode: 'copy',
    )

    input:
    tuple val(query_id), path(query)
    path db

    output:
    tuple val(query_id), path("${query_id}.out")

    """
    blastn \\
        -num_threads ${task.cpus} \\
        -query "${query}" \\
        -db "${db}/${db_name}" \\
        -out "${query_id}.out"
    """
}

workflow {

    Channel
        .fromPath( params.query )
        .map { file -> tuple(file.baseName, file) }
        .set { query_ch }

    BLAST( query_ch, db_path )
}
Note that the usual way to specify the number of threads/CPUs is with the cpus directive, which can be configured using a process selector in your nextflow.config. For example:
process {
    withName: BLAST {
        cpus = 4
    }
}
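For completeness, a hypothetical invocation that overrides the default params above might look like this (the paths are placeholders; quote the glob so the shell doesn't expand it before Nextflow sees it):
nextflow run main.nf --query './queries/*.fa' --db './blastdb/nt' --outdir './results'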
I want to execute Java code from R. I used the rJava package and was able to execute simple Java code, such as creating an object or printing to the screen.
require("rJava")
.jinit()
test <- new(J("java.lang.String"), "Hello World!")
However, what I want to do is send a data frame from R or a CSV file, execute some code in Java, and then return the output file to R. At the same time, it is difficult in my case to call the R code from Java, as I want to process the CSV file first in R, then apply the Java code to it and return the result to R to complete the analysis.
I'd go the following way here:
1. Process the CSV file inside R.
2. Save the result somewhere and make sure you know its explicit location (e.g. /home/user/some_csv_file.csv).
3. Create an adapter class in Java that has a method String processFile(String file).
4. Inside processFile, read the file, pass it to your Java code and do the Java-based processing.
5. Store the output file somewhere and return its location.
6. Inside R, get the result of the processFile method and do further processing in R.
At least, that's what I'd do as a first draft of a solution for your problem.
Update
We need Java file
// sample/Adapter.java
package sample;

public class Adapter {

    public String processFile(String file) {
        System.out.println("I am processing file: " + file);
        return "new_file_location.csv";
    }

    public static void main(String[] arg) {
        Adapter adp = new Adapter();
        System.out.println("Result: " + adp.processFile("initial_file.csv"));
    }
}
We have to compile it
> mkdir target
> javac -d target sample/Adapter.java
> java -cp target sample.Adapter
I am processing file: initial_file.csv
Result: new_file_location.csv
> export CLASSPATH=`pwd`/target
> R
We have to call it from R
> library(rJava)
> .jinit()
> obj <- .jnew("sample.Adapter")
> s <- .jcall(obj, returnSig="Ljava/lang/String;", method="processFile", 'initial_file')
> s
I am processing file: initial_file
> s
[1] "new_file_location.csv"
And your source directory looks like this
.
├── sample
│   └── Adapter.java
└── target
    └── sample
        └── Adapter.class
In processFile you can do whatever you like and call your existing Java code.
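Purely as an illustration (the file handling and naming scheme here are hypothetical), processFile could read the CSV that R wrote, transform it, and return the location of the new file:
public String processFile(String file) {
    // hypothetical naming scheme for the output file
    String outPath = file.replace(".csv", "_processed.csv");
    try (java.io.BufferedReader in = new java.io.BufferedReader(new java.io.FileReader(file));
         java.io.PrintWriter out = new java.io.PrintWriter(outPath)) {
        String line;
        while ((line = in.readLine()) != null) {
            // put your real Java processing here; this just upper-cases each row
            out.println(line.toUpperCase());
        }
    } catch (java.io.IOException e) {
        throw new RuntimeException(e);
    }
    return outPath; // this string becomes the return value of .jcall() in R
}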
I have a simple PowerShell script in which I try to write some log data.
The line that's not working:
Add-Content -Path C:\temp\logg.txt -Value $file.FullName
I placed this line in the middle of the script, but I never get any text in logg.txt. The script otherwise executes fine; it compresses some files into a zip file.
This is on a Windows server.
If I run the same script on my local machine (Win7), however, it does work!?
So I think it must be something with the server, but I can't figure it out. I am using -ExecutionPolicy Bypass when I run the script.
EDIT: more info.
The script is called from an ASP.NET webpage:
Process.Start("Powershell.exe", "-ExecutionPolicy Bypass -Command ""& {" & destinationFolderRoot.Replace("\\", "\") & "\scriptParameters.ps1}""")
BaseZipScript:
function New-Zip
{
    param([string]$zipfilename)
    # write the header of an empty zip archive ("PK" plus the end-of-central-directory record)
    Set-Content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
    (dir $zipfilename).IsReadOnly = $false
}

function Add-Zip
{
    param([string]$zipfilename)

    if (-not (Test-Path($zipfilename)))
    {
        Set-Content $zipfilename ("PK" + [char]5 + [char]6 + ("$([char]0)" * 18))
        (dir $zipfilename).IsReadOnly = $false
    }

    $shellApplication = New-Object -COM Shell.Application
    $zipPackage = $shellApplication.NameSpace($zipfilename)

    foreach ($file in $input)
    {
        # CopyHere is asynchronous, hence the sleep after each file
        $zipPackage.CopyHere($file.FullName)
        Add-Content -Path "C:\temp\logg.txt" -Value $file.FullName
        Start-Sleep -Milliseconds 750
    }
}
scriptParameters.ps1:
. "C:\temp\multiDownload\baseZipScript.ps1"
New-Zip c:\temp\multiDownload\user379\2016-09-15_14.39\files_2016-09-15_14.39.zip
dir c:\temp\multiDownload\user379\2016-09-15_14.39\tempCopy\*.* -Recurse |
Add-Zip c:\temp\multiDownload\user379\2016-09-15_14.39\files_2016-09-15_14.39.zip
Start-sleep -milliseconds 250
exit
As I said earlier, the script works on the local machine and also on the server, e.g. the files are being zipped, except that on the server nothing is logged to logg.txt.
The logg.txt file does exist on the server, though, because the webpage also logs some info there and that is being written.
Update:
As the comments guessed, it was as simple as write access to the file. It is kind of weird, though, because all the files are created by the web service if they don't exist. Anyhow, it works now. Thanks! :)
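For anyone who hits the same thing: the failed write was likely invisible because Add-Content raises a non-terminating error and there is no console to show it when the script runs under the web server. A minimal sketch (same log path as above; the fallback file name is hypothetical) that makes such failures visible:
try {
    Add-Content -Path "C:\temp\logg.txt" -Value $file.FullName -ErrorAction Stop
}
catch {
    # hypothetical fallback: record the error somewhere the app pool account can write to
    Add-Content -Path "C:\temp\logg_error.txt" -Value $_.Exception.Message -ErrorAction SilentlyContinue
}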
I am trying to redefine the \nomencl_command in LyX, in order to be able to use the glossaries package instead of the obsolete nomencl.
LyX allows you to specify the Nomenclature command, which by default is set to:
makeindex -s nomencl.ist
For glossaries the command is thus changed to:
makeglossaries
However, the LyX implementation of nomencl uses the more recent .nlo as an input file and .nls as an output file, whereas glossaries uses the 'older' .glo and .gls. Unfortunately, the extensions cannot be specified.
I found that the preferences file only says:
\nomencl_command "makeglossaries"
but the log output says:
makeglossaries "[filename].nlo" -o [filename].nls
So my question is where is \nomencl_command defined further?
The relevant code is in src/LaTeX.cpp.
Note below that some diagnostic information is written to the latex debug flag. You can see this info on the terminal if you run LyX with lyx -dbg latex.
The following are excerpts from the file src/LaTeX.cpp from the soon-to-be-released (a matter of days) LyX 2.1.
FileName const nlofile(changeExtension(file.absFileName(), ".nlo"));
// If all nomencl entries are removed, nomencl writes an empty nlo file.
// DepTable::hasChanged() returns false in this case, since it does not
// distinguish empty files from non-existing files. This is why we need
// the extra checks here (to trigger a rerun). Cf. discussions in #8905.
// FIXME: Sort out the real problem in DepTable.
if (head.haschanged(nlofile) || (nlofile.exists() && nlofile.isFileEmpty()))
rerun |= runMakeIndexNomencl(file, ".nlo", ".nls");
FileName const glofile(changeExtension(file.absFileName(), ".glo"));
if (head.haschanged(glofile))
rerun |= runMakeIndexNomencl(file, ".glo", ".gls");
and
bool LaTeX::runMakeIndexNomencl(FileName const & file,
string const & nlo, string const & nls)
{
LYXERR(Debug::LATEX, "Running MakeIndex for nomencl.");
message(_("Running MakeIndex for nomencl."));
string tmp = lyxrc.nomencl_command + ' ';
// onlyFileName() is needed for cygwin
tmp += quoteName(onlyFileName(changeExtension(file.absFileName(), nlo)));
tmp += " -o "
+ onlyFileName(changeExtension(file.toFilesystemEncoding(), nls));
Systemcall one;
one.startscript(Systemcall::Wait, tmp, path);
return true;
}
and
// nomencl file
FileName const nls(changeExtension(file.absFileName(), ".nls"));
nls.removeFile();
// nomencl file (old version of the package)
FileName const gls(changeExtension(file.absFileName(), ".gls"));
gls.removeFile();
Can anyone tell me how to run an SSIS package with PowerShell on the server from ASP.NET? Please note I also need to specify the config file path.
var processStartInfo = new ProcessStartInfo
{
    Arguments = "dtexec /F " + "\"" + pkgPath + packageName + "\"",
    FileName = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
};
Process process = Process.Start(processStartInfo);
How do I specify the config file path in the above code?
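dtexec accepts a /Conf (configuration file) switch, so one way, sketched here with a hypothetical configPath variable pointing at the .dtsConfig file, is to append it to the arguments:
// configPath is hypothetical -- set it to the full path of your .dtsConfig file
var processStartInfo = new ProcessStartInfo
{
    Arguments = "dtexec /F \"" + pkgPath + packageName + "\" /Conf \"" + configPath + "\"",
    FileName = "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
};
Process process = Process.Start(processStartInfo);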