differences between .sage and .spyx in numerical evaluation - sage

The question seems very basic; I'm sorry, but I could not find an answer in the documentation.
The content of both files, test.sage and test.spyx, is identical; it's just
a = 1/sqrt(2)
print a
If I run test.sage with
$ sage test.sage
I get
1/2*sqrt(2)
but the outcome differs when I run test.spyx with
$ sage test.spyx
where I get
Compiling test.spyx...
0.707106781187
How can I prevent Sage from numerically evaluating 1/sqrt(2) in .spyx mode?
(I asked the question on ask.sagemath.org as well but have not received an answer yet...)
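One workaround, if the goal is just to keep the result exact inside a .spyx file: unlike .sage files, .spyx files are not run through the Sage preparser, so a minimal sketch (assuming sage.all is importable from compiled Cython code, as it normally is) is to call Sage's Integer and symbolic sqrt explicitly instead of relying on whatever sqrt is in scope:
# test.spyx -- sketch: wrap the literals yourself, since there is no preparser here
from sage.all import Integer, sqrt
a = Integer(1) / sqrt(Integer(2))  # stays symbolic instead of becoming a C double
print(a)  # expected: 1/2*sqrt(2)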

Related

What are ppearson and spearson in datamash, and how do you use them?

When I look at the documentation, there is no "correlation", but there are "ppearson" and "spearson". They are mentioned exactly once, as a "group-by statistical operation." But how exactly are they defined?
Also, when I try to use one, there is an error message, but I don't understand how to fix it. How do you use ppearson or spearson?
$ cat > foo.tsv
1^I2
2^I3
$ cat foo.tsv | datamash ppearson 1,2
datamash: operation ‘ppearson’ requires field pairs
EDIT: This documentation section says
GNU Datamash is designed to closely follow R project’s (https://www.r-project.org/) statistical functions. See the files/operators.R file for the R equivalent code for each of datamash’s operators. When building datamash from source code on your local computer, operators are compared to known results of the equivalent R functions.
Looking in R, I don't see an spearson:
> ?spearson
No documentation for ‘spearson’ in specified packages and libraries:
you could try ‘??spearson’
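For what it's worth, the error message itself hints at the fix: these operations take a colon-separated field pair (1:2), not a comma-separated field list (1,2). Following datamash's naming convention elsewhere (pvar/svar, pstdev/sstdev), ppearson and spearson are presumably the population and sample Pearson correlations. A hedged sketch on the two-column file above, where the data are perfectly linear and both should therefore print 1:
$ datamash ppearson 1:2 < foo.tsv
1
$ datamash spearson 1:2 < foo.tsv
1
The R check would be cor(c(1,2), c(2,3)), which is likewise 1.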

Array job based on R [closed]

I am trying to construct and submit an array job based on R on my university's HPC.
I'm used to submitting array jobs based on Matlab, and I have some doubts about how to translate the overall procedure to R. Let me show a very simple Matlab example and then my questions.
The code is based on 3 files:
"main" which does some preliminary operations.
"subf" which should be run by each task and uses some matrices created by "main".
a bash file which I qsub in the terminal.
1. main:
clear
%% Do all the operations that are common across tasks
% Here, as an example, I create
% 1) a matrix A that I will sum to the output of each task
% 2) a matrix grid; each task will use some rows of the matrix grid
m=1000;
A=rand(m,m);
grid=rand(m,m);
%% Tasks
tasks=10; %number of tasks
jobs=round(size(grid,1)/tasks); %I split the number of rows of the matrix grid among the tasks
2. subf:
%% Set task ID
idtemp=str2double(getenv('SGE_TASK_ID'));
%% Select local grid
if idtemp<tasks
grid_local= grid(jobs*(idtemp-1)+1: idtemp*jobs,:);
else
grid_local= grid(jobs*(idtemp-1)+1: end,:); %for the last task, we should take all the rows of grid that have been left
end
sg_local=size(grid_local,1);
%% Do the task
output=zeros(sg_local,1);
for g=1:sg_local
output(g,:)=sum(sum(A+repmat(grid_local(g,:),m,1)));
end
%% Save output by keeping track of task ID
filename = sprintf('output.%d.mat', ID);
save(filename,'output')
3. bash
#$ -S /bin/bash
#$ -l h_vmem=6G
#$ -l tmem=6G
#$ -l h_rt=480:0:0
#$ -cwd
#$ -j y
#Run 10 tasks where each task has a different $SGE_TASK_ID ranging from 1 to 10
#$ -t 1-10
#$ -N Example
date
hostname
#Output the Task ID
echo "Task ID is $SGE_TASK_ID"
export PATH=/xx/xx/matlab/bin:$PATH
matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; subf; exit"
These are my questions:
Suppose I'm able to translate "main" and "subf" into R. Should I be extra careful about anything in particular concerning the parallelisation? For example, do I have to declare some parallel environment, such as parLapply or dopar?
In the "main" file I should also install some R packages. Can I install them locally in my folder directly at the beginning of the "main" file, or should I contact the HPC administrators to install them globally?
I could not find any example of a bash file for R in the instructions given by my university, so I have doubts about how to adapt the above bash file. I suppose that the only lines to change are:
export PATH=/xx/xx/matlab/bin:$PATH
matlab -nodisplay -nodesktop -nojvm -nosplash -r "main; ID = $SGE_TASK_ID; subf; exit"
Could you give some hints on how I should change them?
The parallelization is handled by the HPC scheduler, right? In which case, I think the answer is "no", nothing special is required.
It depends on how they allow/enable R. On an HPC that I use (not your school's), the individual nodes do not have direct internet access, so installing packages requires special care; this might be the exception, I don't know.
Recommendation: if there is a shared filesystem that both you and all of the nodes can access, then create an R "library" there that contains the installed packages you need, and use .libPaths(...) in your R scripts to add it to the package search path. The only gotcha might be if there are non-R shared-library (e.g., .dll, .so, .a) requirements. For those, either "docker" or "ask admins".
If you don't have a shared filesystem, then you might ask the cluster admins if they use/prefer docker images (you might provide an image or a DOCKERFILE to create one) or if they have preferred mechanisms for enabling various packages.
I do not recommend asking them to install the packages, for two reasons: First, think about them needing to do this with every person who has a job to run, for any number of programming languages, and then realize that they may have no idea how to do it for that language. Second, package versions are very important, and you asking them to install a package may install either a too-new package or overwrite an older version that somebody else is relying on. (See packrat and renv for discussions on reproducible environments.)
Bottom line, the use of a path you control (and using .libPaths) enables you to have complete control over package versions. If you have not been bitten by unintended consequences of newer-versioned packages, just wait ... congratulations, you've been lucky.
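A minimal sketch of that pattern (the paths are illustrative, and data.table stands in for whatever packages you need; any directory you own on the shared filesystem will do):
# one-time setup, run once on a node that can reach a CRAN mirror
dir.create("/shared/home/me/Rlib", recursive = TRUE, showWarnings = FALSE)
install.packages("data.table", lib = "/shared/home/me/Rlib")
# at the top of every job script: put your library first on the search path
.libPaths(c("/shared/home/me/Rlib", .libPaths()))
library(data.table)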
I suggest adding source("main.R") at the beginning of subf.R, which would make your bash file perhaps as simple as
export PATH=/usr/local/R-4.x.x/bin:$PATH
Rscript /path/to/subf.R
(Noting that you'll need to reference Sys.getenv("SGE_TASK_ID") somewhere in subf.R.)
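Putting it together, a hedged sketch of what subf.R might look like (the names mirror your Matlab files; main.R is assumed to define A, grid, m, tasks, and jobs):
source("main.R")  # common setup shared by all tasks
idtemp <- as.integer(Sys.getenv("SGE_TASK_ID"))
# select this task's block of rows, giving the last task whatever is left
first <- jobs * (idtemp - 1) + 1
last <- if (idtemp < tasks) idtemp * jobs else nrow(grid)
grid_local <- grid[first:last, , drop = FALSE]
# do the task; sum(sum(A + repmat(r, m, 1))) in Matlab equals sum(A) + m * sum(r)
output <- sum(A) + m * rowSums(grid_local)
# save output, keeping track of the task ID
saveRDS(output, sprintf("output.%d.rds", idtemp))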

First token could not be read or is not the keyword 'FoamFile' in OpenFOAM

I am a beginner to programming. I am trying to run a simulation of a combustion chamber using reactingFoam.
I have modified the counterflow2D tutorial.
For those who may not know OpenFOAM: it is a program built in C++, but it does not require C++ programming, just properly defining the variables in the required files.
In one of my first tries I made a very simple model, but since I wanted to check it very thoroughly I set it to 60 seconds with a 1e-6 timestep.
My computer is not very powerful, so it took approximately a day (by this I mean I'd like to find a solution rather than repeat the simulation).
I executed the solver reactingFoam using 4 processors in parallel with
mpirun -np 4 reactingFoam -parallel > log
The log does not show any evidence of error.
The problem is that reconstructPar works perfectly, but when I then try to view the results with paraFoam, this error is shown:
From function bool Foam::IOobject::readHeader(Foam::Istream&)
in file db/IOobject/IOobjectReadHeader.C at line 88
Reading "mypath/constant/reactions" at line 1
First token could not be read or is not the keyword 'FoamFile'
I have read that maybe some files are empty when they are not supposed to be, but I have not found that problem.
My 'reactions' file has not been modified from the tutorial and has always worked.
Edit:
Sorry for the vague question. I have modified it a bit.
A typical OpenFOAM dictionary file always starts with a header dictionary named FoamFile. An example from a typical system/controlDict file can be seen below:
FoamFile
{
version 2.0;
format ascii;
class dictionary;
location "system";
object controlDict;
}
While reading the dictionary, if this header is absent, OpenFOAM stops with the error message that you have experienced:
First token could not be read or is not the keyword 'FoamFile'
The header presumably supports OpenFOAM's I/O abstraction mechanisms by declaring the version, format, class, and location of each object, which would be difficult to infer otherwise.
As mentioned in the comments, adding the header entity almost always solves this problem.
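For the file named in the error, the header would look something like this (the location and object values below are inferred from the path mypath/constant/reactions; compare with an unmodified tutorial case to be sure):
FoamFile
{
version 2.0;
format ascii;
class dictionary;
location "constant";
object reactions;
}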

UNIX - Finding all empty files in source directory & finding files edited X days ago

The following is from a school assignment.
The question asks
"What command enables you to find all empty files in your source
directory."
My answer: find -size 0. However, my instructor says that my answer is incorrect. The only hint (regarding the entirety of the assignment) he gives me is "...minor errors such as missing a file name or outputting too much information". I was thinking that perhaps I should include the source directory within my find command.
I've been trying to figure this out for the past few hours. I've referenced my textbook, and according to that I should be correct.
There are some other questions I'm having similar issues with. I've wracked my brain over this for hours. I just don't know. I don't understand what I'm doing wrong.
Since your assignment was to find all empty files in your source directory, the following command will do exactly what you want:
find . -size 0
Notice the dot (.) to tell the command to search in the current folder.
For other folders, you replace the "dot" with the folder you want.
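If the hint about "outputting too much information" refers to matching directories as well as regular files, a stricter variant (a sketch: ~/source is a guess at what your course calls the source directory, and -empty is a GNU find extension) would be:
find ~/source -type f -size 0
find ~/source -type f -empty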

Is there a working distribution of sqlite available for OpenVMS?

I am looking for a working distribution of SQLite for OpenVMS. I tried building SQLite 3.7.9 from the amalgamation file, using patches I found in a mailing list, but it does not quite work.
I am using HP C V7.1-015 on OpenVMS Alpha 7.3-2.
Since I cannot install Python, which seems to include SQLite3, I have to build from source.
I compile using the following commands:
$ CC /OPTIMIZE -
/DEFINE=(SQLITE_THREADSAFE=0, -
SQLITE_OMIT_LOAD_EXTENSION=1, -
SQLITE_OMIT_COMPILEOPTION_DIAGS=1, -
SQLITE_OMIT_MEMORYDB=1, -
SQLITE_OMIT_TEMPDB=1, -
SQLITE_OMIT_DEPRECATED=1, -
SQLITE_OMIT_SHARED_CACHE=1, -
_USE_STD_STAT=ENABLE) -
/FLOAT=IEEE_FLOAT -
sqlite3.c
$ CC /OPTIMIZE -
/DEFINE=(SQLITE_THREADSAFE=0, -
SQLITE_OMIT_LOAD_EXTENSION=1, -
SQLITE_OMIT_COMPILEOPTION_DIAGS=1, -
SQLITE_OMIT_MEMORYDB=1, -
SQLITE_OMIT_TEMPDB=1, -
SQLITE_OMIT_DEPRECATED=1, -
SQLITE_OMIT_SHARED_CACHE=1, -
_USE_STD_STAT=ENABLE) -
/FLOAT=IEEE_FLOAT -
shell.c
I copied the defines from the mailing list, and added /FLOAT=IEEE_FLOAT to get rid of most warnings regarding floating points (related to overflows due to exponent 308).
During compilation I got some informational messages and warnings.
I get the following messages while linking:
$ LINK shell.obj,sqlite3.obj
...
%LINK-W-NUDFSYMS, 2 undefined symbols:
%LINK-I-UDFSYM, __STD_FSTAT
%LINK-I-UDFSYM, __STD_STAT
...
Since I am a little bit lost here, I would rather have SQLite3 sources that compile on OpenVMS.
The specific problem you're getting from the linker arises from the fact that you've requested a capability at compile time that your system doesn't have. I believe the _USE_STD_STAT option first became available in OpenVMS v8.2, yet you're on 7.3-2. Your compiler and your headers know what to do when _USE_STD_STAT is defined, but the functions to process the X/Open-compliant stat structure do not exist in the C run-time (CRTL in VMS parlance) on your system, and your linker is telling you, "ain't got those functions."
Ideally you would be able to upgrade your operating system. Current as of this writing is v8.4; v7.3-2 was released eight and a half years ago and v8.2 over seven years ago. I understand that there are technical, budgetary, and even political reasons that upgrades aren't always possible. If it were me, and I were stuck on OpenVMS Alpha v7.3-2, I would try removing _USE_STD_STAT=ENABLE from the compilation and see what blows up.
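That retry is just the original command with the offending define dropped (and likewise for shell.c):
$ CC /OPTIMIZE -
/DEFINE=(SQLITE_THREADSAFE=0, -
SQLITE_OMIT_LOAD_EXTENSION=1, -
SQLITE_OMIT_COMPILEOPTION_DIAGS=1, -
SQLITE_OMIT_MEMORYDB=1, -
SQLITE_OMIT_TEMPDB=1, -
SQLITE_OMIT_DEPRECATED=1, -
SQLITE_OMIT_SHARED_CACHE=1) -
/FLOAT=IEEE_FLOAT -
sqlite3.c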
One of the side effects of enabling _USE_STD_STAT is that you also get _LARGEFILE along with it. If that's the only reason SQLite needs the option, you may be fine but limited to 4GB databases. I suspect there's more to it than that, i.e., SQLite very likely makes use of elements in the stat structure that do actually require the newer structure.
You can read up on the differences in the traditional and standards-compliant stat structures at http://h71000.www7.hp.com/doc/84final/5763/5763profile_062.html#index_x_1699.
I've recently improved my VMSish patch for SQLite and made it available for SQLite version 3.7.14.1: http://www.mail-archive.com/sqlite-users#sqlite.org/msg73570.html (or http://sqlite.1065341.n5.nabble.com/Building-SQLite-3-7-14-1-for-OpenVMS-td65277.html).
POSIX locking still doesn't work though, and I was unable to find out why.
Well, there was a message on the sqlite-users mailing list on getting SQLite 3.7.9 working on OpenVMS. I don't know how relevant that is to the version you've got (or if the patch was adopted by the SQLite developers; they're a bit picky for legal reasons IIRC) but it looks likely to be useful. Good luck.
