How to plot Bode diagrams under Scilab / Xcos 6.1?

I use Scilab / Xcos in my teaching, among other things with the CPGE ATOMS module: https://atoms.scilab.org/toolboxes/CPGE
This module has not yet been updated for version 6.1 and only works for version 5.5.2.
Most of its functionality is available in other modules, but one feature exists only in this module: the one that draws Bode diagrams.
Do you know how to get this functionality under Xcos in version 6?

The REP_FREQ block used to do the job, and it is still provided in the CPGE release for Scilab 6.0.

Related

Fortran plotting without Gnuplot

I am new to Fortran and am trying to learn how to do simple plots. I already have a program that creates a file of the values that I'm looking to test out in a simple plotting exercise, but every example I've seen so far uses gnuplot. As the computer I'm using is not a personal computer, installing or downloading gnuplot is not really the easiest option at first glance.
Would it be correct to assume that without gnuplot, plotting using Fortran 90 is very difficult?
Fortran is a general-purpose programming language. It is designed to work on any type of computer, even those without any screen or operating system (with some newer possibilities to interact with an OS if one is present).
Languages like Fortran, C, or C++ cannot directly do any graphical output or plotting. They require external libraries, written in a system-specific way, to interact with the graphical interface. Such libraries are available for Fortran, but using them is not trivial. It is much (MUCH!) harder than installing gnuplot, if you already know how to use gnuplot.
I will not recommend any such libraries as it is off-topic here.
You can use gtk-fortran. It is a GTK / Fortran binding, and it also offers an interface to PLplot:
https://github.com/vmagnin/gtk-fortran/wiki
But you need a Fortran 2003 compliant compiler (which is the case for all recent compilers).
Plotting with Fortran is generally not easy because you need to install such libraries and learn how they work.

Pareto frontier generation for a multi-objective problem using OpenMDAO 1.x?

I am new to the OpenMDAO framework and currently using version 1.5.0. I'm interested in generating a Pareto front for the Zitzler–Deb–Thiele (ZDT) test functions with it.
I found a solution for the legacy version here which uses 'pareto_filter', but was unable to locate an equivalent in the new version.
So how do I set up a multi-objective problem to generate a Pareto front in version 1.x?
Thanks to all.
You should be able to use NSGA2 from pyopt-sparse directly in OpenMDAO. You just install the pyopt-sparse package, and OpenMDAO has a built-in driver that will let you use it. Then you pick NSGA2 as your optimizer.
The only issue is that, if you look at the source, that driver is currently labeled as single-objective, so you should change that line to True in order to specify multiple objectives.
We haven't tested NSGA2 via pyopt-sparse, so it might take a little bit of hacking around to get it to work. If you'd prefer to use the regular pyopt package, you should be able to start with our current pyopt-sparse wrapper and make some small changes to get it to work.
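For concreteness, here is a minimal sketch of how such a two-objective setup could look for the ZDT1 test function. It is untested against 1.5.0; the import paths, Component methods, and driver options follow the OpenMDAO 1.x API as I recall it, so treat all names here as assumptions to verify against your installation.

import numpy as np
from openmdao.api import Problem, Group, Component, IndepVarComp, pyOptSparseDriver

class ZDT1(Component):
    # ZDT1 test problem: two objectives of an n-dimensional design vector in [0, 1].
    def __init__(self, n=30):
        super(ZDT1, self).__init__()
        self.n = n
        self.add_param('x', val=np.zeros(n))
        self.add_output('f1', val=0.0)
        self.add_output('f2', val=0.0)

    def solve_nonlinear(self, params, unknowns, resids):
        x = params['x']
        g = 1.0 + 9.0 * np.sum(x[1:]) / (self.n - 1)
        unknowns['f1'] = x[0]
        unknowns['f2'] = g * (1.0 - np.sqrt(x[0] / g))

prob = Problem()
root = prob.root = Group()
root.add('dvs', IndepVarComp('x', 0.5 * np.ones(30)))
root.add('zdt1', ZDT1(n=30))
root.connect('dvs.x', 'zdt1.x')

# NSGA2 comes from the separately installed pyopt-sparse package; as noted above,
# the driver's multiple-objectives support flag may need to be flipped to True in its source.
prob.driver = pyOptSparseDriver()
prob.driver.options['optimizer'] = 'NSGA2'
prob.driver.add_desvar('dvs.x', lower=0.0, upper=1.0)
prob.driver.add_objective('zdt1.f1')
prob.driver.add_objective('zdt1.f2')

prob.setup()
prob.run()

After the run, the non-dominated points in NSGA2's final population approximate the Pareto front.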

lineprof equivalent for Rcpp

The lineprof package in R is very useful for profiling which parts of a function take up time and allocate/free memory.
Is there a lineprof() equivalent for Rcpp?
I currently use std::chrono::steady_clock and such to get chunk timings out of an Rcpp function. Alternatives? Does the RStudio IDE provide some help here?
To supplement @Dirk's answer...
If you are working on OS X, the Time Profiler Instrument, part of Apple's Instruments set of instrumentation tools, is an excellent sampling profiler.
Just to fix ideas:
A sampling profiler lets you answer the question, what code paths does my program spend the most time executing?
A (full) cache profiler lets you answer the question, which are the most frequently executed code paths in my program?
These are different questions -- it's possible that your hottest code paths are already optimized enough that, even though the total number of instructions executed in that path is very high, the amount of time required to execute them might be relatively low.
If you want to use Instruments to profile C++ code / routines used in an R package, the easiest way to go about this is:
Create a target pointed at your R executable, with appropriate command line arguments to run whatever functions you wish to profile.
Set the command line arguments to run the code that will exercise your C++ routines -- for example, arguments that run Rcpp:::test(), to instrument all of the Rcpp test code.
Click the big red Record button, and off you go!
I'll leave the rest of the instructions on understanding Instruments + the Time Profiler to your google-fu + the documentation, but (if you're on OS X) you should be aware of this tool.
See any decent introduction to high(er)-performance computing, e.g. some slides from older presentations on my talks page, which include worked examples for both KCacheGrind (part of the KDE frontend to Valgrind) and Google Perftools.
In a more abstract sense, you need to come to terms with the fact that C++ != R, and not all tools have identical counterparts. In particular Rprof, the R profiler which several CRAN profiling packages build on top of, relies on the fact that R is interpreted. C++ is not, so things will be different. But profiling compiled code is about as old as compiling and debugging, so you will find numerous tutorials.

Plot not defined with Julia

I compiled Julia 0.1 from the source code on my Ubuntu 12.04 machine. It is actually my first try with Julia.
The compilation went through to the end with no problems, only some warnings.
When I try to execute the plot command, the problem appears:
julia> plot(x->sin(x^2)/x, -2pi,2pi)
ERROR: plot not defined
Did the compilation go wrong somewhere, or do I have to install an extra package to plot in Julia?
Thanks
The web-based graphics are outdated and unmaintained (though there's work in progress to get the next generation of web graphics working). Plotting alternatives include the Winston and Gadfly packages at https://github.com/nolta/Winston.jl and https://github.com/dcjones/Gadfly.jl, which you can install simply using the Pkg.add("Winston") or Pkg.add("Gadfly") commands. For documentation and usage examples please refer to the linked repositories.
For MATLAB-style plotting under Julia, type once
Pkg.add("PyPlot")
to install the PyPlot package, which gives you access to Python's matplotlib library. Then try e.g.
using PyPlot
x = -2pi:0.1:2pi;
plot(x, sin(x.^2)./x);
OK, I found the solution myself.
Julia uses a web REPL to provide some basic graphics capabilities. You just have to follow the steps here:
https://github.com/JuliaLang/julia#web-repl
Julian Schrittwieser also has a library based on MathGL:
http://www.furidamu.org/blog/2012/02/26/plotting-with-julia/
I am not sure whether it is still under maintenance by the author.
As of right now (a few years have passed since the question was asked, so the ecosystem has matured), the package I would suggest for easy, quick plots is Gadfly, with some use of PyPlot for publication-quality graphs that require a lot of control.
To install, just type
Pkg.add("Gadfly")
in a Julia command line, and to use, type:
using Gadfly
plot([sin, cos], 0, 25)
PyPlot is still the preferred plotting option when you want a lot of control over your graphs, but it is a wrapper for a Python library and is slightly less user-friendly. It also requires a working Python install on your system.
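Since PyPlot forwards these calls to Python's matplotlib, the sin(x^2)/x example above corresponds to the following plain-Python sketch (assuming NumPy and matplotlib are installed), which can be handy for checking that the underlying Python side works:

# Plain-Python equivalent of the Julia/PyPlot snippet above, calling matplotlib directly.
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-2 * np.pi, 2 * np.pi, 0.1)
plt.plot(x, np.sin(x ** 2) / x)
plt.show()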

Changing the LAPACK implementation used by IDL linear algebra routines?

Over at http://scicomp.stackexchange.com I asked this question about parallel matrix algorithms in IDL. The answers suggest using a multi-threaded LAPACK implementation and suggest some hacks to get IDL to use a specific LAPACK library. I haven't been able to get this to work.
I would ideally like the existing LAPACK DLM to simply be able to use a multi-threaded LAPACK library and it feels like this should be possible but I have not had any success. Alternatively I guess the next simplest step would be to create a new DLM to wrap a matrix inversion call in some C code and ensure this DLM points to the desired implementation. The documentation for creating DLMs is making me cross-eyed though, so any pointers to doing this (if it is required) would also be appreciated.
What platform are you targeting?
Looking at idl_lapack.so with nm on my platform (Mac OS X, IDL 8.2.1) seems to indicate that the LAPACK routines are directly in the .so, so my (albeit limited) understanding is that it would not be simple to swap them out (e.g., by setting LD_LIBRARY_PATH).
$ nm idl_lapack.so
...
000000000023d5bb t _dgemm_
000000000023dfcb t _dgemv_
000000000009d9be t _dgeqp3_
000000000009e204 t _dgeqr2_
000000000009e41d t _dgeqrf_
000000000023e714 t _dger_
000000000009e9ad t _dgerfs_
000000000009f4ba t _dgerq2_
000000000009f6e1 t _dgerqf_
Some other possibilities...
My personal library has a directory src/dist_tools/bindings containing routines for automatically creating bindings for a library given "simple" (i.e., not using typedefs) function prototypes. LAPACK would be fairly easy to create bindings for (the hardest part would probably be to build the package you want to use: ATLAS, PLAPACK, ScaLAPACK, etc.). The library is free to use; a small consulting contract could be done if you would like it done for you.
The next version of GPULib will contain a GPU implementation of LAPACK, using the MAGMA library. This is effectively a highly parallel option, but only works on CUDA graphics cards. It would also work best if other operations besides the matrix inversion could be done on the GPU to minimize memory transfer. This option costs money.
