Can the relevant robust optimization model be built without using JuMPeR? - julia

Since JuMPeR no longer works with the latest JuMP and Julia versions, and its documentation is so outdated that writing code with it is painful and many of the examples don't even run, I wonder whether there is an alternative package for robust optimization.
There is also DDUS, which provides data-driven uncertainty sets for the JuMPeR framework, but that package cannot be installed at the moment either.
Finally, if there really are no packages available, I would like to know which older Julia version is compatible with the JuMPeR and DDUS packages, so that I could fall back to it.
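For context: without a package, the workaround I can picture is hand-writing the robust counterpart in plain JuMP, along the lines of the sketch below (box uncertainty on the constraint coefficients; the data and the choice of HiGHS as solver are just placeholders). What I am really after is a package that builds this from an uncertainty set automatically.

    # Sketch: the robust constraint a'x + max_{|δ|∞ ≤ ρ} δ'x ≤ b rewritten by hand,
    # i.e. a'x + ρ*‖x‖₁ ≤ b, using auxiliary variables t ≥ |x|.
    using JuMP, HiGHS

    a = [1.0, 2.0, 3.0]      # nominal constraint coefficients (placeholder data)
    b = 10.0                 # right-hand side
    ρ = 0.5                  # radius of the box uncertainty set

    model = Model(HiGHS.Optimizer)
    n = length(a)
    @variable(model, -5 <= x[1:n] <= 5)
    @variable(model, t[1:n] >= 0)                  # t[i] ≥ |x[i]|
    @constraint(model,  x .<= t)
    @constraint(model, -t .<= x)
    @constraint(model, a' * x + ρ * sum(t) <= b)   # worst-case (robust) constraint
    @objective(model, Max, sum(x))

    optimize!(model)
    value.(x)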

Related

Correct way to create a software install script which can manage dependencies

I'm currently working on university research software that uses statistical models to run some calculations around Item Response Theory. The entire source code is written in Go, and it communicates with an Rscript server to run scripts written in R and return the generated results. As expected, the software has some dependencies it needs in order to work properly (one of them, as mentioned, is having R/Rscript installed along with some of its packages).
Because I'm new to software development, I haven't found a proper way to manage all these dependencies on Windows or Linux (I'm prioritizing Windows right now). What I was thinking of is some kind of script that checks whether, for example, R is properly installed and, if so, whether each required package is also installed. If everything checks out, the software can then be installed without further problems.
My question is: what's the best way to do something like that, and is it possible to do the same for other potential dependencies, such as Python, Go, and some of their libraries? I'm also open to hearing suggestions if installing programming languages locally on the machine isn't the proper way to manage software dependencies, or if there's a more convenient approach than writing a script.
Sorry if any needed information is missing; I'd be glad to add it.
Thanks in advance

Confusion over Nvidia GPU packages in Julia, CuArrays and ArrayFire

I recently looked into GPU computation in Julia, and the package landscape seemed confusing.
For example, CuArrays and ArrayFire seem to do much the same thing, and ArrayFire appears to be the "official" package on the Nvidia developers' blog (https://devblogs.nvidia.com/gpu-computing-julia-programming-language).
There are also the CUDAdrv and CUDAnative packages, which add to the confusion, as their purpose seems less straightforward than the others'.
What do these packages do? Is there any difference between CuArrays and ArrayFire?
As explained in the blog post you shared, it is quite simple, as quoted below:
The Julia package ecosystem already contains quite a few GPU-related packages, targeting different levels of abstraction as Figure 1 shows. At the highest abstraction level, domain-specific packages like MXNet.jl and TensorFlow.jl can transparently use the GPUs in your system. More generic development is possible with ArrayFire.jl, and if you need a specialized CUDA implementation of a linear algebra or deep neural network algorithm you can use vendor-specific packages like cuBLAS.jl or cuDNN.jl. All these packages are essentially wrappers around native libraries, making use of Julia's foreign function interfaces (FFI) to call into the library's API with minimal overhead.
The CUDAdrv and CUDAnative packages are meant for using the CUDA API directly and for writing kernels from Julia itself. I believe that is where CuArrays comes in handy: roughly speaking, it wraps native Julia arrays in a CUDA-accessible format.
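A minimal sketch of those two levels, using the CuArrays/CUDAnative-era API (these packages were later merged into CUDA.jl); the kernel and data here are only illustrative:

    using CuArrays, CUDAnative

    # High level: a CuArray behaves like a normal Julia array but lives on the GPU,
    # and broadcasts compile down to generated GPU kernels.
    x = CuArray(rand(Float32, 1024))
    y = 2f0 .* x .+ 1f0

    # Low level: CUDAnative lets you write and launch the kernel yourself.
    function add_one!(a)
        i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
        if i <= length(a)
            @inbounds a[i] += 1f0
        end
        return nothing
    end

    @cuda threads=256 blocks=cld(length(x), 256) add_one!(x)
    Array(x)   # copy the result back to the host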
ArrayFire, on the other hand, is a generic library that wraps all of the CUDA-provided domain-specific libraries (cuBLAS, cuSPARSE, cuSOLVER, cuFFT) behind a nice interface (functions). Apart from that interface to CUDA's domain-specific libraries, ArrayFire itself provides a lot of other functions in areas such as statistics, image processing, and computer vision. It also has a nice JIT feature in which the user's code is compiled into a runtime kernel, simply put. ArrayFire.jl is a language binding with some extra Julia-specific improvements at the wrapper level.
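For comparison, going through ArrayFire.jl might look roughly like the sketch below; I'm assuming the usual AFArray constructor and element-wise overloads here, and the data is again only a placeholder:

    using ArrayFire

    a = AFArray(rand(Float32, 1024, 1024))  # upload the data to the device
    b = sin(a) + a * 2f0                    # element-wise ops, fused by ArrayFire's JIT
    c = a * b                               # matrix multiply through the backend (cuBLAS on CUDA)
    Array(c)                                # copy the result back to the host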
That's the general difference. From a developer's perspective, using a library like ArrayFire basically removes the burden of keeping up with the CUDA API and of maintaining/tweaking kernels for optimal performance, which I think takes a lot of time.
PS: I am a member of the ArrayFire development team.

MPICH2 Installation

Given the availability of a new workstation (Intel Xeon X5690, Windows 7 Professional, 64-bit) for numerical analysis of fluid dynamics models, I find it a shame not to engage in parallel computing. So far, I have had little or no experience in this field.
What's the difference between MS-MPI and the latest release of MPICH suitable for Windows? I installed MPICH 1.4.1, but I cannot get a test program to work with Ifort. How am I supposed to compile the program? Do I have to change the Ifort configuration somehow to add the MPICH libraries? Is there any good manual available online that could meet my needs?
There are lots of questions packed into this one, but they all boil down to one basic question: how do I install MPI on Windows?
MPICH hasn't supported Windows in a long time. The last version that did was 1.4.1p1, as you've found, but it no longer gets any support from the MPICH developers, so if you run into trouble, you probably won't find much help. I haven't seen anyone here step up to help with those questions so far.
MS-MPI is a good option if you want to use Windows. It's free to use and still has support directly from Microsoft. You'll have to read their documentation about how to set everything up correctly, but it's probably the right place to start if you want to use MPI on Windows.
Intel MPI also works on Windows, but it isn't free so you might not want to look at that right now.

How to use lme4.0 with lmerTest?

I am wondering if anyone has faced this issue before. I use the lmerTest package to run mixed-effects models in R because it has a handy way of providing p-values. This package by default loads the most current version of the lme4 package. However, the current version of lme4 has some issues and sometimes doesn't converge, so the lme4 developers have made available a separate package (named lme4.0), which is a bugfix-only version of the old pre-1.0 lme4. This works great, and the models usually converge, so that is what I use to analyze my data.
I would like to keep lmerTest but have it load lme4.0 instead of the current version of lme4. Does anyone know how to achieve this?
Thanks for your help!
This isn't really feasible without serious hacking: essentially, take an older version of lmerTest, download the source, hack it to look for lme4.0 rather than lme4, and install it locally. Or download (from the CRAN archives) and install older versions of lme4 and lmerTest (and pbkrtest); maintaining an archaic setup will get progressively more difficult, and you will have to backport or forgo bug fixes as they appear in newer versions.
Since many of the problems with new lme4 have been cleaned up with the switch in default optimizers from Nelder-Mead to BOBYQA, my advice would be to run a range of comparisons between lme4.0 and lme4, convince yourself that there are no problems (and send information about persistent problems to the lme4 maintainers, who would greatly appreciate it!), and move on to the new version.

Using R Programming Language with FANN Neural Network Library

I do a lot of computational intelligence research. I have used Matlab almost exclusively as my programming medium for a decade or so. I am now trying to move to OSS. I have settled on R as my new environment.
After a long search for neural net software, the only Matlab-comparable OSS packages I found are the Stuttgart NN simulator and FANN (this can be debated another time =). The former doesn't appear to be maintained, so I'd like to go with the latter. So my question is:
Does anyone have experience using R and FANN?
FANN has C++ bindings and R seems to have a couple of packages for a C++ interface, but since I'm an R newbie I need an idea of where exactly to start. Any guidance or recommendations would be appreciated.
Cheers.
I do not know anything about FANN, but I can assure you that R has an actively maintained interface to the Stuttgart Neural Net Simulator (SNNS) library via the RSNNS package; RSNNS happens to employ the Rcpp package, which I am involved in, for interfacing R and C++.
