Does anyone have experience creating a TCP server in C++ that calls R functions and serves the results to clients?
I implemented my own using the POCO C++ libraries, but got an error message which led me to discover that RInside cannot be used in a multi-threaded application.
I think this is nonsense. OK, R itself is single-threaded, but there should still be a way to build a server with C++ and RInside.
You probably want Rserve, which has been doing this for a decade, rather than starting something new with our RInside -- though you could look at my RInside/Wt example for a webapp...
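If you do roll your own, the usual workaround for R being single-threaded is to funnel every evaluation request from the server's connection handlers through one dedicated thread. Here is a minimal, hedged sketch of that pattern; `SerialEvaluator` and `evalInR` are made-up names, and `evalInR` is only a placeholder for the real engine call (e.g. RInside's evaluation method):

```cpp
#include <cassert>
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// All "R" evaluation goes through one dedicated thread; connection handlers
// enqueue work and wait on a future, so the engine only ever sees one thread.
class SerialEvaluator {
public:
    SerialEvaluator() : worker_([this] { run(); }) {}
    ~SerialEvaluator() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        worker_.join();
    }

    // Called from any connection thread; returns a future with the result.
    std::future<std::string> submit(std::string expr) {
        std::packaged_task<std::string()> task(
            [e = std::move(expr)] { return evalInR(e); });
        auto fut = task.get_future();
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(task)); }
        cv_.notify_one();
        return fut;
    }

private:
    // Placeholder for the real single-threaded R call.
    static std::string evalInR(const std::string& expr) {
        return "R> " + expr;
    }

    void run() {
        for (;;) {
            std::packaged_task<std::string()> task;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stop_ || !q_.empty(); });
                if (stop_ && q_.empty()) return;
                task = std::move(q_.front());
                q_.pop();
            }
            task();  // runs on the single worker thread only
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::packaged_task<std::string()>> q_;
    bool stop_ = false;
    std::thread worker_;
};
```

This keeps the POCO (or any other) connection threads free to block on the future while the engine stays strictly single-threaded.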
QtConcurrent is extremely convenient as a high-level interface for multithreaded processing in my data-heavy/CPU-heavy application. The Qt6 upgrades vaguely referenced "Revamped Concurrency APIs" but I didn't see much change, except a reference to being able to pass a custom QThreadPool.
That got me wondering... is it possible to extend QThreadPool into a class that manages threads/tasks on other machines, not just the host machine? Or is that too far from its original design? Or is there another Qt class that can manage distributed processing?
Don't bother linking me to non-Qt solutions. That's not the point of this question.
QtConcurrent doesn't deal with any of it, unfortunately.
In the most general approach, you only need some worker machines on the network and a way to connect to them via ssh (if they are Unix) or via Windows credentials (on a Windows network). At that point you can send a binary to the worker and execute it. Doing this in Qt is of course possible, but you'd need to wrap some other libraries (e.g. Samba for RPC calls, or openssh) to do that.
No matter whether the software can "distribute itself" or is installed on the workers some other way, you now have it running on multiple machines. They then have to communicate, with one acting as a master and the others as slaves. Master selection could be done via command-line arguments, or even by having two binaries: workers that include only the back-end functionality, and a front end that includes both (and has some sort of UI).
At that point you can leverage Qt Remote Objects, the idea being that what you'd "distribute" is QObjects that do work in their slots and return results either via the slot's return value or by emitting a signal. It's not as convenient as using QtConcurrent directly, but in general there's no way to distribute work transparently without introspection that C++ doesn't quite provide yet.
Although OpenMPI is not a Qt-based solution, it certainly works and makes life easy, and it can interoperate with Qt code - you can even distribute methods and lambdas that way (with some tricks).
If you manage worker objects encapsulated as QObjects, it's not too hard to distribute the work in e.g. round-robin fashion. You could then have a front-end QObject that acts as a proxy: you submit all the work to it, and it signals all the results, but internally it invokes the slots on the remote QObjects.
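The round-robin proxy idea above can be sketched without any Qt machinery; this is just an illustration of the dispatch logic, with `Worker` standing in for a replicated remote QObject and a synchronous call standing in for the asynchronous slot invocation (all names here are invented):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical stand-in for a remote worker: in the real design this would
// be a QObject whose slot executes on another machine.
struct Worker {
    int id;
    int doWork(int task) { return task * 10 + id; }  // pretend remote slot
};

// Front-end proxy: submit() hands tasks to workers round-robin, the way a
// proxy QObject would invoke slots on remote QObjects and re-emit results.
class RoundRobinProxy {
public:
    explicit RoundRobinProxy(std::vector<Worker> workers)
        : workers_(std::move(workers)) {}

    int submit(int task) {
        Worker& w = workers_[next_];
        next_ = (next_ + 1) % workers_.size();
        return w.doWork(task);  // in Qt this would be an async invocation
    }

private:
    std::vector<Worker> workers_;
    std::size_t next_ = 0;
};
```

In a real Qt Remote Objects setup the proxy would of course collect results via signals rather than return values, but the scheduling stays the same.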
Would you be interested in a demo? I could write one up if there was enough demand :)
I'm setting up a parallel optimisation environment using IBM CPLEX 12.9, Julia v1.1.0 and JuMP. To start a local optimisation I'm currently using the library CPLEX.jl, which provides the connection (via C calls in the background) to optimise a model locally. Let's call this machine A.
However, I'm trying to start an optimisation on a remote machine, which means that when I start an optimisation on A, Julia will call the CPLEX installed on machine B (which has more memory, CPUs, etc.).
Looking at the CPLEX documentation, I've seen that for a local optimisation we call the function
CPXopenCPLEX(int * status_p)
given by the lib libcplex1290.so. For a remote connection, CPLEX provides another interface to connect to external servers by the function
CPXopenCPLEXremote(char const * transport, int argc, char const *const * argv, int * status_p)
The package CPLEX.jl supports only local optimisation and uses the CPXopenCPLEX() function. Looking into this package, the connection with the local CPLEX installation is made by the following command:
ccall(("CPXopenCPLEX",libcplex),Ptr{Cvoid}, (Ptr{Cint},),stats)
where libcplex="/opt/ibm/ILOG/CPLEX_Studio129/cplex/bin/x86-64_linux/libcplex1290.so", and stats is an Array{Int32,1}. This command is found in the file cpx_env.jl of the package CPLEX.jl.
What I've tried is to implement a similar function that calls CPXopenCPLEXremote instead of CPXopenCPLEX with the correct values. My Julia 1.1 code is the following:
const libcplex = "/opt/ibm/ILOG/CPLEX_Studio129/cplex/bin/x86-64_linux/libcplex1290remote.so"
argv=["/usr/bin/ssh", "IP_OF_REMOTE_MACHINE","/opt/ibm/ILOG/CPLEX_Studio129/cplex/bin/x86-64_linux/cplex", "-worker=process"]
ret= ccall(("CPXopenCPLEXremote",libcplex),Ptr{Cvoid}, (Ptr{Cchar},Cint,Ptr{Ptr{Cchar}},Ptr{Cint},),"processtransport",Int32(4),argv,stats)
The problem is that ret = Ptr{Nothing} @0x0000000000000000, which means that the connection did not succeed.
I'm quite sure that the problem is in the way that I'm giving the arguments to ccall() to call CPXopenCPLEXremote.
Could someone with experience in this type of call help me with the parameters?
I'm also configuring automatic authentication for the ssh connection. For now I have to enter my user and password on each ssh connection from machine A to the remote machine B. (I'll update this question later.)
Thank you all for any help. If it works, I'm going to create the lib CPLEXremote.jl for the community.
Best regards, Isaias
Many things could go wrong here. I don't know Julia, but here are some things you could try outside of Julia to sort this out:
You definitely need a passwordless ssh connection. There is no way to supply a username/password with the CPLEX remote object API; this is mentioned in the documentation.
On both machines make sure that not only CPLEX is installed but also that the folder that contains the various libcplex*transport.so and libcplex*worker.so libraries is in LD_LIBRARY_PATH. The remote object code has to load these libraries dynamically at runtime.
For debugging purposes set environment variable ILOG_CPLEX_REMOTE_OBJECT_TRACE to 99. This should give more information about the error that happens.
Try adding either -stdio or -namedpipes=. to the command line.
Take a look at the example cplex/examples/src/remotec/parmipopt.c. This basically does what you plan to do. It also involves user functions, so it is a bit more complicated than your use case.
Look at the example cplex/examples/src/remotec/parbenders.c: it does more complicated things in the solution process, but the setup of the remote solvers is pretty simple. You can run this example by going to cplex/examples/x86-64_linux/static_pic and running make -f Makefile.remote remote-run-parbenders. It is a good idea to start with that and modify it so that it does not just run on your localhost but actually connects to the remote machine. This takes Julia out of the picture. Once you have this working, go back to Julia and figure out how to invoke CPLEX from there.
My application needs to download several web pages simultaneously, and I know this is possible in a single thread from my experience with epoll programming on Linux. Currently I use cURL to interact with HTTP but...
Update: I discovered curl's multi interface: http://curl.haxx.se/libcurl/c/libcurl-multi.html I think the question is resolved (-;
The cross-platform way is to use select or poll, which are specified by POSIX.
Alternatively, and more efficiently, you could use a library. The main advantage of a library is that it can do things way more effectively than select, by employing system-specific mechanisms.
For example, a nice network library would probably use:
epoll on Linux
kqueue on FreeBSD
/dev/poll on Solaris
pollset on AIX
IOCP on Win32
etc.
I think you can use asio for C++ or libevent for C.
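To make the readiness-based model concrete, here is a minimal sketch using POSIX poll() (the portable baseline the fancier mechanisms above improve on); `first_ready` is an invented helper name, and a real downloader would loop over many sockets rather than two descriptors:

```cpp
#include <cassert>
#include <poll.h>
#include <unistd.h>

// Watch two file descriptors with poll() and report which one is readable.
// select/epoll/kqueue/IOCP all provide variations of this readiness pattern:
// one thread blocks once, then services whichever descriptor has data.
int first_ready(int fd_a, int fd_b) {
    pollfd fds[2] = {{fd_a, POLLIN, 0}, {fd_b, POLLIN, 0}};
    int n = poll(fds, 2, 1000 /* ms timeout */);
    if (n <= 0) return -1;                 // timeout or error
    if (fds[0].revents & POLLIN) return fd_a;
    return fd_b;
}
```

With sockets instead of pipes, the same loop is how a single thread can drive many simultaneous HTTP transfers, which is exactly what curl's multi interface wraps for you.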
I'm trying to learn Ada on Linux by porting simple C++ tools to Ada.
Right now I'm trying to write a simple serial communication program that sends modem commands and waits for a signalled file descriptor using the select call.
I can't seem to find the package containing the select call - do I have to look for some platform specific package here? Where would I find this? Am I even looking for the right thing here?
select() is an OS call specific to Unix, and thus isn't part of Ada's standard library.
You will either need to find a (non-standard) package that provides a Unix system call interface, wrap it yourself using interfacing pragmas, or take a different approach.
For the first option, I can only help a little, since I don't have a Unix system handy. A Posix package should have it, and I believe you can find one such package (Florist) for Gnat here. I can't speak to its quality.
To make your own bindings, you'd want to check out the facilities provided for this in Appendix B of the LRM. This is kind of an advanced topic though, and should not be attempted unless you either know a lot about how your OS does its subroutine linkages, or are ready to learn.
For "a different approach", look into whatever reference guide you are using has to say about Ada's tasking and/or protected objects (not to be confused with the protected keyword in C++). For example, you might prefer to have one task whose sole job is to read incoming data from the serial port. You can synchronize with it between reads via a rendezvous, or to get really sexy, with a queue implemented via protected object.
I have a large code base which was originally built for a Linux environment and involves calls to write() from unistd.h. Is there any port of the write() call available for the Win32 environment? I am looking to build this large code base 'as is' in a Windows environment (MS VS 2005) without touching the code if possible.
Changing the code to replace the write() calls with fwrite() would be a tedious manual process, as the signatures of the two differ.
Edited: Many other Unix-based calls fail in the Windows environment as well - read(), open(), close()...
Any pointers would be useful.
thank you.
-AD
Microsoft's C runtime has _open, _read, _write, etc. as "low-level I/O". However, these are compatibility wrappers managed by the C runtime layer and subject to restrictions like "limited by _getmaxstdio and can't go higher than 2048".
You can use the native Win32 functions CreateFile, ReadFile and WriteFile for true low-level I/O.
I'd be a little surprised if they didn't work, but if they don't, your best bet is to write a little porting-layer library that implements them using Win32 API calls.
This would undeniably be quicker than doing search and replace on a lot of code and also mean that your main code base remains unchanged and portable.
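Such a porting layer can be a single thin header. A minimal sketch, assuming MSVC on the Windows side (the name compat_write and its two-branch scope are invented for illustration; on Windows it forwards to the CRT's _write, while a fuller port could go straight to WriteFile via _get_osfhandle):

```cpp
#include <cassert>
#include <cstring>

#ifdef _WIN32
#include <io.h>      // MSVC low-level I/O: _write
// Windows branch: forward to the CRT's compatibility wrapper so the
// calling code keeps its POSIX-style signature.
static long compat_write(int fd, const void* buf, unsigned int count) {
    return _write(fd, buf, count);
}
#else
#include <unistd.h>  // POSIX write
// Unix branch: the real thing, so the same code base builds unchanged.
static long compat_write(int fd, const void* buf, unsigned int count) {
    return static_cast<long>(write(fd, buf, count));
}
#endif
```

With a header like this (plus matching wrappers for open/read/close), the main code base only needs a one-time, mechanical rename, or a macro mapping the POSIX names onto the wrappers, and stays portable in both directions.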