I have a Client/Server implementation already done and written in C++ for one project.
I'm starting a new project in Go making a web app, and I want it to interact with the server implementation that I did in C++. Is there a way I can reuse the C++ client implementation and call it from my Go code, or will I just have to rewrite the client code in Go?
A good way to implement this is to turn your C++ client into a locally running server. To do this, you can write a wrapper proto file that generates code for both C++ and Go. Implement the service stubs in C++ as thin wrappers around the real client functions, and then call those stubs from a gRPC client in your Go code. In effect, you are chaining the calls.
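Just to illustrate the idea, here's a rough sketch of the C++ side. The service/message names, the generated header, LegacyClient and its Send() method are all placeholders for whatever your wrapper .proto and existing client actually define:

// Sketch only: assumes a wrapper.proto defining
//   service Wrapper { rpc DoRequest(WrapperRequest) returns (WrapperReply); }
// "wrapper.grpc.pb.h", "legacy_client.h", LegacyClient and Send() are
// hypothetical placeholders for the generated code and your existing client.
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "wrapper.grpc.pb.h"
#include "legacy_client.h"

class WrapperServiceImpl final : public Wrapper::Service {
public:
    grpc::Status DoRequest(grpc::ServerContext* /*ctx*/,
                           const WrapperRequest* request,
                           WrapperReply* reply) override {
        // Forward the call to the existing C++ client and copy the result back.
        std::string result = client_.Send(request->payload());
        reply->set_payload(result);
        return grpc::Status::OK;
    }

private:
    LegacyClient client_;  // talks to your real C++ server exactly as before
};

int main() {
    WrapperServiceImpl service;
    grpc::ServerBuilder builder;
    // Listen on localhost only, since the Go web app runs on the same machine.
    builder.AddListeningPort("127.0.0.1:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(&service);
    std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
    server->Wait();
    return 0;
}

On the Go side you would generate a client from the same .proto, dial 127.0.0.1:50051 and call DoRequest as if it were a local function.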
QtConcurrent is extremely convenient as a high-level interface for multithreaded processing in my data-heavy/CPU-heavy application. The Qt6 upgrades vaguely referenced "Revamped Concurrency APIs" but I didn't see much change, except a reference to being able to pass a custom QThreadPool.
That got me wondering... is it possible to extend QThreadPool into a class that manages threads/tasks on other machines, not just the host machine? Or is that too far from its original design? Or is there another Qt class that can manage distributed processing?
Don't bother linking me to non-Qt solutions. That's not the point of this question.
QtConcurrent doesn't deal with any of it, unfortunately.
In the most general approach, you only need some worker machines on the network and a way to connect to them, via ssh (if they are Unix) or via Windows credentials (on a Windows network). At that point you can send a binary to the worker and execute it. Doing this in Qt is of course possible, but you'd need to wrap some other libraries (e.g. Samba for RPC calls, or openssh) to do that.
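As a very low-tech illustration of that first step, you could even just shell out to the scp/ssh command-line tools with QProcess. In this sketch the host, user, paths and the --master argument are all made up:

#include <QProcess>

// Sketch: copy the worker binary to a remote Unix machine and start it there,
// using the system's scp/ssh command-line tools. Host, user, paths and the
// --master argument are placeholders for your own conventions.
bool deployAndStartWorker()
{
    // Copy the worker binary to the remote machine (blocks until scp finishes).
    if (QProcess::execute("scp", {"./worker", "user@worker1:/tmp/worker"}) != 0)
        return false;

    // Launch it remotely, detached from this process; the worker is expected
    // to connect back to the master on its own.
    return QProcess::startDetached(
        "ssh", {"user@worker1", "nohup /tmp/worker --master tcp://master:5555 &"});
}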
Whether the software can "distribute itself" or is installed on the workers some other way, you've now got it running on multiple machines. They then have to communicate, with one being the master and the others being slaves. Master selection could be done via command line arguments, or even by having two binaries: workers that include only the back-end functionality, and a front end that includes both (and has some sort of UI).
At that point you can leverage Qt Remote Objects, the idea being that what you "distribute" are QObjects that do the work in their slots and return results either via the slot's return value or by emitting a signal. It's not as convenient as using QtConcurrent directly, but in general there's no way to distribute work transparently without some introspection that C++ doesn't quite provide yet.
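To illustrate, a rough sketch of a remoted worker. The names, URL and port are made up, and a real project would typically define the interface in a .rep file and use the repc-generated replica classes instead of a dynamic replica:

// Sketch: a worker QObject that does the heavy lifting in a slot and reports
// the result via a signal, shared on the network with Qt Remote Objects.
// Build with QT += remoteobjects. Names, URL and port are placeholders.
#include <QCoreApplication>
#include <QObject>
#include <QRemoteObjectHost>
#include <QUrl>
#include <QVariant>

class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork(const QVariant &input)
    {
        // ... the expensive computation goes here ...
        emit resultReady(input);  // placeholder: just echo the input back
    }
signals:
    void resultReady(const QVariant &result);
};

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Expose the worker on the network; the master acquires a replica of it
    // (e.g. via QRemoteObjectNode::acquireDynamic("Worker")) and invokes
    // doWork() / listens to resultReady() as if the object were local.
    QRemoteObjectHost host(QUrl(QStringLiteral("tcp://0.0.0.0:5555")));
    Worker worker;
    host.enableRemoting(&worker, QStringLiteral("Worker"));

    return app.exec();
}

#include "main.moc"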
I know OpenMPI is not a Qt-based solution, but it certainly works, makes life easy, and can interoperate with Qt code - you can even distribute methods and lambdas that way (with some tricks).
If you manage worker objects encapsulated as QObjects, it's not too hard to distribute the work in e.g. round-robin fashion. You could then have a front-end QObject that acts as a proxy: you submit all the work to it, and it signals all the results, but internally it invokes the slots on the remote QObjects.
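A rough sketch of such a proxy, assuming each worker (or its replica) exposes a doWork(QVariant) slot and a resultReady(QVariant) signal like the worker above:

// Sketch: a front-end proxy that fans work out to remote worker objects
// (e.g. replicas acquired via Qt Remote Objects) in round-robin order and
// forwards every result through a single signal.
#include <QMetaObject>
#include <QObject>
#include <QVariant>
#include <QVector>

class WorkProxy : public QObject
{
    Q_OBJECT
public:
    explicit WorkProxy(const QVector<QObject *> &workers, QObject *parent = nullptr)
        : QObject(parent), m_workers(workers)
    {
        // Forward every worker's result through our own signal.
        for (QObject *w : m_workers)
            connect(w, SIGNAL(resultReady(QVariant)), this, SIGNAL(resultReady(QVariant)));
    }

public slots:
    void submit(const QVariant &item)
    {
        QObject *worker = m_workers.at(m_next);
        m_next = (m_next + 1) % m_workers.size();
        // Queued invocation works for both local objects and dynamic replicas.
        QMetaObject::invokeMethod(worker, "doWork", Qt::QueuedConnection,
                                  Q_ARG(QVariant, item));
    }

signals:
    void resultReady(const QVariant &result);

private:
    QVector<QObject *> m_workers;
    int m_next = 0;
};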
Would you be interested in a demo? I could write one up if there was enough demand :)
Suppose I have two or more different server applications developed in Clojure using ZeroMQ and BSON as protocols. How can I deploy them using a single JVM instance while also sharing common dependencies?
It seems a waste of memory to use a JVM instance for each standalone application. I plan to develop several Clojure applications in the future and VPS memory is not cheap.
Although not explicitly said, applications running in an application server (Jetty, Glassfish) seem to share the same JVM while isolating their state. However, they require a container, and neither Servlets nor Enterprise JavaBeans have an implementation that I could easily adapt to my custom protocol.
I've been thinking about using Servlets and implementing a dummy service() method though I don't like the idea of having a pointless HTTP server overhead. As for the EJB container, I cannot even figure out its implementation.
It would be nice to have a container requiring only init() and destroy() methods but I can't find an application server providing it.
Maybe there is a way around or I don't even need an application server. Could somebody point me in the right direction?
It sounds like you would be okay using an EJB container, but only if it were easier or simpler to work with. Have you looked at Immutant? It's basically a wrapper around JBossAS for Clojure, written by guys at Red Hat (who also own JBossAS).
In addition to being an application server, those guys have wrapped JMS and other Java EE features around Clojure, so that sending messages between apps looks pretty simple.
Also, they have Daemons and Jobs, which may provide something similar to what you were describing as simple services with init() and destroy().
That being said, I haven't used it, so I can't vouch for its awesomeness/awfulness.
So you have two applications that both share the same dependencies and both want to respond to and/or generate events on a message bus.
If I understand what you're saying, this should be as simple as starting the JVM with access to all code in the classpath and initializing your message bus and your code from a main method.
If you wanted to use a container, you could create some dummy message-driven beans that sit between your Clojure code and the message bus, assuming there is a JMS adapter for your message bus. Using NetBeans/GlassFish, this may not be that bad. You might gain something in terms of monitoring, but I'm not sure what else you would gain.
I kept searching and found out that some application servers implementing the OSGi service platform have simpler lightweight containers than those offered by Java EE.
Apache Karaf for instance can load POJO applications directly from JAR files.
I am not sure what DDs are, but any JAR is a valid bundle. Since Clojure is not type safe you will need to bridge the Clojure world and the OSGi/Java world, but the OSGi API is a wet dream for such bridges.
Note that in OSGi bundles do not automatically expose their content; a bundle is private by default. However, the API allows you to punch holes wherever you want.
I need to find the most efficient way to communicate between an ASP.NET web server and a Windows C++ application. The Windows application does not have any permission to access the database of the ASP.NET web server.
When the user presses a button, that action, along with some bytes of data, should be received by the C++ application.
In return, after processing the data, the C++ application will send the result back to the web server.
The only way I can think of at the moment is the following.
The ASP.NET web server will have two web service methods:
1. The C++ application will poll the first method at an interval; if there is a change, the C++ application will process it.
2. After the C++ application finishes its processing, it will call the second method to report the result back to the web server.
Any other ways to solve this kind of communication?
Thanks in advance.
If the C++ Application is also on Windows, named pipes would be a good solution. They can be configured to be durable so they can queue messages if either side is not ready to receive the message and they are quite easy to use. They basically look like files that you can read or write from and the data appears on the other side of the "pipe".
Take a look at the documentation (C++) here: http://msdn.microsoft.com/en-us/library/aa365781(v=VS.85).aspx
On the ASP.NET side you would use .NET API's. Here's a nice example to get you started: http://msdn.microsoft.com/en-us/library/bb546085.aspx (This example includes both client and server code.)
Named pipes would be a great solution if the C++ application is located in the same physical server as the ASP.NET application. In that case the OS would be just moving memory between processes for you so it could be very quick.
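Just to give an idea of the C++ side, a minimal sketch; the pipe name is a placeholder and has to match whatever name the .NET NamedPipeServerStream uses:

// Sketch: connect to a named pipe served by the ASP.NET process, send a
// request and read the reply. The pipe name "mypipe" is a placeholder.
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE pipe = CreateFileA("\\\\.\\pipe\\mypipe",
                              GENERIC_READ | GENERIC_WRITE,
                              0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) {
        std::printf("could not open pipe, error %lu\n", GetLastError());
        return 1;
    }

    const char request[] = "process this";
    DWORD written = 0;
    WriteFile(pipe, request, sizeof(request), &written, nullptr);

    char reply[512] = {};
    DWORD read = 0;
    if (ReadFile(pipe, reply, sizeof(reply) - 1, &read, nullptr))
        std::printf("server replied: %s\n", reply);

    CloseHandle(pipe);
    return 0;
}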
Additionally, I would configure the C++ Application as a Windows Service so it's always available and can be restarted when the server it's running on is restarted. If keeping it running is very important you could integrate Performance Counters and then have your ops team monitor the counters to make sure it is operating within expected thresholds.
The C++ application can also make a simple GET or POST request carrying enough information for the web server to handle, in case you don't want to expose a web service.
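For example, posting the result back with libcurl could look roughly like this; the URL and the form field are placeholders, and a GET poll works the same way with just CURLOPT_URL set:

// Sketch: the C++ application reports its result to the web server with a
// plain HTTP POST via libcurl. URL and form field names are placeholders.
#include <curl/curl.h>
#include <string>

bool postResult(const std::string &result)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return false;

    std::string body = "result=" + result;
    curl_easy_setopt(curl, CURLOPT_URL, "http://yourserver/Results.ashx");
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body.c_str());

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK;
}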
You could use network sockets. It's been a long time since I have done anything with them so I can't be much help. Research Winsock (aka Windows Sockets API).
You could use WCF services and connect to them using your C++ client. You will have to research consuming WCF services from a C++ client.
As @parapura suggested, you could use simple HTTP GET and POST requests. You could create your own HTTP handler for these requests to customize the response.
As you suggested you could use simple web services.
I'm searching for a library which I can include in a program to open a file at a given internet address, such as http://foobar.com/foobar.txt.
Like Ada.Text_IO.Open (File, Ada.Text_IO.In_File, "bla.txt");
but which is not limited to local files.
Well, you aren't liable to find something with that exact interface, as Text_IO is a standard library and can't easily be extended by third parties in that way.
If your platform's underlying filesystem were to support HTTP, then it would work just like you want. I don't know of any platform that works that way however.
What you probably want as a general solution is AWS (Ada Web Server). A person could use that to implement a full-blown web server if they wanted, but it also contains HTTP client facilities. The HTTP client would be what you want (see AWS.Client). It would be a bit more work on your part than just making one standard API call, but probably not too much work.
Here's an example, cribbed from Rosetta Code:
with Ada.Text_IO;
use Ada.Text_IO;

with AWS.Client;
with AWS.Response;

procedure HTTP_Request is
begin
   Put_Line
     (AWS.Response.Message_Body
        (AWS.Client.Get (URL => "http://www.rosettacode.org")));
end HTTP_Request;
Having used and implemented several HTTP clients, I would advise you to use an established and dedicated client. There are many intricacies to the HTTP standard that are not handled by naive implementations (see RFC 2616: https://www.rfc-editor.org/rfc/rfc2616).
Consider using the Ada Bindings for a mature library like libCURL; http://curl.haxx.se/libcurl/ada95/
I'm building a Qt application that needs to use libssh, an SSH client library. libssh (understandably) performs its own network connections; however, Qt has its own infrastructure for network connections (QTcpSocket etc.).
Should I worry about these differences? Should I be trying to make libssh make network connections via QTcpSocket... Or if it works fine on the platforms I'm targeting, is that good enough?
The only downside is that you have another library that your code depends on.
The primary rule though is if it works, go with it.
I think it depends on what the abstraction you get from libssh looks like. If it is a socket-like API, you could create a QAbstractSocket implementation for it. If it is just some structure or handle to read from and write to, you could create a QIODevice subclass. Most I/O can be implemented generically by operating on QIODevices (instead of explicitly operating on QFile, sockets, etc.).
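If you go the QIODevice route, a bare-bones sketch of wrapping an already-opened libssh channel could look like this; channel setup, error handling and asynchronous readyRead notification are left out:

// Sketch: expose an already-connected libssh channel as a QIODevice so the
// rest of the Qt code can treat it like any other sequential I/O device.
#include <QIODevice>
#include <libssh/libssh.h>

class SshChannelDevice : public QIODevice
{
public:
    explicit SshChannelDevice(ssh_channel channel, QObject *parent = nullptr)
        : QIODevice(parent), m_channel(channel)
    {
        open(QIODevice::ReadWrite | QIODevice::Unbuffered);
    }

    bool isSequential() const override { return true; }

protected:
    qint64 readData(char *data, qint64 maxSize) override
    {
        // Note: ssh_channel_read blocks until data arrives or the channel closes.
        int n = ssh_channel_read(m_channel, data, static_cast<uint32_t>(maxSize), 0);
        return n == SSH_ERROR ? -1 : n;
    }

    qint64 writeData(const char *data, qint64 maxSize) override
    {
        int n = ssh_channel_write(m_channel, data, static_cast<uint32_t>(maxSize));
        return n == SSH_ERROR ? -1 : n;
    }

private:
    ssh_channel m_channel;
};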