Currently I know several methods of connecting to a GUI remotely, or running GUI applications remotely:

- Microsoft Terminal Services (only works for remote Windows installations);
- VNC (it's slow);
- XDMCP (requires a remote X server running, has no session persistence);
- Local X as remote DISPLAY for applications (best solution, but no session persistence; see the sketch below).
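For the last approach, a minimal sketch using SSH X11 forwarding (user and host names here are made up):

ssh -X developer@solaris-host   # remote DISPLAY now points back at the local X server
xclock &                        # any GUI application started here renders on the local display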
We are trying to create Solaris development environments that can replace local workstations for our developers. So one of the requirements is session persistence and/or session mobility. Another requirement is speed, and it has to run on Solaris/UNIX.
Are there any lightweight solutions for this?
Either NoMachine (http://www.nomachine.com/) or FreeNX (http://freenx.berlios.de/) sounds like what you want: fast, keeps your session if the connection drops, and even works over SSH, so your connections are encrypted.
Qt Quick WebGL Streaming is a technology by which any Qt Quick ("QML") application can display its user interface to a user connecting via a web browser. All you have to do is to start the application like this on the host:
./my-qml-program -platform webgl:port=8080
This works, but is limited by design so that only one user can be connected and see the user interface at any given time. As reasons for this, they cite problems with user input, with querying the GPU, and with security (source).
Initially, the Qt developers wanted to support multiple users in WebGL streaming by serving multiple windows from one process:
How will concurrency be supported? Like does each connection get its own QGuiApplication, or is there only one? […] You create a single QGuiApplication and different windows. There is a signal to notify when new clients connect to the HTTP server. When the signal is emitted, you create a different QWindow. The windows are independent. (source)
Now however, the developers want to support multiple users in WebGL streaming by launching one process per user.
- We are working on decoupling the HTTP Server from the plugin
- A dedicated HTTP Server application will be provided
- Instead of running all the users in the same process, a new process will be spawned per user
- The new process will handle the web socket

(source)
"Decoupling the HTTP Server from the plugin" would mean replacing it with QHttpServer:
I have planned some use-cases for this [QHttpServer] module: Change the current embedded web server (and WebSockets) in the WebGL plugin to make it easy to create your own custom solutions based on the plugin. (source)
So far, no solution has been implemented. What is the simplest way to implement support for multiple users in Qt WebGL streaming myself, without waiting for Qt to implement this?
Here is a solution that uses the load balancer Pen to make a Qt application accessible via WebGL streaming to multiple users at the same time. It forwards each incoming connection to one of multiple Qt processes running on the same host, each of which runs its own embedded web server. This kind of forwarding is exactly the job of a load balancer, except that it usually distributes connections across multiple hosts.
Caveat: In my tests, WebGL streaming in Qt 5.12.3 is fast enough for real use only in the local network, not over Internet. So you can't use it to "convert a Qt application into a web application on the cheap".
Instructions
These instructions apply to Ubuntu 19.10, 20.04 and other Debian based distributions.
Install the Qt application on your web host.
Install the Qt WebGL platform plugin on your web host. It is not included in the Ubuntu 19.10 distribution, for example; in such cases, you have to compile and install it yourself (a build sketch follows below). Under Ubuntu Linux, the result should be the following file:
/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/libqwebgl.so
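A rough sketch of such a build, assuming the Qt development tools are installed (the clone URL is the module's upstream repository; pick the branch matching your Qt version):

git clone https://code.qt.io/qt/qtwebglplugin.git
cd qtwebglplugin
qmake && make
sudo make install   # installs libqwebgl.so into the Qt plugins directory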
Start multiple processes of the Qt application. Each should serve one port with Qt WebGL streaming. Here we start three processes, but you can start as many as fit into your memory.
nohup myapplication -platform webgl:port=8080 &
nohup myapplication -platform webgl:port=8081 &
nohup myapplication -platform webgl:port=8082 &
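Optionally, verify that all three processes are listening before wiring up the load balancer (this assumes the ss tool from iproute2):

ss -tln | grep -E ':(8080|8081|8082)'   # expect one LISTEN line per Qt process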
Install the load balancer Pen.
sudo apt install pen
Start the load balancer. Note that with pen 80 … it is started so that users only have to enter a simple URL such as http://example.com/ into their web browser to visit the application. If port 80 is already in use, you can choose any other port (say, 9090), start the load balancer with pen 9090 … and then let users visit a URL like http://example.com:9090/. Also note the :1:1 suffix for each server process listed, telling pen to connect at most one client to each process.
pen 80 localhost:8080:1:1 localhost:8081:1:1 localhost:8082:1:1
Test the setup. To do so, visit the associated URL http://example.com/ from multiple devices. You should be served one process of the application on each device. It is not possible to see two processes in two parallel browser tabs on the same device: pen would try to connect the second such tab to the same Qt process as the first tab, because the request comes from the same IP address. As a result, you'd see a spinning wheel in the second tab, because each Qt process only allows one connection for WebGL streaming.
Improvements
This solution could be further improved by starting the Qt processes only on demand, once a client connects. This should be possible with systemd socket activation.
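A minimal sketch of that idea using systemd-socket-proxyd; all unit names and ports below are hypothetical, and myapp.service is assumed to start the Qt process with -platform webgl:port=9080:

sudo tee /etc/systemd/system/myapp-proxy.socket >/dev/null <<'EOF'
[Socket]
ListenStream=8080
[Install]
WantedBy=sockets.target
EOF
sudo tee /etc/systemd/system/myapp-proxy.service >/dev/null <<'EOF'
[Unit]
Requires=myapp.service
After=myapp.service
[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:9080
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now myapp-proxy.socket   # systemd now owns port 8080 and starts the Qt process on the first connection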
I'm writing a Qt/C++ application and I plan to add a networking part, with a socket connection to a server that is also implemented in Qt.
If I host the server locally, there is no real problem.
But if I want to share my application (the client part) with some people and be sure my server is always running, the best way would be to have a remote server.
Could you give me some clues on how to do it? The steps to follow in this case are still not clear to me.
Is there a better way to do this?
Can I find free hosting?
Thanks a lot! :-)
There are generally 3 options:
1. Local hosting
This is a server running at your physical location. You can set it up exactly as you want and the server will do whatever you need, but it must be powered on the whole time; when there is no other work, it just consumes power. You must also provide all the hardware (server components), the software to run it (an operating system), the network devices and connection (a router, which may need special set-up [NAT, port forwarding, ...], plus sufficient speed and reachability of the internet connection) and most likely some security devices/software (firewalls and so on).
This is the best option for basic development and testing, but once the service is meant for a public audience, it is rarely worth running the server yourself.
2. Remote hosting (virtualized or dedicated server)
This was the dominant option over the last 20-30 years: Web and application developers put their software on a prepared server. A dedicated server is a physical machine running at a provider's location; the provider lends you the hardware (and perhaps licences for the OS and other software). A virtualized machine is one piece of hardware (a server) hosting multiple virtual servers, i.e. several clients running on the same hardware.
The general benefit is that networking/security/hardware issues are handled by the hosting provider; you are just renting disk space and compute time. Normally the company provides a whole server on which you can set up several services, run multiple protocols, and so on.
An ideal solution for websites and for a single instance (or a few instances) of a server application.
3. Cloud hosting
This is the newest technology at the moment (around for 10-15 years [e.g. AWS running since 2006, Azure since 2010]). The datacenter owners (from point 2) went a step further and built applications on top of their servers that do all the work for you (mostly automatically). In a few clicks the servers are running and you can deploy applications, database engines, web pages, IoT hubs, ... quite a lot of stuff. The clear benefit is that you spend a minimum of time setting things up, and they run with high uptime (e.g. 99.9995%).
Difference between dedicated & cloud: on a dedicated server you can put almost any OS that fits your needs, run just the services you want, and have full control. With a cloud solution you don't have as much "physical" control, and the data more or less lives somewhere in datacenters all over the world. But it is generally the more scalable solution, and once your app is used by lots of users from the public, it is the best way to go.
Common ideology:
The most common approach is to develop against a local server on which you deploy, test and improve. Once the software is stable, you order a server, either in the cloud or as a dedicated/virtual machine, and deploy it there. Some developers know from the very beginning that their app will run on cloud services, so they order them and develop against them from the start, but in most cases there is no need for that.
I have to construct the infrastructure so that multiple users can work on the same Jupyter (IPython notebook) service, yet via different sessions, so the users can't interrupt each other.
I thought JupyterHub (https://github.com/jupyter/jupyterhub) is there to control everything, yet the sessions still seem to be bound together: if I log out of it in one window, an instance in another window also logs out.
Is there a way to control multiple sessions on Jupyter?
Jupyter doesn't support multiple users editing the same notebook at the same time without data loss. I don't believe it is meant to. I believe Jupyter is meant to provide a relatively easy to configure and install instance of python that contains the same installed modules and environment to minimize problems caused by environmental differences between developer workstations.
Also, it's meant to make the barrier for entry to programming python and working in data science much lower than it otherwise would be. That is, it's much easier to talk an analyst into visiting a website than learning a new programming language.
More to the point of your question, though: the way Jupyter handles 'sessions' is that (unless configured otherwise) every Jupyter user corresponds to a user on the server that is running Jupyter, and every time you log in to Jupyter you are effectively creating a new login to that server's operating system. It immediately follows that if you log out of Jupyter from one window, you're logging out of not just that browser's session, but also the login to the Jupyter server's operating system, which kills all other open browser windows.
Your question is a bit unclear; JupyterHub is meant to support multiple users across many machines. Of course, if you use the same browser on the same machine, you get logged out too, as the browser is carrying the connection information that gets revoked.
JupyterHub is a web-based multi-user application that provides session and authentication services.
JupyterHub is hosted on a Unix/Linux server, and clients access it using the IP address and port number. Once it is accessed, the client must enter a user ID and password associated with the system users on the server (PAM authentication), which redirects to the home directory of that user.
You can build an infrastructure using JupyterHub, which is meant for multi-user operation. JupyterHub just provides the multi-user interface and PAM authentication; you have to configure security, file access permissions and everything else at the OS level, for example with shell scripts.
Normally, you launch JupyterHub or a Jupyter notebook from the command line; in the same way, you can write a shell script to set up the multi-user environment.
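As a rough sketch, a plain multi-user hub with PAM authentication can be brought up from the command line like this (interface and port are illustrative; a real deployment should add TLS):

jupyterhub --generate-config          # writes jupyterhub_config.py for further tuning
jupyterhub --ip 0.0.0.0 --port 8000   # system users log in with their OS credentials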
I have a long-running job that I'd like to run using EC2 + RStudio. I set up the EC2 instance, then opened RStudio as a page in my web browser. I need to physically move the laptop that I use to set up the connection and run the web browser throughout the course of the day; my job then gets terminated in RStudio, but the instance is still running on the EC2 dashboard.
Is there a way to keep a job running without maintaining an active connection?
Does it have to be started / controlled via RStudio?
If you make your task a "normal" R script, executed via Rscript or littler, then you can run it from the shell ... and get to
use old-school tools like nohup, batch or at to control running in the background
use tools like screen, tmux or byobu to maintain one or multiple sessions in which you launch the jobs, and connect / disconnect / reconnect at leisure.
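For example (the script name long_job.R is made up):

# Detach the job from the terminal with nohup; it survives logout and disconnects:
nohup Rscript long_job.R > long_job.log 2>&1 &

# Or run it inside a named tmux session you can detach from (Ctrl-b d) and reattach to later:
tmux new -s rjob 'Rscript long_job.R'
tmux attach -t rjob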
RStudio Server works in similar ways but AFAICT limits you to a single session per user / machine -- which makes perfect sense for interactive work but is limiting if you have a need for multiple sessions.
FWIW, I like byobu with tmux a lot for this.
My original concern that it needed to maintain a live connection was incorrect. It turns out the error came from running out of memory; it just coincided with being disconnected from the internet connection.
An instance is started from the AWS dashboard and stopped or terminated from there as well. As long as it is still running, it can be accessed from an RStudio tab by copying the public DNS into the address bar of the web page and logging in again.
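If you have the AWS CLI configured, one way to look up that public DNS without the dashboard (the instance ID here is made up):

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 \
  --query 'Reservations[0].Instances[0].PublicDnsName' --output text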
Working on Progress 9.1E on a Windows box. We've got a standard 4GL GUI application up and running which connects to a series of personal databases running on the same box. It's sort of like a big graphical catalogue application with ordering capabilities.
Anyhow, we're looking to run a .NET application on the same box, and Progress supplies a Merant ODBC driver along with its runtime.
My question is: can I have the 4GL GUI client application up and running and connected to the Progress databases while at the same time connecting and running the .NET application, which connects via an ODBC System DSN to the same databases?
These "personal" databases are traditionally single user, but I'm wondering (or have heard through rumours) that you can actually run an ODBC client in addition to a 4GL client on the same box at the same time.
Truth to this?
You can run both a 4GL client and an ODBC client, but you can't run them both single-user at the same time. You'll need to start a server for each of the DBs you want concurrent access to. You can run the server process on the same machine if you have the licence, if that helps.
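As a sketch of the multi-user setup (the database name and service values below are made up; check the flags against your Progress 9.1E documentation):

# Start a multi-user broker/server for the database:
proserve mycatalog -N TCP -S 12345

# The 4GL GUI client then connects in client/server mode instead of single-user:
prowin32 -db mycatalog -H localhost -S 12345 -N TCP

# Point the ODBC System DSN at the same host and service name so both clients
# go through the server rather than opening the database single-user.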