Can one enable media streaming on Windows 10 via an ssh command?

Problem Y: Is there a command one can run over ssh on Windows 10 that will enable media streaming?
Problem X: Media streaming gets disabled after every reboot.
I'd like to avoid repeatedly having to go to the server itself and use the GUI.
(I.e. alternative solutions to this real problem are welcome.)
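One avenue worth testing (a sketch, not verified; the assumption is that the GUI toggle ultimately comes down to starting the Windows Media Player Network Sharing Service, WMPNetworkSvc): open an ssh session that gives you an elevated PowerShell and re-enable the service there.
Set-Service -Name WMPNetworkSvc -StartupType Automatic   # keep it enabled across reboots
Start-Service -Name WMPNetworkSvc                        # start it for the current session
If the GUI toggle does more than start the service (e.g. firewall rules or per-user settings), this alone may not be enough.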

Related

Problem communicating over a local area network (LAN) with ROS on WSL2

I am a developer of ROS projects. Recently I have been trying ROS (Melodic) on WSL2 (Windows Subsystem for Linux), and everything works just great. But I ran into trouble when I wanted another PC on the same local area network (LAN) to communicate with it. Before setting environment variables like ROS_MASTER_URI and ROS_IP, I knew that since WSL2 runs on Hyper-V, the IP shown inside WSL2 is not the one on the real LAN. I have to run a command like the one below so that everyone on the LAN can reach a specific host:PORT on WSL2:
netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$wsl_addr
But here comes a new question:
The nodes that use TCPROS to communicate with each other get a random port every time I launch the file.
How can I handle this kind of problem?
Or is there any information online that I could look at?
Thank you.
The root problem is described in WSL issue #4150. To quote from that thread,
WSL 2 seems to NAT its virtual network, instead of making it bridged to the host NIC.
Option 1 - Port forwarding script on login
Note: From @kraego's comment (and the edited question, which I'm just seeing based on the comment), this is probably not a good option for ROS, since the port numbers are randomly assigned. This makes port forwarding something that would have to be done dynamically.
There are a number of workarounds described in that issue, for which you've already figured out the first part (the port forwarding). The primary technique seems to be to create a PowerShell script to detect the IP address and create the port forwarding rules that runs upon Windows login. This particular comment near the top of the thread seems to be the canonical go-to answer, although many people have posted their tweaks or alternatives throughout the very long thread.
One downside - I believe the script that is mentioned there needs to be run at logon since the WSL subsystem seems to only want to run when a user is logged in. I've found that attempting to run a WSL service or instance through Windows OpenSSH results in that instance/service shutting down soon after the SSH session is closed, unless the user is already logged into Windows with a WSL instance opened.
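For reference, a minimal sketch of such a logon script (PowerShell; the port numbers are placeholders, and it assumes the WSL2 address is the first field reported by hostname -I inside the instance):
# Forward chosen ports from the Windows host to the current WSL2 address.
$wslIp = (wsl hostname -I).Trim().Split(" ")[0]
foreach ($port in 11311, 8080) {    # placeholder ports; ROS assigns others at random
    netsh interface portproxy delete v4tov4 listenport=$port listenaddress=0.0.0.0
    netsh interface portproxy add v4tov4 listenport=$port listenaddress=0.0.0.0 connectport=$port connectaddress=$wslIp
}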
Option 2 - WSL1
I would also propose that, assuming it fits your workflow and ROS works on it (it may not, given the device access you need, but I'm not sure), you can simply use WSL1 instead of WSL2 to avoid this. You can try this out by:
Backing up your existing distro (from PowerShell or cmd, use wsl --export <DistroName> <FileName>)
Importing the backup into a new WSL1 instance with wsl --import <NewDistroName> <InstallLocation> <FileNameOfBackup> --version 1
It's possible to simply change versions in place, but I tend to like to have a backup anyway before doing it, and as long as you are backing up, you may as well leave the original in place.
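For completeness, the in-place conversion mentioned above is a single command (same advice applies: export a backup first):
wsl --set-version <DistroName> 1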

How to serve a Qt application to multiple users via Qt WebGL streaming?

Qt Quick WebGL Streaming is a technology by which any Qt Quick ("QML") application can display its user interface to a user connecting via a web browser. All you have to do is to start the application like this on the host:
./my-qml-program -platform webgl:port=8080
This works, but is limited by design so that only one user can be connected at the same time and see the user interface. As the reason for this, they cite problems with user input, with querying the GPU, and with security (source).
Initially, the Qt developers wanted to support multiple users in WebGL streaming by serving multiple windows from one process:
How will concurrency be supported? Like does each connection get its own QGuiApplication, or is there only one? […] You create a single QGuiApplication and different windows. There is a signal to notify when new clients connect to the HTTP server. When the signal is emitted, you create a different QWindow. The windows are independent (source)
Now however, the developers want to support multiple users in WebGL streaming by launching one process per user.
We are working in decoupling the HTTP Server from the plugin
A dedicated HTTP Server application will be provided
Instead of running all the users in the same process a new process will be spawned per user
The new process will handle the web socket
(source)
"Decoupling the HTTP Server from the plugin" would mean replacing it with QHttpServer:
I have planned some use-cases for this [QHttpServer] module: Change the current embedded web server (and WebSockets) in the WebGL plugin to make it easy to create your own custom solutions based on the plugin. (source)
So far, no solution has been implemented. What is the simplest way to implement support for multiple users in Qt WebGL streaming myself, without waiting for Qt to implement this?
Here is a solution that uses the load balancer Pen to make a Qt application accessible via WebGL streaming to multiple users at the same time. It forwards an incoming connection to one of multiple Qt processes running on the same host, each of which runs its own embedded web server. This kind of forwarding is exactly the job of a load balancer, just that it usually distributes connections to multiple hosts.
Caveat: In my tests, WebGL streaming in Qt 5.12.3 is fast enough for real use only in the local network, not over the Internet. So you can't use it to "convert a Qt application into a web application on the cheap".
Instructions
These instructions apply to Ubuntu 19.10, 20.04 and other Debian based distributions.
Install the Qt application on your web host.
Install the Qt WebGL platform plugin on your web host. It is not contained in the Ubuntu 19.10 distribution, for example. In such a case, you'd have to compile and install it yourself. Under Ubuntu Linux, the result should be the following file:
/usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/libqwebgl.so
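If you do need to compile it yourself, the build looks roughly like this (a sketch: the repository is the official Qt mirror, but the branch must match your installed Qt version; 5.12 here is just an example):
git clone https://code.qt.io/qt/qtwebglplugin.git
cd qtwebglplugin
git checkout 5.12        # example branch; match your Qt version
qmake && make && sudo make install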
Start multiple processes of the Qt application. Each should serve one port with Qt WebGL streaming. Here we start three processes, but you can start as many as fit into your memory.
nohup myapplication -platform webgl:port=8080 &
nohup myapplication -platform webgl:port=8081 &
nohup myapplication -platform webgl:port=8082 &
Install the load balancer Pen.
sudo apt install pen
Start the load balancer. Note that with pen 80 … it is started so that users only have to enter a simple URL such as http://example.com/ into their web browser to visit the application. If port 80 is already in use, you can choose any other port (say, 9090), start the load balancer with pen 9090 … and then let users visit a URL like http://example.com:9090/. Also note the :1:1 suffix for each server process listed, telling pen to connect at most one client to each process.
pen 80 localhost:8080:1:1 localhost:8081:1:1 localhost:8082:1:1
Test the setup. To test, visit the associated URL http://example.com/ from multiple devices. You should be served one process of the application on each device. It is not possible to see two processes in two parallel browser tabs on the same device – pen would try to connect the second tab to the same Qt process as the first because the request comes from the same IP address. As a result, you'd see a spinning wheel in the second tab, because each Qt process allows only one connection for WebGL streaming.
Improvements
This solution could be further improved by starting the Qt processes only on demand, once a client connects. This should be possible with systemd socket activation.
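A rough sketch of what that could look like, assuming systemd-socket-proxyd is used to bridge the activated socket to the application's embedded web server (all unit and binary names here are made up for illustration):
# myapp-proxy.socket - systemd listens on the public port until a client connects
[Socket]
ListenStream=8080
[Install]
WantedBy=sockets.target

# myapp-proxy.service - started on demand, pulls up the Qt process and forwards to it
[Unit]
Requires=myapp.service
After=myapp.service
[Service]
ExecStart=/lib/systemd/systemd-socket-proxyd 127.0.0.1:9080

# myapp.service - the actual Qt WebGL streaming process on a private port
[Service]
ExecStart=/usr/local/bin/myapplication -platform webgl:port=9080
On-demand startup would still need one such socket/service pair per port, so this complements rather than replaces the load balancer.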

Looking for SFTP-Stresser/Fuzzer

I am working for a company that provides file-sharing software for all sorts of protocols, such as FTP, SFTP, FTPS and so on. One of our customers is facing an issue with key authentication and sporadic login problems.
Going through the code, I am pretty certain that the server collapses under too many requests at the same time. What I need right now is a simple tool to reproduce exactly that situation: a simple SFTP fuzzer or stresser that sends invalid or broken auth attempts to the SFTP server.
I am not a developer but a technician, and instead of writing something myself (which would take forever) I would love to have a simple script or toolset to go... if there is one.
Ok, found one faster than I thought.
Steps:
Download Kali Linux (or any distro that contains Metasploit)
Fire up Kali Linux and put it in the same subnet as your SFTP server
Start Metasploit and use the SSH fuzzer auxiliary/fuzzers/ssh/ssh_version_2
Set RHOST and RPORT to the IP address and port your server is listening on
Run it and see what happens
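In msfconsole, that session looks something like this (option names differ between Metasploit versions - older releases use RHOST where newer ones use RHOSTS - so confirm with show options):
msfconsole
use auxiliary/fuzzers/ssh/ssh_version_2
set RHOSTS 192.168.1.50    # example address of the SFTP server under test
set RPORT 22
run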

Accessing Serial Ports from an Application that Runs in Flatpak

I recently updated my IDE from an older version 5 to MonoDevelop 6 using Flatpak, on Ubuntu 16.04 LTS.
I have an application that interacts with serial ports, basically via a USB/RS232 adapter connecting a device to my computer.
I have no issue accessing the USB port (/dev/ttyUSB0) when I debug the application in MonoDevelop 5. However, the device directory (/dev/) that I can access from MonoDevelop 6 is completely different from the one I see in Linux, and there is no ttyUSB0 in that folder.
I believe this is because Flatpak runs the application in a sandbox. So, if that is the reason, how can I access a serial port?
Thanks.
Most likely that's because Flatpak is blocking access to the serial device.
Unfortunately, at the moment I don't think there is a way to grant access specifically to serial devices, so you'd need to grant access to all of them:
$ flatpak run --device=all com.xamarin.MonoDevelop
What this does is essentially mount the host's /dev inside the sandbox, so the app has full access to it.
It's a pretty big hole in the sandbox, but sometimes it's needed until all the permission handling stuff gets implemented.
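If you don't want to pass the flag on every launch, the same permission can be granted persistently with an override (standard flatpak functionality, not specific to MonoDevelop):
flatpak override --user --device=all com.xamarin.MonoDevelop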

How to render a remote ncurses console?

I want to write a remote console that works like a telnet server. A user can telnet into the server and then type commands to get some work done.
A good example of this is the console of a router OS. What I'm confused about right now is this: I can accept the user's input, do something, and print some text back, but I want to use ncurses to give the console more features (such as command auto-completion, syntax coloring, ...). How can I do that? Since the console is on the user's side, if the server calls ncurses APIs it will just change things on the server...
Maybe this is a stupid question, but I'm a real newbie at this. Any suggestions are appreciated.
This is more difficult than you might think.
You need to understand how terminals work - they use special control sequences for e.g. moving the cursor or color output. This is described by a terminfo file which is terminal-specific. Ncurses translates API calls (e.g. move cursor to a certain position) to such control sequences using terminfo.
Since the terminal (nowadays xterm, gnome-terminal, screen, tmux, etc.) is on the client side, you have to pass the type of terminal from the client to the server. That's why e.g. ssh passes this information from the ssh client to the server (try echo $TERM in your ssh session - it might be 'linux' if you are logged in via the console, or 'xterm' if you are using X and an xterm). Also, you'd better have the respective terminfo available on the server.
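You can see these control sequences concretely by asking terminfo yourself, for example:
tput cup 5 10 | od -c     # dump the bytes of the 'move cursor to row 5, column 10' sequence
infocmp $TERM | head      # show the beginning of the terminfo entry the server would need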
Another piece of the puzzle is pseudo terminals. As relatively few people use serial terminals nowadays, their semantics are emulated so that applications and libraries (e.g. curses and its friends) originally developed for serial consoles keep working. This is achieved via pseudo terminals - these are like pipes: a master and a slave device communicate, and anything written on one side comes out on the other. A login process such as getty, for example, can just use one side of a pty device and think it's a serial line - your server program must handle the other side of the pty, sending everything it gets from the pty to your client over the network.
Terminal emulators also use ptys: type tty into your terminal, and you'll get something like /dev/pts/9 if you're using a terminal emulator. On the other side of that pty is usually your shell, communicating with your terminal emulator via the pty.
Your client program can more or less just use standard input and standard output. If your terminal information is correct, the rest will be handled by your terminal emulator: just pass anything you receive from your server program to stdout, and send anything you read from stdin to your server program.
Hopefully I haven't left out any important detail. Good luck!
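If you want to experiment with this pty plumbing before writing your own server, socat can export a curses program over TCP (a quick sketch - no authentication or encryption, and it assumes client and server agree on the terminal type):
# Server: run the curses app on the slave side of a fresh pty, exposed on TCP port 9999
socat TCP-LISTEN:9999,reuseaddr EXEC:'/usr/bin/myapp',pty,setsid,stderr
# Client: connect the local terminal in raw mode so keystrokes pass straight through
socat FILE:$(tty),raw,echo=0 TCP:server.example.com:9999
Here /usr/bin/myapp and server.example.com are placeholders for your application and host.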
It is possible to have ncurses operate on streams other than stdin and stdout. Call newterm() instead of initscr() to set the input and output file handles for ncurses.
But you will need to know what sort of terminal is on the remote end of the connection (ssh and telnet both have mechanisms for communicating this to the server), and you will also want a fallback to a non-ncurses interface in case the remote end is not a supported terminal type (or you can't determine the terminal type).
