I want to execute a Linux binary from a Qt application running on Windows 10.
In Qt we have QProcess to launch additional processes. However, since my binary is for Linux, I've thought of two possible approaches:
Running the binary in a container (e.g. Docker, Kubernetes, Singularity...).
Executing the binary through WSL (Ubuntu) bash.
In any case, the Qt application should initiate the process (the container or the bash) and, in turn, this process should launch my binary.
I've been searching the web and could not find anything related, which makes me think it will be difficult. For this reason, I am posting the question in order to find out whether the proposed approaches are viable.
EDITED
It looks like WSL is easier; the problem is that the user has to have installed it, apart from requiring the sudo password when installing new software via apt-get.
The binary that I have to execute only exists for Linux, and let's say that cross-compiling is ruled out because of its complexity. Furthermore, this application needs CGAL, Boost, MPI, among other pieces of software.
If you want to go with WSL, you can just run wsl myLinuxProgram --options.
Using WSL is the easiest way, I believe, as the working directory (PWD) of the launched process is the same as the PWD of your Qt app.
You can read the Microsoft documentation for more info: https://learn.microsoft.com/en-us/windows/wsl/interop
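For reference, here is a minimal, untested sketch of the Qt side of this, assuming a hypothetical Linux binary named myLinuxProgram placed next to the Windows executable; QProcess starts wsl.exe, waits for the Linux process to finish, and reads back whatever it printed:

#include <QCoreApplication>
#include <QProcess>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Start wsl.exe, which in turn launches the Linux binary inside the default distro.
    // The working directory is set to the Qt app's own directory, so the relative
    // path ./myLinuxProgram resolves there (WSL exposes that directory under /mnt/c/...).
    QProcess wsl;
    wsl.setWorkingDirectory(QCoreApplication::applicationDirPath());
    wsl.start("wsl", QStringList() << "./myLinuxProgram" << "--options");

    if (!wsl.waitForFinished(-1))   // block until the Linux process exits
        qWarning() << "wsl failed:" << wsl.errorString();

    qDebug().noquote() << wsl.readAllStandardOutput();
    return 0;
}

In a real GUI application you would typically connect to the QProcess::finished signal instead of blocking, but the call pattern is the same.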
If your Linux binary depends on a lot of things, I really suggest you use Docker for Windows. You then have the chance to pre-build your own Docker image containing all the dependency software as well as the Linux binary you need to run.
Of course, to let your customer use it, you should push the image to Docker Hub; register an account for yourself there.
Then the solution is simple: let the Qt application call docker run to set up a container based on your own image and execute it, and let the Linux binary write its log (or other output) to the bind-mount volume shared between the Linux container and Windows. After it has run, the Qt application fetches the Linux binary's output from this shared folder.
Finally, here is a minimal workable example for your reference:
Suppose the shared folder between Windows and the Linux container is C:\abc\log_share; it will be mapped into the Linux container as the /tmp folder. Of course, you need to allow volume sharing by right-clicking the Docker icon in the Windows tray area and choosing Settings, as described here.
For simplicity, the Windows application is reduced to a bat file and the Docker image to plain ubuntu; you should use your own prebuilt Docker image with all the dependencies in it:
win_app.bat:
ECHO OFF
::Create a folder shared with the Linux container
RD /s/q C:\abc\log_share > NUL 2>&1
MKDIR C:\abc\log_share
::From Windows, call docker to execute a Linux command such as 'echo'
echo "Start to run linux binary in docker container..."
docker run -it -v C:\abc\log_share:/tmp ubuntu:16.04 bash -c "echo 'helloworld' > /tmp/linux_log_here.txt"
::In Windows, read the log generated by the Linux container from the shared bind mount
echo "Linux binary run finish, print the log generated by the container..."
type C:\abc\log_share\linux_log_here.txt
The Linux binary is simplified to just the echo command in Linux; everything it outputs should be written to the shared directory:
echo 'helloworld' > /tmp/linux_log_here.txt
Now, execute the bat file with command win_app.bat:
C:\abc>win_app.bat
C:\abc>ECHO OFF
"Start to run linux binary in docker container..."
"Linux binary run finish, print the log generated by the container..."
helloworld
You can see that the Windows application can already fetch what the Linux binary generated inside the Docker container (here, helloworld).
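To tie this back to the original question, here is a rough, untested sketch of how the Qt application itself could drive the same flow. The image name yourname/yourimage and the binary name myLinuxBinary are placeholders; the shared folder is the C:\abc\log_share bind mount from the bat example above:

#include <QCoreApplication>
#include <QProcess>
#include <QFile>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Run the container; the bind mount makes the container's /tmp visible on the
    // Windows side as C:\abc\log_share. The -it flags from the bat example are
    // dropped because no terminal is attached when launching from QProcess.
    QStringList args;
    args << "run" << "--rm"
         << "-v" << "C:\\abc\\log_share:/tmp"
         << "yourname/yourimage"    // hypothetical prebuilt image from Docker Hub
         << "bash" << "-c" << "myLinuxBinary > /tmp/linux_log_here.txt";

    int rc = QProcess::execute("docker", args);
    qDebug() << "docker run exited with code" << rc;

    // Fetch whatever the Linux binary wrote into the shared folder.
    QFile log("C:/abc/log_share/linux_log_here.txt");
    if (log.open(QIODevice::ReadOnly | QIODevice::Text))
        qDebug().noquote() << log.readAll();

    return 0;
}

QProcess::execute blocks until docker run returns, which is fine here because the container exits as soon as the binary finishes; for long-running jobs you would start the process asynchronously and watch the shared folder instead.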
Related
I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or powershell, or VS's CLI/Powershell) it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
- Create a default ASP.NET application in Visual Studio
- Add Docker Support
- Run with 'Docker' selected
- Open browser, navigate to localhost:[YourPort]
- Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
- Open CMD
- Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
- See that docker ps shows both containers running next to each other
- Open browser (Firefox), get error
The connection was reset
Update
I changed the ASP.NET app's Program class to use 0.0.0.0 instead of localhost. I believe this was necessary, but now I see
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get (52) empty reply from server
/Update
Well, maybe Visual Studio does more that I'm not aware of.
A little bit of digging shows that yes, it throws in a ton of extra arguments! Using the copy/pasted command Visual Studio uses gives me... the exact same error.
To clarify, the containers still run from the command line, I can ssh in or docker inspect them (in fact, the VS-started and CMD-started containers' docker inspect is identical other than network addresses it's bound to). I get no error messages at all from the process of building and starting the container, so if some part of it is failing it is doing so silently.
I'm relatively new to Docker but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of, I even had to wipe my machine (unrelated) and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched one should fail too, right?
I can't find anything that tells me to flip a magic switch if I'm running CLI stuff, and nothing I do to the dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration screwed up by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped, otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the Api project directory:
cd ./src/YourApiDirectory
Build Command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run Command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command will run your image in "interactive" mode. Also note that I am only using an HTTP connection via port 58817.
Thank you for the suggestions, it ended up being something rather frustrating. I think that it was a combination of two problems:
This stuff could be causing problems for others, but I was mistaken; this did not work for me
First and foremost, no amount of docker configuration tells your website to listen for anything inside the container. I believe the website wasn't listening for anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said '...run it alongside the first...'? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443. You must also set the sslPort in the launchSettings.json. Such is life, I suppose.
This is what finally worked
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS so I didn't think about it, but that's only because VS doesn't show solution-level items. I guess the thing that VS was doing that I wasn't was running docker-compose instead of individual commands.
When you launch directly with the Docker profile in Visual Studio (which is done via the docker-compose file), Visual Studio merges different override files behind the scenes and performs various tasks, one of which is attaching the remote debugger in the container, etc.
To help you, I've created a sample ASP.NET Core API via Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose command that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings from the override file created by default by VS2019. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
If you want to build and run directly with the Dockerfile instead of docker-compose, you can build with the following command; as before, it should be run from the folder outside the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the command below, but before that you need to create a certificate and pass a couple of environment variables to the run command. The details are mentioned in the following page, especially the section on Windows Subsystem for Linux. I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
So the complete run command, with the environment variables and the certificate generated above, is as follows.
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest
I'm trying to run xv6 operating system on VirtualBox or VMWare in a Linux host. The official instructions said how to run the OS on qemu only. However, the official page (https://pdos.csail.mit.edu/6.828/2014/xv6.html) mentioned that xv6 can be booted directly on hardware also, but it's not clear how.
I want to boot xv6 on VirtualBox or VMware first. I extracted the following command from the Makefile, which runs xv6 from the command line after it's compiled using make command.
/usr/bin/qemu-system-i386 -serial mon:stdio -drive file=fs.img,index=1,media=disk,format=raw -drive file=xv6.img,index=0,media=disk,format=raw -smp 2 -m 512
Please help me figure out how to proceed. If the procedure is already documented somewhere, a reference would be helpful.
The instructions are here, linked (via the 6.828 tools page) from your link, though they are a bit terse:
Using a Virtual Machine
Otherwise, the easiest way to get a compatible toolchain is to install
a modern Linux distribution on your computer. With platform
virtualization, Linux can cohabitate with your normal computing
environment. Installing a Linux virtual machine is a two step process.
First, you download the virtualization platform.
VirtualBox (free for Mac, Linux, Windows) — Download page
VMware Player (free for Linux and Windows, registration required)
VMware Fusion (Downloadable from IS&T for free).
VirtualBox is a little slower and less flexible, but free!
Once the virtualization platform is installed, download a boot disk
image for the Linux distribution of your choice.
Ubuntu Desktop is what we use.
This will download a file named something like
ubuntu-10.04.1-desktop-i386.iso. Start up your virtualization platform
and create a new (32-bit) virtual machine. Use the downloaded Ubuntu
image as a boot disk; the procedure differs among VMs but is pretty
simple. Type objdump -i, as above, to verify that your toolchain is
now set up. You will do your work inside the VM.
I can see how one could read that and not see the answer.
After the virtual machine is installed, download the Ubuntu Desktop .iso. Install that into the VM and fire it up. Presumably the Desktop will provide a clear mechanism for loading your OS. (Wait, I'm giving it a try. Will update with the result.)
Turns out that is simply a Ubuntu client desktop, and isn't anything special for running a sub-operating system.
Looking around some more, I found the commentary to be the best potential clue. It contains this (head scratcher) phrase:
To run xv6, install the QEMU PC simulators. To run in QEMU, run "make qemu".
If only it specified the context to get to that point! (Sorry I am not more help.)
I see that you want to boot it on VirtualBox or VMware, but another option would be to use Docker to run xv6. A great guide for getting started with xv6 through Docker is here.
The full guide is elaborate and can help you with getting started.
It is an alternative option, but one that can hopefully get you going fast.
It only takes 4 steps to get going with xv6:
Step 1
Download and set up docker here
Step 2
- Run this command in PowerShell or bash to pull the Ubuntu image with xv6: docker pull grantbot/xv6
Step 3
- To run the Docker image and get going with xv6, run this command: docker run -it grantbot/xv6
Step 4
- Now, inside the shell in the Ubuntu image, run cd /home/a/xv6-public/ to enter the root folder of xv6.
Done
- Now you can compile and run the xv6 with make qemu-nox
Step 1. Compile xv6
Download the code, unzip it and enter the directory, compile the operating system image and root file system, the command is as follows:
make xv6.img && make fs.img
Step 2. Write image to disk
Create two disks in an existing VMware virtual machine (my VMware version is 15.2.2, the guest Linux is CentOS 7.8). The steps are: Virtual Machine Settings -> Add -> Disk -> SCSI -> Create a new virtual disk -> size 0.005 (allocate immediately, store as a single file) -> name the disk "os", meaning this disk holds the operating system.
Create another disk named "fs" in the same way to put the root file system.
At this time, there should be "sdb" and "sdc" in the /dev/ directory (sda is the current operating system itself). If you do not see the "sdb" and "sdc", restart the guest operating system.
Write the operating system and root file system to the disk with the following command:
dd if=./xv6.img of=/dev/sdb bs=4k count=1000
dd if=./fs.img of=/dev/sdc bs=4k count=1000
Shut down the current virtual machine to ensure that the files have been synced to disk. At this point the two images have been written to the disks; VMware saves each disk as a file in the current virtual machine's directory, named os.vmdk and fs.vmdk. The next step will load these two files into the new virtual machine.
Step 3. Create xv6 virtual machine
To create an empty virtual machine, the steps are: Custom (advanced) -> Next -> Install the operating system later -> choose "Other" as the operating system type (and "Other" as the version) -> name the virtual machine xv6 (the name is up to you) -> then accept the default configuration, clicking "Next" through to completion.
Right-click the newly created virtual machine and delete the disk created by default. Then add the disk files created in the previous step to this virtual machine. The steps are: Add -> Disk -> IDE (note that this is an IDE disk instead of a SCSI disk, because xv6 reads an IDE-format disk) -> Use an existing virtual disk -> select the os.vmdk generated in Step 2 -> Finish.
Add fs.vmdk in the same way. Note that you must add os.vmdk first: because os.vmdk holds the operating system, it needs to be the first hard disk.
You now have a virtual machine with two disks: one is the OS disk and the other is the root file system disk. Everything is ready.
Start the virtual machine, and xv6 will boot successfully.
How am I able to execute UNIX commands in my PC's Command Prompt? Note I do not have Cygwin installed, although I was going to install it before I discovered this.
This is a development machine, so I have a lot installed on it, like Ruby, Python, Git, GitHub, Node, and so on.
What does this mean? Can I use this without Cygwin?
Here is a list of programs installed on my PC: program list
How am I able to execute UNIX commands on my PC Command prompt?
You can use the where command in a cmd shell to find out the exact location of your Unix commands, for example:
where ls
This assumes, of course, that ls is located somewhere in your current PATH.
The location returned will show you in which directory your Unix commands are installed and may be enough for you to determine how they were installed.
The where command is roughly equivalent to the Unix which command.
By default, the search is done in the current directory and in the
PATH.
Syntax
WHERE [/r Dir] [/q] [/f] [/t] Pattern ...
WHERE [/q] [/f] [/t] [$ENV:]Pattern ...
Source where
Further Reading
An A-Z Index of the Windows CMD command line - An excellent reference for all things Windows cmd line related.
where - Locate and display files in a directory tree.
Running Unix commands in Windows can be done by installing a tool like Cygwin, which provides those commands.
You can also get many of those commands compiled for Windows and then run them with the full path, or with just the command name if the executable is in a known path. You can make the paths to the executables known to Windows by:
1) Running in the terminal: PATH %PATH%;C:\<new_path>
2) Creating command aliases like: doskey np=C:\<new_path>\new_command.exe $*. Here $* is used to pass the parameters through.
I am trying to convert Varying Vagrant Vagrants' wordpress-trunk (or development) site to be provisioned via git instead of svn.
There seems to be a script (I presume it is a script, even though it has no file extension) in the VVV project that will make the switch after the machine has been provisioned:
https://github.com/Varying-Vagrant-Vagrants/VVV/blob/master/config/homebin/develop_git
And the author told me that running the following from command line should do it:
vagrant ssh -c "develop_git"
but when I run that I get the following error:
Unknown cipher type 'develop_git'
There appears to be some code in the provision script that mentions git, but I have no idea what I am looking at.
So, does anyone know how to run/implement that script? Or otherwise convert the www/wordpress-trunk folder to git? Are there options somewhere to direct VVV to provision the trunk folder from git in the first place?
Contrary to the Vagrant documentation of vagrant ssh, the -c option is delivered to the ssh command itself and is therefore interpreted as the cipher specification.
I would suggest you try vagrant ssh -- "develop_git", since everything after "-- (two hyphens) [is] passed directly into the ssh executable".
I'm trying to build an application from source on Windows that requires some Unix tools. I think it's the apparently standard ./configure; make; make install (there's no INSTALL file). First I tried MinGW but was confused that there were no bash, autoconf, m4, or automake exes in \bin. I'm sure I missed something obvious, but I installed Cygwin anyway just to move forward. For some reason, when I run
sh configure.sh
I get:
platform unix
compiler cc
configuration directory ./builds/unix
configuration rules ./builds/unix/unix.mk
My OS has identity problems. Obviously the makefile is all wrong since I'm not on unix but win32. Why would the configure script think this? I assume it has something to do with Cygwin but if I remove that I can't build it at all. Please help; I'm very confused.
Also is it possible to build using MinGW? What's the command for bash and is mingw32-make the same as make? I noticed they're different sizes.
Everything is fine. When you are inside Cygwin, you are basically emulating a Unix system. sh runs inside Cygwin, and thus correctly identifies the OS as Unix.
Have a look at GCW - The Gnu C compiler for Windows
Also, you might be interested in this help page, which goes into some detail about the minimal system (MSYS), such as how to install and configure it, etc.
That should help you get bash, configure and the rest to work for MinGW as well.
From the Cygwin home page
Cygwin is a Linux-like environment for Windows. It consists of two parts:
A DLL (cygwin1.dll) which acts as a Linux API emulation layer providing substantial Linux API functionality.
A collection of tools which provide Linux look and feel.
Since configure is running in the Cygwin environment, it interacts with the emulation layer, and so it behaves just as if it were working in a Unix environment.
Have you tried building the application and seeing if it works?