Nuclide remote development setup with anaconda on the server side - atom-editor

I am using Nuclide and nuclide-server for remote Python development. However, the anaconda / virtualenv environment is on the server side. How do I make sure that I am working inside the anaconda / virtualenv environment?
For example, do I need to do this:
source activate ...blah...
start the nuclide-server inside the virtualenv
so that I can make sure that the nuclide-server is inside my desired virtualenv?

I suggest creating a shell script on your remote server that should consist of:
initialization of your environment
any other preparation
and finally running the nuclide-start-server command
Once your script is ready, put it somewhere in your $PATH so you can enter the script's name in the Remote Server Command field of Nuclide's remote connection dialog.
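For example, a minimal wrapper script might look like the following (the environment name myenv and the Anaconda install path are assumptions; adjust them to your setup):

#!/bin/bash
# start-nuclide.sh -- activate the conda/virtualenv environment, then start the server
source ~/anaconda3/bin/activate myenv    # or: source /path/to/virtualenv/bin/activate
# ...any other preparation (exports, modules, etc.) goes here...
exec nuclide-start-server "$@"

Make it executable (chmod +x start-nuclide.sh) and place it on your $PATH; Nuclide will then run it instead of the bare nuclide-start-server, so the server inherits the activated environment.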

Related

Is it possible to install RStudio Server on Linux without root access?

I'm an undergraduate research assistant working on a Linux server without root privileges. I'm trying to install RStudio Server, but the RStudio website only provides installation instructions for sudoers. Is it possible to install it without root access? I'm asking because I'm really not sure whether I could get access from the administrator. Any help will be appreciated!
No, you can't install it without root access. But there are a couple of things you could do to piece together a solution. Here are two options:
Extract the server and run it directly
You have to be root to install packages, so you can't install the .deb/.rpm file yourself. However, you could extract the contents of the file to a directory inside your home directory and run RStudio Server from there, by executing the rserver program in a regular shell.
Note that this will probably require an afternoon of editing the rserver.conf file to tell it where to find the rest of the files in the installation (since it presumes they are installed in /usr/lib by default). You can get some inspiration for how to do this here: https://github.com/rstudio/rstudio/blob/master/src/cpp/conf/rserver-dev.conf
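As a rough sketch of that approach (the package filename and port are placeholders for whatever build you downloaded):

# Extract the .deb into your home directory without installing it (no root needed)
dpkg -x rstudio-server-amd64.deb ~/rstudio-server
# Run the server binary directly on a non-privileged port
~/rstudio-server/usr/lib/rstudio-server/bin/rserver --www-port=8787

You will still need the rserver.conf edits described above so the server can find the rest of its files.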
Run the desktop version and forward the graphics
The other route is to run RStudio Desktop on the server; we make several builds of RStudio Desktop that are installer-less and can just be unpacked into your home directory. Then run an X11 server on your own computer and an X11 client on the RStudio server, so that the RStudio Desktop instance appears on your computer instead of the server.
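A minimal sketch of that route, assuming RStudio Desktop has been unpacked into ~/rstudio on the server and an X server (e.g. XQuartz or VcXsrv) is running on your own computer:

# From your own computer: connect with X11 forwarding enabled
ssh -X youruser@research-server
# On the server: launch the unpacked desktop binary; its window appears on your computer
~/rstudio/bin/rstudio &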
Yes, you can run rserver without root privileges.
For RStudio 1.4 I patched the following line into src/cpp/core/LogOptions.cpp
const FilePath kDefaultLogPath = core::system::xdg::userDataDir().completePath("log");
Then you need to set the following environment variables to locations that are readable and writable by the user, for example:
RSTUDIO_CONFIG_DIR=$HOME/.config/rstudio
RSTUDIO_CONFIG_HOME=$HOME/.config/rstudio
RSTUDIO_DATA_HOME=$HOME/.local/share/rstudio
And start rserver with the options
--server-data-dir={directory writeable for user}
--server-pid-file={file-path creatable for user}
--database-config-file={config-file}
With these adjustments it runs for me when I start it as a simple user (no root privileges) with
rserver --auth-none=1 --www-frame-origin=same --www-port={port} --www-verify-user-agent=0 --server-data-dir={my-tmp-path} --server-pid-file={my-tmp-path}/rstudio.pid --database-config-file={my-tmp-path}/db.conf
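The database config file itself can be as small as pointing the built-in SQLite provider at a user-writable directory; a minimal sketch (the paths are placeholders):

# {my-tmp-path}/db.conf -- minimal SQLite configuration for a non-root rserver
provider=sqlite
directory=/home/youruser/.local/share/rstudio/db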
ATTENTION:
Be aware that anyone who can reach your system on the specified port over the network has access to the running RStudio in their browser, and can therefore run arbitrary commands as your user on your system.

Docker containers do not work when run from command line

I have been working on an ASP.NET application using Docker, and when I launch it through Visual Studio it works great! However, if I try to run anything from the command line (or PowerShell, or VS's CLI/PowerShell), it will run, but the container it generates refuses all connections.
I am on Windows 10 with Docker Desktop installed, trying to run an ubuntu:18.04 image (I've tried Alpine and ubuntu:16.04 as well).
Steps to reproduce:
- Create a default ASP.NET application in Visual Studio
- Add Docker Support
- Run with 'Docker' selected
- Open browser, navigate to localhost:[YourPort]
- Success! Works as intended.
Then, either using the same image or a downloaded one (I tried dockersamples/static-site to confirm it wasn't a problem with the specific project):
- Open CMD
- Run docker run -p [HostPort]:[ContainerPort] [SameImageVSUses:tag] on a different port
- See that docker ps shows both containers running next to each other
- Open browser (Firefox), get error:
The connection was reset
Update
I changed the ASP.NET app's Program class to use 0.0.0.0 instead of localhost. I believe this was necessary, but now I see
Secure Connection Failed
PR_END_OF_FILE_ERROR
If I curl localhost:[MyPort], I get (52) Empty reply from server
/Update
Well, maybe Visual Studio does more than I'm aware of.
A little bit of digging shows that yes, it throws in a ton of extra arguments! But using the exact command Visual Studio runs, copied and pasted, gives me... the exact same error.
To clarify, the containers still run from the command line; I can ssh in or docker inspect them (in fact, the docker inspect output of the VS-started and CMD-started containers is identical apart from the network addresses they're bound to). I get no error messages at all from building and starting the container, so if some part of it is failing, it is doing so silently.
I'm relatively new to Docker, but I can't seem to find a fix for this, or even a reason behind it. What is Visual Studio doing that I'm not? I've tried everything I'm aware of; I even had to wipe my machine (unrelated) and the exact same thing happened when I got everything reinstalled. My gut tells me it's something on my machine, but then the VS-launched one should fail too, right?
I can't find anything that tells me to flip a magic switch for running CLI stuff, and nothing I do to the Dockerfile or command arguments seems to work. I've never used VirtualBox or Docker Toolbox, so this shouldn't be a wonky configuration left behind by an old program, because it works fine when launched from Visual Studio! Agh!
I hope that this is indeed a magic switch I haven't flipped, otherwise there is something very basic that I don't understand about what I'm working with.
If you are trying to run a recent VS template, you just need to follow these instructions:
Go to the Api project directory:
cd ./src/YourApiDirectory
Build Command:
docker build -f ./Dockerfile --force-rm -t yourapiimage:dev ..
Run Command:
docker run -it --rm -e "ASPNETCORE_ENVIRONMENT=Development" -p 58817:80 --name yourapiname yourapiimage:dev
Please note that the "-it" flag in the last command will run your image in "interactive" mode. Also note that I am only using an HTTP connection via port 58817.
Thank you for the suggestions; it ended up being something rather frustrating. I think that it was a combination of two problems:
This could be causing problems for others, but I was mistaken; this did not work for me:
First and foremost, no amount of Docker configuration tells your website what to listen on inside the container. I believe the website wasn't listening on anything when I initially tried most fixes.
The real problem was that the launchSettings.json in the .csproj Properties folder apparently overrides arguments from the command line!
Remember how I said '...run it alongside the first...'? That means I was never running the website on the correct set of ports. Apparently, -p 8001:443 -e ASPNETCORE_HTTPS_PORT=443 is not enough to make the site listen on 443. You must also set the sslPort in the launchSettings.json. Such is life, I suppose.
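For illustration, the Docker profile in Properties/launchSettings.json might look roughly like this; the property names and port values here are assumptions based on the default Visual Studio container tooling, so check your own generated file rather than copying this verbatim:

{
  "profiles": {
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "httpPort": 8000,
      "sslPort": 8001
    }
  }
}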
This is what finally worked
I ran docker-compose up in the solution directory. That's it. I didn't see a docker-compose.yml when I was looking in VS so I didn't think about it, but that's only because VS doesn't show solution-level items. I guess the thing that VS was doing that I wasn't was running docker-compose instead of individual commands.
When you launch directly with the Docker profile in Visual Studio, which is driven by the docker-compose files, Visual Studio merges the different override files behind the scenes and performs additional tasks, one of which is attaching the remote debugger to the container.
To help you, I've created a sample ASP.NET Core API in Visual Studio 2019, selecting .NET Core 3.0.
The following is the docker-compose command that VS2019 generated on my machine when I launched my API via VS2019.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" -f "C:\Users\myuser\source\repos\testwebcore\obj\Docker\docker-compose.vs.debug.g.yml" -p dockercompose14364360289538262671 --no-ansi up -d --build --force-recreate --remove-orphans
I can get it to work directly in PowerShell by running the following command; here I am using the same settings as in the override file created by default by VS2019. You have to run this command from the parent folder, outside the project folder.
docker-compose -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.yml" -f "C:\Users\myuser\source\repos\testwebcore\docker-compose.override.yml" up
If you want to build and run directly with the Dockerfile instead of docker-compose:
You can build with the following command; as before, it should be run from the folder above the project folder.
docker build -f testwebcore/Dockerfile -t testcore .
After building the image, you can run it with the command below, but before that you need to create a certificate and pass a couple of environment variables to the run command. The details of this are described in the ASP.NET Core Docker HTTPS documentation, especially the section on Windows Subsystem for Linux. I am running Linux containers on my Windows 10 laptop.
So you have to run the following command to generate the certificate:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p testpassword
So the complete run command, with the environment variables and the certificate generated above, is as follows.
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="testpassword" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v c:\users\myuser\.aspnet\https:/https/ testcore:latest

Can I execute a Linux binary from a Windows application?

I want to execute a Linux binary from a Qt application running on Windows 10.
In Qt we have QProcess to launch additional processes. However, since my binary is for Linux, I've thought of two possible approaches:
Running the binary in a container (i.e.: Docker, Kubernetes, Singularity...).
Executing the binary through WSL (Ubuntu) bash.
In any case, the Qt application should initiate the process (the container or the bash) and, in turn, this process should launch my binary.
I've been searching the web and could not find anything related, which makes me think that it will be difficult. For this reason, I am posting the question in order to gauge the viability of the proposed approaches.
EDITED
It looks like WSL is easier; the problem is that the user has to have it installed, on top of needing the sudo password when installing new software via apt-get.
The binary that I have to execute only exists for Linux, and let's say that cross-compiling is ruled out because of its complexity. Furthermore, this application needs CGAL, Boost, and MPI, among other pieces of software.
If you want to go with WSL, you can just run wsl myLinuxProgram --options.
Using WSL is the easiest way, I believe, since the current directory (PWD) is preserved, i.e. it is the same as the PWD of your Qt app.
You can read Microsoft documenation for more info: https://learn.microsoft.com/en-us/windows/wsl/interop
If your Linux binary depends on a lot of things, I really suggest you use Docker for Windows. Then you have the chance to pre-build your own Docker image that contains all the dependency software as well as the Linux binary you need to run.
Of course, to let your customers use it, you should push it to Docker Hub; register an account for yourself.
Then the solution is simple: let the Qt application call docker run to set up a container based on your own image and execute it, and have the Linux binary write its log or other output to a bind-mounted volume shared between the Linux container and Windows. After it runs, the Qt application fetches the Linux binary's output from this shared folder.
Finally, I give a minimal workable example for your reference:
Suppose the shared folder between Windows and the Linux container is C:\\abc\\log_share; it will be mapped into the Linux container as the /tmp folder. Of course, you need to allow volume sharing by right-clicking the Docker icon in the Windows tray area and choosing Settings, as described in the Docker Desktop documentation.
Simplifying the Windows application to a bat file, and simplifying the Docker image to plain ubuntu (you should use your own prebuilt Docker image with all dependencies in it):
win_app.bat:
ECHO OFF
::New a shared folder with linux container
RD /s/q C:\\abc\\log_share > NUL 2>&1
MKDIR C:\\abc\\log_share
::From windows call docker to execute linux command like 'echo'
echo "Start to run linux binary in docker container..."
docker run -it -v C:\\abc\\log_share:/tmp ubuntu:16.04 bash -c "echo 'helloworld' > /tmp/linux_log_here.txt"
::In windows, get the log from shared bind mount from linux
echo "Linux binary run finish, print the log generated by the container..."
type C:\\abc\\log_share\linux_log_here.txt
Simplifying the Linux binary to just an echo command in Linux; all its output should be written to the shared directory:
echo 'helloworld' > /tmp/linux_log_here.txt
Now, execute the bat file with the command win_app.bat:
C:\abc>win_app.bat
C:\abc>ECHO OFF
"Start to run linux binary in docker container..."
"Linux binary run finish, print the log generated by the container..."
helloworld
You can see that the Windows application can already fetch the output (here, helloworld) generated by the Linux binary in the Docker container.

How to monitor a directory and include new files with tail -f in CentOS (for shiny-server logs in Docker)

Because I need to direct shiny-server logs to stdout so that "docker logs" (and monitoring utilities relying on it) can see them, I'm trying to do something like:
tail -f <logs_directory>/*
This works as needed when no new files are added to the directory; the problem is that shiny-server dynamically creates files in this directory, and those need to be picked up automatically.
I found that other users have solved this via the xtail package; the problem is that I'm using CentOS, and xtail is not available for CentOS.
The question is: is there any "clean" way of doing this with the standard tail command, without needing xtail? Or is there an equivalent package to xtail for CentOS?
You will probably find it easier to use the docker run -v option to mount a host directory into the container and collect the logs there. Then you can use any tool you want that collects log files out of a directory (Logstash is popular, but far from the only option).
This also avoids the problem of having to both run your program and a log collector inside your container; you can just run the service as the main container process, and not have to do gymnastics with tail and supervisord and whatever else to try to keep everything running.
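A minimal sketch of that approach (the image name and host path are placeholders for your own setup):

# Mount a host directory over shiny-server's log directory so the logs land on the host
docker run -d -p 3838:3838 -v /srv/shiny-logs:/var/log/shiny-server my-shiny-image
# Any host-side collector (Logstash, a cron job, or plain tail) can then read /srv/shiny-logs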

Run a jar file in the background on a server after closing the PuTTY session

I have tried running a Spring Boot jar file using PuTTY, but the problem is that after I closed the PuTTY session, the service stopped.
Then I tried running the jar file with the following command, and it's working fine:
nohup java -jar /web/server.jar
You should avoid using nohup as it will just disassociate your terminal and the process. Instead, use the following command to run your process as a service.
sudo ln -s /path/to/your-spring-boot-app.jar /etc/init.d/your-spring-boot-app
This command creates a symbolic link to your JAR file, which you can then run as a service using the command sudo service your-spring-boot-app start. This will write the console log to /var/log/your-spring-boot-app.log.
Moreover, you can configure Spring Boot's application.properties to write console logs to a location you specify, using logging.path=path-to-your-log-directory or logging.file=path-to-your-log-file.txt. Also, it may be worth noting that logging.file takes priority over logging.path.
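As a small sketch, the relevant application.properties entries could look like this (the paths are placeholders). Note also that the init.d symlink approach above assumes the jar was built as a fully executable jar (the spring-boot-maven-plugin's executable option); otherwise the service script will not start it.

# application.properties -- write the console log to a file of your choosing
logging.file=/var/log/your-spring-boot-app/app.log
# or specify just the directory (a spring.log file will be created inside it):
# logging.path=/var/log/your-spring-boot-app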
