XQuartz ssh -X sessions: how to detach/reattach to persistent windows - tmux

I've installed XQuartz on my OS X machine, and upon connecting to a remote server with ssh -X user@server.domain I'm able to launch GUI windows (let's say RStudio, for example -- I see the window show up on my screen even though it's running on the remote server -- neat!).
What I'd like to do is create stable, persistent sessions to disconnect/reconnect to (i.e. close and re-open the same window with my environment and variables still there, instead of closing it and opening another one).
Hence, I'm using a tmux session from the terminal so that I can detach from my ssh connection to the server and connect back later. What I'd like to do then is re-launch the gui windows that I started from that session previously. Unfortunately, I don't know how to "store" or "detach" from the GUI windows once they are created --if I close them, then the unsaved data is deleted and the session is lost.
Is there a way to launch a persistent window from within ssh -X, and then "hide" that window, and re-open it after connecting again later?
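For reference, the terminal side of this already works for me with something like the following (the session name and the rstudio command are just examples):

  tmux new -s work            # start a named tmux session on the remote server
  rstudio &                   # launch the GUI app on the forwarded X11 display
  # detach with Ctrl-b d, close the ssh connection, then later:
  ssh -X user@server.domain
  tmux attach -t work         # the shell session comes back, but the GUI window does not survive the disconnect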

Not sure if this will still help now but I'll put it here for posterity. For the specific RStudio case you'd be better off installing a server version of RStudio - either the paid RStudio Workbench or the free open-source version. The server versions provide the persistent sessions you're looking for, including for long-running jobs.
The open-source version has a login, but all passwords are sent unencrypted -- don't expose it to the internet without other protections. In the open-source version you can only have one session per user.
Even without exposing it to the public internet, you could forward port 8787 over ssh (-L 8787:localhost:8787) and log into the remote RStudio Server instance from your local browser by visiting localhost:8787.
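Roughly, that looks like this (assuming RStudio Server is listening on its default port 8787 on the remote machine):

  ssh -L 8787:localhost:8787 user@server.domain
  # then open http://localhost:8787 in your local browser and log in with your server credentials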

Related

Is there a way to make VSCode remember the password for an insecure (http) remote Jupyter connection?

I am working in a vscode-remote-ssh session. On my server, I launch a Jupyter notebook that I then access in VSCode through the remote-ssh session by connecting to my compute node.
Every time I reload a project window, VSCode gives me a warning that I am connecting over an insecure connection (which is fine for me). After clicking yes, I need to enter the password every time.
Is there a way to make VSCode remember the password for a jupyter connection? My browser seems to do it just fine.
I have tried using an https connection, but I do not have a signed SSL certificate, so VSCode doesn't work with that. That's why I am using an insecure connection with just a password.
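For context, the notebook server on the compute node is started with something along these lines (port and options are illustrative, not my exact command):

  jupyter notebook --no-browser --port=8888
  # the password itself was set beforehand with: jupyter notebook password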

AWS Workspaces with NordVPN - Status Unhealthy

I've got an AWS Workspace (Windows 10) running in N. Virginia and installed NordVPN on it.
I'm trying to use NordVPN to change my location so I can access certain geo-restricted websites in Chrome while on my Workspace.
When I turn on NordVPN and connect to Atlanta (for example), after a few minutes the Workspace status changes to "Unhealthy" and eventually disconnects automatically.
I believe this is because the AWS Workspaces monitoring service can no longer see my Workspace, and thinks it's offline or something.
I also tried to use NordVPN's "Split Tunneling" feature to protect only the traffic going through the Chrome browser application, but as soon as I turn that on my Workspace is disconnected immediately, requiring a reboot.
Is there any way I can configure my Workspace to allow the Workspace monitoring service to still reach my Workspace when the Workspace has a VPN connection? Or, has anyone been able to use a Workspace with NordVPN's Split Tunneling on the Chrome Browser application?
I also tried the NordVPN Chrome Extension, but that doesn't work either.
Thanks

Cannot open new SSH connections after a certain amount of time

I have a web server running Alpine Linux and OpenSSH. When I power on the server, for about an hour or two I am able to open SSH connections and send commands fine. However, after that, even though the server is up, it does not respond to pings and I cannot SSH into it. The server is still running, and I can still access the website it serves. Why does this happen, and how can I avoid it?

Unable to request a user password reset

I am on Plone 4.1.6. If you go to Site Setup > Users and Groups, click the checkbox "Reset Password" for a user, and then click "Apply changes", the system hangs, and after 5 minutes I get this error from Apache:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /@@usergroup-userprefs.
Reason: Error reading from remote server
Apache/2.2.22 (Ubuntu) Server at 192.168.1.4 Port 443
After the error, I have to restart Plone to make Plone respond again.
My Environment:
Plone 4.1.6 (4115)
CMF 2.2.6
Zope 2.13.15
Python 2.6.8 (unknown, Apr 27 2013, 22:01:31) [GCC 4.6.3]
Addons:
Diazo theme support 1.0b8
Installs a control panel to allow on-the-fly theming with Diazo
Thème Plone classique 1.1.2
The old theme used in Plone 3 and earlier versions.
Static resource storage 1.0b5
A folder for storing and serving static resource files
I am running Plone behind Apache.
Testing locally
Running a virtual machine with VirtualBox 4.2.12
Plone is installed on the virtual machine
Plone version is 4.1.6
Virtual machine is running Ubuntu 12.04 AMD64
Zeocluster with 2 clients
Email is properly configured in the Plone instance
As far as I know, everything is working fine with my Plone instance, including the other checkboxes available in Users and Groups.
I did a test with ssmtp to send an email to myself from my node on the vm and I have no problem sending the email.
I did try fg mode and everything seems OK.
I did check the Apache logs and everything seems OK too.
If I create an SSH tunnel to bypass Apache and access Plone directly, I don't get a proxy error, but the system hangs forever.
I don't know what to do to solve this problem. Any ideas?
Does the python process use a lot of CPU when it hangs? Check using top.
Install ZopeHealthWatcher; then, when it hangs again, use ZopeHealthWatcher to get a list of what each thread is doing. That will often give you an idea of where the code is sitting: either in a loop, in infinite recursion of some sort (this can happen in the ZODB, especially with acquisition and similarly named things), or merely blocking on something (e.g. MTU issues on the network link to the SMTP server, so small emails work but big ones hang).
You could also stop the SMTP server (or just change the port in the Plone control panel) and see if you at least get an exception out of that. Plone should, by default, raise an exception if it cannot connect to the SMTP server.
In really extreme cases, you can use gdb to connect to the hanging python process (I usually use "top" to find the one sitting at 100% CPU) and then find where it is hanging. This is a lot more complex than using ZopeHealthWatcher, but I recently traced a hang to a race condition in ReportLab's font code using precisely this method; it is very powerful. What is nice about gdb is that it stops the process and allows you to step through the code, and up and down the calling stack, unlike ZopeHealthWatcher, which just gives you a snapshot (a bit like that Heisenberg uncertainty thing: you can observe where it is now...).
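A rough outline of that gdb approach, assuming gdb is installed on the VM (the python-level backtrace only works if the python gdb helpers are available):

  top                         # find the python/ZEO client process pinned at ~100% CPU
  gdb -p <pid>                # attach; the process is paused while gdb is attached
  (gdb) thread apply all bt   # C-level backtrace of every thread
  (gdb) py-bt                 # python-level backtrace, if the python gdb extensions are loaded
  (gdb) detach
  (gdb) quit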

GlassFish 3.1.2 - validate-dcom fails with "The remote file, C: doesn't exist" (Centralized Administration with Windows DCOM)

OS: Windows Server 2008 R2 x 2 (firewall disabled on both machines)
I wish to take advantage of the GlassFish 3.1.2 Windows DCOM feature to set up communication between the GlassFish DAS and a remote node. I've successfully followed Byron Nevins's instructions on using the GlassFish 3.1.2 DCOM Configuration Utility.
However, I'm having an issue validating DCOM when following the instructions in the GlassFish 3.1.2 Guide - 2 Enabling Centralized Administration of GlassFish Server Instances.
When I run command validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80 I get the following output:
asadmin> validate-dcom --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80
remote failure:
Successfully verified that the host, 192.168.0.80, is not the local machine as required.
Successfully resolved host name to: /192.168.0.80
Successfully connected to DCOM Port at port 135 on host 192.168.0.80.
Successfully connected to NetBIOS Session Service at port 139 on host 192.168.0.80.
Successfully connected to Windows Shares at port 445 on host 192.168.0.80.
The remote file, C: doesn't exist on 192.168.0.80 : Logon failure: unknown user name or bad password.
The password file, password.txt, contains a single entry:
AS_ADMIN_WINDOWSPASSWORD=my-windows-password
I have double-checked that I can successfully log in with my Windows password on the remote machine 192.168.0.80. I've also tried this test with two Windows XP Professional machines and get the same error.
I also performed this operation by creating a new node in the Admin Console and got the same error.
I cannot figure out what is going wrong or what I may be missing.
Thanks in advance
I had similar issues while setting up the new production environment at work last Friday, and could not find any useful information on the interwebs, except people encountering the same issue, some with comments as fresh as the day I was looking it up.
So after a rather excessive amount of painful, in-depth debugging, I was able to figure out a few things:
You must explicitly specify the local Windows user you create for the purpose of running GlassFish in both the add-node dialog and the validate-dcom subcommand (option -w; see the example further down), otherwise it will default either to 'admin' or to the user the DAS is running as.
There is a bug in validate-dcom that causes it to ignore whatever you specify as the test directory. No matter what you do, it will always use C:\ and the result is "access denied".
The documentation omits another registry key that must be given access to in order for WMI to work.
Regarding the first issue, you will most likely encounter it if your nodes are not part of a domain or you are using a local account. Windows NT6+ has a new default security policy that prevents local users from elevating privileges over the network, which necessarily causes that test to fail, since writing to the root of a system drive is not something one can do without elevation.
I previously blogged about it for someone to stumble upon it if needed:
http://www.raptorized.com/2008/08/19/access-administrative-shares-on-server-2008vista/
The gist of it is that you have to navigate to the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
and create a new DWORD named LocalAccountTokenFilterPolicy with a value of 1.
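From an elevated command prompt on the remote node, that comes down to something like this (key path and value name exactly as above):

  reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f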
There is no need to reboot; the first, broken test should now pass. However, you will then see an error about being unable to connect to WMI, and it will fail again.
To remedy this, you must also take ownership and grant your local service account user full control over the following registry key, in addition to the other ones described in the HA Administration Guide:
HKEY_CLASSES_ROOT\CLSID\{76A64158-CB41-11D1-8B02-00600806D9B6}
Afterwards, validate-dcom should report success and you will be able to add it as a node, and create instances on it.
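Putting that together, the validation command ends up looking roughly like this ("glassfish" here is just a stand-in for whatever local Windows account you created for the purpose; the password file and host are the ones from the question above):

  asadmin validate-dcom -w glassfish --passwordfile C:/Sun/AppServer/password.txt -v 192.168.0.80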
I hope this helps, because the seeming lack of activity from Oracle on that issue was infuriating.
I am also less than pleased by the hackish, ugly, insecure nature of the DCOM support in GlassFish 3 :(
