Stripe CLI on Debian server: how do I make it "listen" to new requests in the background and continue using the console?

I want to use the Stripe CLI and webhook events on my Debian (10.1) server. I've managed to get everything working, but my problem is that when I run:
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
I can't use the console anymore, because it is listening for incoming events, and I still need the console. The only option shown is ^C to quit, but I need the CLI listener to keep running at all times while I do other things at the same time.
On my local machine/editor I can open multiple sessions: I run the listen command in one terminal and use another terminal session to continue interacting with the system. But I don't know how to do that on Debian yet. It would be great if the listen function could just run in the background so I could continue with what I need to do without stopping the listener. My next idea was to tunnel via SSH to the server, but I'm unsure whether that would solve my problem. Wouldn't that mean that my computer at home running that session would need to be running at all times? I'm lost here...
By the way, the server is a droplet on DigitalOcean, if that matters... which I don't think it does.
Please let me know if anything is unclear.

UPDATE/SOLVED:
I misunderstood my problem; the Stripe CLI is just for local testing. Once a Stripe integration is in production, Stripe's servers send requests directly to my server/endpoints.
If you are wondering about this or want to read more about how it works in production, start here: https://stripe.com/docs/webhooks/go-live
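As an aside, if you do need a long-lived listener on a headless server during development, the usual approach is a detached terminal session. A minimal sketch, assuming tmux is installed (the forwarding URL is the one from the question):

# start a named tmux session and run the listener inside it
tmux new -s stripe
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
# detach with Ctrl-b d; reattach later with: tmux attach -t stripe

Alternatively, without tmux, nohup pushes the process into the background and keeps it alive after you log out:

nohup stripe listen --forward-to localhost:8000/foo/webhooks/stripe/ > stripe-listen.log 2>&1 &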

Related

RStudio Server browser freezes upon login

I have been working in my RStudio session hosted on a Linux server. Recently I ran a piece of code that was taking way too long to execute, and I decided to kill it.
Here is the sequence of steps that I took - none of them helped me restore the health of my session.
1) Hit the stop button in RStudio and be patient.
2) SSH'd into my Linux server and ran the following command to kill all the processes running under my user ID:
killall -u myuserid
3) Removed the .RData, .Renviron, and .Rhistory files from my workspace.
4) Ran the following R command on the Linux server for garbage collection:
gc(reset=TRUE)
5) Restarted the entire Linux server.
I am running out of ideas and would really appreciate any other suggestions before I take more drastic steps like revoking access and granting it again (not sure that would be the right fix).
Note: the browser window freezes every time I log in, and it happens only for my RStudio session; the rest of the users on the same network have no issues.
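Incidentally, a narrower cleanup than killall is possible here, since RStudio Server runs each user's R session as a separate rsession process. A hedged sketch, assuming a standard RStudio Server install:

# kill only the RStudio session processes for one user, not all their processes
pkill -u myuserid rsession
# then restart the RStudio Server service itself
sudo rstudio-server restart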
I solved this problem (RStudio Server freezing). I think it was a network problem, since I couldn't receive any response from the call to "~~~~~~.cache.js". In that case, you can see that "~~~~~~~~~.cache.js" gets no response by opening the browser's developer tools and watching the network requests before you click the log-in button.
Anyway, here is my way.
Reset your network with the following commands; run them in a cmd terminal in administrator mode:
netsh winsock reset
netsh int ip reset
Reboot
The IP information may be erased, so if you're using a fixed IP address, re-enter your previous IP settings afterwards.
That's all.
Following these steps should recover the connection.

How does jupyterhub work?

I have to construct the infrastructure so that multiple users can work on the same Jupyter (IPython notebook) service, yet via different sessions, so the users can't interrupt each other.
I thought JupyterHub (https://github.com/jupyter/jupyterhub) is there to control everything, yet the session still seems to be shared: if I log out of it in one window, the instance in another window also logs out.
Is there a way to control multiple sessions on Jupyter?
Jupyter doesn't support multiple users editing the same notebook at the same time without data loss, and I don't believe it is meant to. I believe Jupyter is meant to provide a relatively easy-to-configure-and-install Python instance that contains the same installed modules and environment, to minimize problems caused by environmental differences between developer workstations.
Also, it's meant to make the barrier to entry for programming in Python and working in data science much lower than it otherwise would be. That is, it's much easier to talk an analyst into visiting a website than into learning a new programming language.
More to the point of your question, though: the way Jupyter handles 'sessions' is that (unless configured otherwise) every Jupyter user corresponds to a user on the server that is running Jupyter, and every time you log in to Jupyter you are effectively creating a new login to that server's operating system. It immediately follows that if you log out of Jupyter from one window, you're logging out of not just that browser's session, but also the login to the Jupyter server's operating system, which kills all other open browser windows.
Your question is a bit unclear; JupyterHub is meant to support multiple users across many machines. Of course, if you use the same browser on the same machine, you get logged out too, as the browser is carrying the connection information that gets revoked.
JupyterHub is a web-based multi-user application that provides session and authentication services.
JupyterHub is typically hosted on a Unix/Linux server, and clients access it using the server's IP address and port number. Once it is accessed, the client must enter a user ID and password associated with the system users on the server (PAM authentication), which redirects to the home directory of the current user.
You can build an infrastructure by using JupyterHub, which is meant for multiple users. JupyterHub just provides the multi-user interface and PAM authentication; you have to configure security, file-access permissions, and everything else at the operating-system level, for example with shell scripts.
Normally, you start JupyterHub or Jupyter Notebook from the command line. In the same way, you can write a shell script to set up the multi-user environment.
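To make that concrete, here is a minimal sketch of how a hub is typically launched from the shell; PAM authentication against system users is the default, and the flags shown are standard JupyterHub options:

# generate a default config file (creates jupyterhub_config.py in the current directory)
jupyterhub --generate-config
# start the hub, listening on all interfaces on port 8000
jupyterhub --ip 0.0.0.0 --port 8000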

Unable to connect to UDK server in a sane manner

Some friends and I are trying to create a game using the UDK 3. Right now it's nothing special; we've got a Pawn that spawns and moves around a custom map, and it's all written over the example game that comes with the UDK. I'm trying to get a dedicated server set up so we can test our changes (it's going to be a multiplayer only game). We're all on beefy Windows machines on the same network, and the server is not being run through Steam.
I've been using the Unreal Frontend to compile and package the game. The installer works fine, and the game it installs works as well. We can set up a simple peer-to-peer multiplayer game, and that works. The problem is when I try to run it as a dedicated server from the command line. The command I type in is
UDK.exe server provinggrounds.udk?bIsLanMatch=true
This executes and brings up a second console that says the game engine has initialized, and then waits. Unfortunately, none of the other copies of the game on the network can see this server, which is a problem. Now here's where it gets crazy.
I discovered this in the "try random things to see what works" phase of troubleshooting. If I run the game as a dedicated server from the command line, then open a second instance of the game on the same machine in normal game mode and have that instance host a multiplayer match, any other instances of the game on the network will see one server, and when they connect to it they connect to the dedicated command-line server on my computer. Once they join, I can close the normal hosting game without affecting the server, but then nobody can see the server anymore.
I really don't understand what is going on here. Why can't anybody's game find the server under normal circumstances? Why is the server only visible when there is a game instance hosting a peer to peer game on the same computer? Is there a way to fix this?
Try:
UDK.exe server map.udk?listen=true?bIsLanMatch=true?dedicated=true
That is what I use to launch a dedicated server. What you were probably missing was the listen=true part.
For more details see the documentation.
EDIT 1:
As a workaround, you could force your game to connect to a given IP. In your game, open the console with Tab and type Open #SERVER_IP# (replace #SERVER_IP# with the actual IP, of course).
You can also have your game connect to a server by passing the server's IP to it as a command-line argument: UDK.exe #SERVER_IP#
EDIT 2:
Another problem might be the firewall; perhaps UDK uses different ports when run as a dedicated server. Although that's unlikely, here are the UDP ports that UDK needs opened/forwarded: 6500, 7777, 7778, 7787, 13000, and 27900.
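If the firewall does turn out to be the culprit, a hedged sketch of opening those ports on Windows with netsh (run from an admin cmd prompt; the rule name is made up for illustration):

:: allow inbound UDP on the ports UDK uses
netsh advfirewall firewall add rule name="UDK UDP" dir=in action=allow protocol=UDP localport=6500,7777,7778,7787,13000,27900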

Best way to collect logs from a remote server

I need to run some commands on some remote Solaris/Linux servers and collect their output in a log file on my local server.
Currently, I'm using a simple Expect script residing on the local server to fire the commands on the target system. I then redirect the output of the Expect script to a log file, like this:
/usr/local/bin/expect script.exp >> logfile.txt
However, this is proving to be very unreliable as the connection to the server fluctuates a lot, leading to incomplete logs and hung scripts.
Is there a better and more reliable way to go about this task?
I implemented fedorqui's answer:
Created a (shell) script that runs the required commands on the target servers.
Deployed this script to all servers.
Executed this script via expect, from my local (central) server.
Finally collected logs individually from each server after successful completion, and processed them.
The solution has been working without a glitch so far.
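For comparison, the same pattern can be done with plain ssh/scp instead of Expect, assuming key-based SSH login is available (the script and path names here are made up for illustration):

# 1) deploy the script to the target server
scp collect.sh user@target:/tmp/collect.sh
# 2) run it detached on the target so a dropped connection can't kill it
ssh user@target 'nohup sh /tmp/collect.sh > /tmp/collect.log 2>&1 &'
# 3) later, after the job has finished, fetch the log
scp user@target:/tmp/collect.log ./logs/target.log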

Amazon EC2 / RStudio: Is there a way to run a job without maintaining a connection?

I have a long-running job that I'd like to run using EC2 + RStudio. I set up the EC2 instance, then set up RStudio as a page in my web browser. I have to physically move the laptop that I use to set up the connection and run the web browser throughout the course of the day; when I do, my job gets terminated in RStudio, even though the instance is still running on the EC2 dashboard.
Is there a way to keep a job running without maintaining an active connection?
Does it have to be started / controlled via RStudio?
If you make your task a "normal" R script, executed via Rscript or littler, then you can run it from the shell ... and get to
use old-school tools like nohup, batch or at to control running in the background
use tools like screen, tmux or byobu to maintain one or multiple sessions in which you launch the jobs, and connect / disconnect / reconnect at leisure.
RStudio Server works in similar ways but AFAICT limits you to a single session per user / machine -- which makes perfect sense for interactive work but is limiting if you have a need for multiple sessions.
FWIW, I like byobu with tmux a lot for this.
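A minimal sketch of both options, assuming the long job lives in a script called longjob.R (the name is made up for illustration):

# option 1: detach with nohup; the job survives logout, output goes to a file
nohup Rscript longjob.R > longjob.out 2>&1 &
# option 2: run inside a reattachable tmux session
tmux new -s rjob    # then, inside the session: Rscript longjob.R
# detach with Ctrl-b d; reattach later with: tmux attach -t rjob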
My original concern that it needed to maintain a live connection was incorrect. It turned out the error was from running out of memory; it just coincided with being disconnected from the internet.
An instance is started from the AWS dashboard and stopped or terminated from there as well. As long as it is still running, it can be accessed from an RStudio tab by copying the public DNS into the browser's address bar and logging in again.
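The same start/stop/lookup can also be scripted with the AWS CLI, if you prefer not to use the dashboard; a hedged sketch (the instance ID is made up):

aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# look up the public DNS name to paste into the browser's address bar
aws ec2 describe-instances --instance-ids i-0123456789abcdef0 --query 'Reservations[0].Instances[0].PublicDnsName' --output text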
