The Littlest JupyterHub: the server is shutting down often - jupyter-notebook

I have installed The Littlest JupyterHub following the tutorial on a Google Cloud Compute Engine instance. However, when I work on it for more than 1 or 2 hours, the server shuts down and I have to reconnect and relaunch the server. Is there an option I should set to avoid this? I don't see anything very clear in the logs. Thanks for your help.
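A hedged note for anyone debugging the same thing: The Littlest JupyterHub runs the hub and each user's server as systemd units, so (assuming a default TLJH install) the relevant logs are usually easier to read via journalctl than from inside the notebook, for example:
sudo journalctl -u jupyterhub --since "2 hours ago"
sudo journalctl -u jupyter-<username> --since "2 hours ago"
sudo tljh-config show
Here jupyter-<username> is a placeholder for your own user's server unit; the reason for the shutdown (for example the hub culling an inactive server, or the VM itself stopping) may show up in those logs even if the notebook UI shows nothing.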

Related

Stripe CLI on Debian server, how do I make it "listen" to new requests in the background and continue using the console?

I want to use the Stripe CLI and webhook events on my Debian (10.1) server. I've managed to get everything working, but my problem is that when I run:
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
I can't use the console anymore, because it is listening for incoming events, and I still need the console. The only option shown is ^C to quit, but I need the CLI listener to keep running at all times while I do other things at the same time.
On my local machine/editor I can open multiple sessions: run the listen command in one terminal and use another terminal session to keep interacting with the system. But I don't know how to do that on Debian yet. It would be great if the listen function could just run in the background so I could continue with what I need to do without stopping the listener. My next idea was to tunnel via SSH to the server, but I'm unsure whether that would solve my problem. Wouldn't that mean that my computer at home running that session would need to be running at all times? I'm lost here...
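For what it's worth, a rough sketch of the sort of thing I mean, assuming either tmux or nohup is available on the droplet:
tmux new -s stripe
stripe listen --forward-to localhost:8000/foo/webhooks/stripe/
(detach with Ctrl-b d, reattach later with tmux attach -t stripe)
or, alternatively:
nohup stripe listen --forward-to localhost:8000/foo/webhooks/stripe/ > stripe-listen.log 2>&1 &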
By the way, the server is a droplet on DigitalOcean, if that matters... which I don't think it does.
Please let me know if anything is unclear.
UPDATE/SOLVED:
I misunderstood my problem; the Stripe CLI is just for local testing. Once a Stripe integration is in production, Stripe's servers send the requests to my server/endpoints directly.
If you are wondering about this or want to read more about how it works in production, start here: https://stripe.com/docs/webhooks/go-live
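A hedged sketch for anyone who lands here: in production you don't run the CLI listener at all; you register your endpoint with Stripe (via the Dashboard or the API) and Stripe calls it directly. Something along these lines against the API should create one; the URL and event name are placeholders and the key has to be your own secret key:
curl https://api.stripe.com/v1/webhook_endpoints \
  -u "sk_live_xxx:" \
  -d url="https://example.com/foo/webhooks/stripe/" \
  -d "enabled_events[]=charge.succeeded"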

How do I resolve an "Arg_NullReferenceException" error when trying to connect through CheckPoint VPN?

I use CheckPoint VPN to log in to my workplace's servers to work remotely. The VPN has been working (mostly) fine all year, and I haven't changed any of the settings, but this morning, when I tried to log in, it gave me an "Arg_NullReferenceException." I can't seem to find anything on this particular error on Google.
I have tried restarting my computer, because it's not the first issue I've had with CheckPoint VPN (though it is the first time I've seen that error message), and a restart usually resolves whatever issue I'm having. I've also tried creating a new connection with the same settings, but I'm getting the same error with that one, too.
I'm not entirely sure what other information I would need to provide. I'm also not sure if it's a problem on my end, or on the company servers. I have already emailed tech support, but I thought I should be thorough.
This is a known issue. I have been jumping through hoops trying to get the Capsule client to work. Raise a ticket with TAC if you have support. If not, you can download the E86 Endpoint Connect client and run it. That has been my workaround for this issue.
They just issued an update to Capsule via the Microsoft Store. It seems one of the recent Windows security updates broke the L2TP protocol within Windows.
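If you want to check that theory on your own machine, a rough sketch (run in an elevated command prompt; the KB number is deliberately left as a placeholder, so match it to whichever recent security update corresponds to the L2TP breakage on your build before uninstalling anything):
wmic qfe list brief /format:table
wusa /uninstall /kb:<KB-number>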

How to stop a Perforce UNIX server from generating thousands of IDLE processes

I'm asking this question because we have run out of ideas on how to handle the current situation of our Perforce versioning server.
The Server
The server is hosted on Scaleway on a bare-metal machine with two SSDs under the hood (so we know it is not a hardware issue).
We are currently using the free license of Perforce to evaluate it.
p4 info yields the following:
The Problem
We are using Perforce on a UNIX server to version our Unreal Engine 4 project. Lately we discovered that the server had stockpiled 2771 processes, around 80% of which are p4d processes. We suspect these IDLE connections/processes are swamping the server and are the root of the connectivity issues we encounter at the office.
We enabled monitoring to keep an eye on RUNNING and IDLE processes:
p4 configure set monitoring=2
When we now display the monitored processes, we see IDLE ones that have been running for more than one hour:
p4 monitor show
We already tried disabling keepalive connections with:
p4 configure set net.keepalive.disable=1
And we see the following, which has been going on for a while:
The Question
Now the questions I want to ask are:
Has anybody else ever encountered this behaviour with a Perforce server on UNIX?
Does anybody know how we can tell the server to discard IDLE connections?
EDIT
So after some tracking we discovered that the proxy our office network sits behind is causing the problems and for some reason doesn't allow the connections to close. Does anyone have any clues on how to get around this issue?
Based on the monitor output, it appears that these clients are opening a bunch of connections and holding them open, basically DoSing the server. You could go through and kill the pids on the server side, but this sounds like a bug in the client that should be raised with Perforce technical support.
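If you do want to clear them out by hand while you chase the client bug, something along these lines should work; this is only a sketch and assumes monitoring is enabled as above and that you have super access (p4 monitor terminate also only affects processes that have been running for at least ten seconds):
p4 monitor show
p4 monitor terminate <pid>
kill <pid>
where <pid> is taken from the monitor output, and the OS-level kill is a last resort against the corresponding p4d child process.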

XMRig Docker container using 100% CPU after testing ShinyProxy software. Why?

Unless someone proves otherwise: after installing ShinyProxy (from ShinyProxy.io), which is a well-documented piece of software, the machine started a Docker image that runs XMRig, which takes 100% of the CPU and might be used for bitcoin mining. Below are some screenshots. If anyone has a similar problem, please let us know.
The first thing is to ensure that the Docker daemon API is not reachable from the outside world. Scans are performed all day long to track down open Docker daemon API services and launch Docker instances from there (a quick way to check is sketched below).
Second, as this does not relate to a software issue but to a suspected breach, I suggest we close this topic and continue the thread via mail. You can reach OA security support at itsupport.at.openanalytics.eu.
Could you send an md5sum of the deployed jar file to the above-mentioned e-mail address?
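For anyone checking the first point above, a quick sketch; ports 2375/2376 and the jar path are the usual defaults and may differ on your install:
ss -tlnp | grep -E ':2375|:2376'
curl -s http://YOUR_SERVER_IP:2375/version
md5sum /opt/shinyproxy/shinyproxy.jar
The grep should return nothing, the curl (run from a machine outside the host) should be refused or time out, and the md5sum path should point at the jar you actually deployed.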

MSMQ Inconsistent State After Restart

I'm seeing a really strange error that I'm having a difficult time tracking down. I think it's related to my configuration of Rhino ESB, though I'm not sure if RSB is actually causing it, so I figured I'd ask and see if anyone else has come across this in any other usages of MSMQ.
I'm using RSB as a client in a web app (ASP.NET; the client runs in the background). The client talks to a Windows service via the MSMQ binding for RSB. Restarting the service never appears to have an effect on MSMQ, and neither does restarting IIS by hand. However, whenever I actually restart the computer itself, MSMQ always refuses to start back up, claiming that a "queue is in an inconsistent state". Attempting to start MSMQ manually results in the same error, effectively rendering the MSMQ install completely useless. The only way to solve it is to remove and then reinstall MSMQ.
The only information I've found via the almighty Google are references to a problem in MSMQ 2.0 (this problem is occurring in MSMQ 4.0). I've verified that Dispose is being called on the bus at shutdown, in both the service and the web site.
Does anyone have any idea why this could be occurring? Thanks!
I faced the same issue on Windows Server 2008 (a virtual machine), although the environment was not related to Rhino tools.
The error in the event log:
"The Message Queuing service cannot start because a queue is in an inconsistent state. For more information, see Microsoft Knowledge Base article 827493 at support.microsoft.com."
As Roy pointed out, this was happening every 2-3 days. Each time, we would follow the steps below to recover instead of re-installing MSMQ.
1) Stop all applications and services that use MSMQ.
2) Kill mqsvc.exe from the Task Manager.
3) Go to C:\Windows\System32\msmq\storage and delete any .mq files.
4) Start the MSMQ service.
5) Start your application.
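For reference, those steps translate roughly to the following console commands, run elevated; the service name MSMQ and the storage path are the defaults and may differ on your machine:
net stop msmq
taskkill /f /im mqsvc.exe
del /q C:\Windows\System32\msmq\storage\*.mq
net start msmq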
In my scenario I was able to fix the "queue is in an inconsistent state" error that appeared after an MSMQ service restart.
It turns out the computer name was too long, so changing the computer name to one with fewer than 15 characters fixed the issue.
My team is experiencing a similar issue, with MSMQ getting called by NSB 2.5. The issue came up recently after Infrastructure moved our VM to another physical server and, for some reason, lowered the available RAM. We think the issue may be memory-related.
EDIT
After a week of no more issues with this, I can confidently say that raising the RAM on the server solved our MSMQ "inconsistent state" issue. Mind you, we did have to re-install MSMQ first, but the issue never came back, and before the RAM upgrade it popped up every 2 days.
Regularly on Windows Server 2008 R2, MSMQ cannot start after a reboot.
The two regular issues for me are:
"The Message Queuing service cannot start because a queue is in an inconsistent state"
and
"The dependency service does not exist or has been marked for deletion"
Sometimes the following has helped (although we are seeking a more solid answer):
Rename the msmq folder to msmq_old
net stop wuauserv
net stop bits
Delete the "%windir%\SoftwareDistribution" directory
Reboot
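Roughly, as console commands, assuming the msmq folder in question is the one under %windir%\System32 and that the Message Queuing service is already stopped (adjust the paths if yours differ):
ren %windir%\System32\msmq msmq_old
net stop wuauserv
net stop bits
rmdir /s /q %windir%\SoftwareDistribution
shutdown /r /t 0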
This has occurred 5 times this year, and each time it took some variation of the above, with plenty of reboots.
Sometimes we resort to Remove Feature / Add Feature; however, you can get yourself into a loop: as the machine boots up, a rollback occurs in the Windows Update service, so the feature is never uninstalled and the problem is never repaired.
Following the steps above can help with that.
