Stop NestJS server process from client - Next.js

I have a Next.js web application with a NestJS backend. From the web app I can call the server, which invokes a gRPC process and returns information to the client. How do I terminate this process while it is still running? Simply closing the browser window, or using new AbortController(), only kills the API call between client and server; it does not kill the server-side process that is still fetching data from gRPC.

If you want to shut down your NestJS application, use
process.exit()
which terminates the running Node process.
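Note that process.exit() takes the whole server down, not just the one request. If the goal is only to cancel the in-flight gRPC work for a disconnected client, one option is to wire the incoming request's close event to an AbortController and cancel the call from the abort handler. A minimal sketch, assuming a hypothetical fetchFromGrpc() that honors an AbortSignal (real @grpc/grpc-js calls expose call.cancel() instead):

```typescript
// Sketch: cancel in-flight work when the HTTP client disconnects.
// fetchFromGrpc() is a stand-in that simulates a slow gRPC call and
// honors an AbortSignal; a real client would call call.cancel().

function fetchFromGrpc(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve("grpc-result"), 5_000); // simulated slow call
    signal.addEventListener("abort", () => {
      clearTimeout(timer); // stop the pending work
      reject(new Error("cancelled"));
    });
  });
}

async function handleRequest(): Promise<string> {
  const controller = new AbortController();
  // In NestJS you would wire this to the request, e.g.
  // req.on("close", () => controller.abort());
  setTimeout(() => controller.abort(), 10); // simulate the client disconnecting
  try {
    return await fetchFromGrpc(controller.signal);
  } catch {
    return "cancelled";
  }
}

handleRequest().then((r) => console.log(r)); // prints "cancelled"
```

The timer that calls controller.abort() only simulates the client disconnecting; in a controller you would abort from the request's close handler instead.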

Related

How do I implement IPC alongside a gRPC service?

There is a server application (an ASP.NET Core gRPC service) and a client application (WPF) running on another PC.
One function of the gRPC service is to send screenshots to the client application, but the service has to run as a Windows service, so it cannot capture screenshots directly and hand them to the client application.
Question: how do I implement an "agent" application that runs on the same remote computer as the service, but inside a user session, so that it can take screenshots and pass them through the service to the client?
As I understand it, this is possible with IPC, but I can't work out the implementation details: how can the gRPC service call the agent to take a screenshot, get the result back (as a byte array), and then send it on to the client?
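One common shape for this (a sketch of the pattern only, not of the C# implementation) is to reverse the direction: the agent, running in the user session, connects out to the service over a duplex channel (a gRPC stream or a named pipe) and answers screenshot requests, so the Windows service never has to reach into the session itself. The round trip below is simulated in-process with an EventEmitter; the channel, message names, and screenshot bytes are all placeholders:

```typescript
// Sketch of the reverse-call pattern: the agent answers requests that
// arrive over a channel it opened itself. EventEmitter stands in for
// the real transport (gRPC duplex stream or named pipe).

import { EventEmitter } from "node:events";

const channel = new EventEmitter(); // placeholder for the pipe/stream

// Agent side: waits for requests and replies with "screenshot" bytes.
channel.on("capture-request", (id: number) => {
  const fakeScreenshot = Buffer.from([0x89, 0x50, 0x4e, 0x47]); // placeholder bytes
  channel.emit("capture-reply", id, fakeScreenshot);
});

// Service side: asks the agent, then forwards the bytes to the gRPC client.
function requestScreenshot(id: number): Promise<Buffer> {
  return new Promise((resolve) => {
    channel.once("capture-reply", (replyId: number, bytes: Buffer) => {
      if (replyId === id) resolve(bytes);
    });
    channel.emit("capture-request", id);
  });
}

requestScreenshot(1).then((bytes) => console.log(bytes.length)); // prints 4
```

The key design point is that the connection is initiated from the session side, which sidesteps the "service can't see the user session" problem; the service only ever correlates replies to the requests it sent.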

Why do we need a web server alongside an app server in an orchestrated, containerized architecture?

Assuming I am using a framework like Flask to serve requests, I understand that a web server (example: nginx) handles static file requests and directs any program-execution requests to the app server, whereas an app server (example: gunicorn) can handle both static files and program execution.
It makes sense to have a web server handle static files, caching, request redirection, and load balancing: the request first reaches the web server, which knows how to handle it and redirects any program execution to the app server.
However, consider an architecture that uses orchestration and containerization: a cluster of nodes, each node running a container that holds only the app server (example: gunicorn). A request arrives at the API management/gateway layer (which has the same features as a web server, other than serving static files), gets redirected to the cluster (which does the load balancing), and eventually reaches a node whose app server serves it.
Is there any benefit of having a web server running along side an app server inside such a configuration?
In Azure, does the API gateway play the role of the web server equivalent?
It depends. It's common to have some proxy/routing logic (e.g., URL rewriting) in the API gateway, which is probably why you can run the app server alone, without a web server, inside the container.
In Azure, API Management is a fully managed API gateway that lets you implement caching, routing, security, API versioning, and more.
More info:
https://microservices.io/patterns/apigateway.html
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern

CRITICAL WORKER TIMEOUT when using eventlet and Firebase in Flask on gunicorn

I am developing a Flask application that uses WebSockets (Flask-SocketIO) and Google Firebase. There's a function that retrieves user data from Firebase and sends notifications to those users. If I use flask run to start the web server, everything works fine, including the sockets and the notification method. But when I start the web server with gunicorn --worker-class eventlet -w 1 "app:create_app()", the server freezes as soon as the notification method is called, and the terminal shows the following:
[CRITICAL] WORKER TIMEOUT
exception calling callback for <Future at 0x7fb681db61d0 state=finished raised TypeError>
The full error stack is shared here
Note that I can't use multiple workers, since Flask-SocketIO doesn't support them. Thanks!

NServiceBus doesn't start picking up messages until an endpoint is "touched"

So when I run my two services locally, I can hit service A, which sends a command to service B, which picks it up and processes it. Pretty straightforward. However, when I publish these to my web server and send a request to service A, it sends the command to service B (I can see the message in service B's queue), but it won't get picked up and processed. I created an endpoint on service B that simply returns an OK response; if I call this endpoint, effectively "touching" the service, everything kicks in and messages get processed from that point on.
I figured maybe this had something to do with late compilation, so I changed the publish settings to precompile on publish, but I get the same result.
Is there a way to have the service start processing as soon as it is published? Also worth noting that both services are Web API 2.
Another option (probably more “standard”) would be to move the “handlers” into a Windows Service instead of a web application.
For this Windows Service you can leverage the NServiceBus Host which will turn a standard class library into a Windows Service for you. They have a good amount of documentation about this here: https://docs.particular.net/nservicebus/hosting/nservicebus-host/?version=Host_6
I would argue this is more stable, as you separate sending commands (the web application / Web API) from processing commands and publishing events (the NSB host). The host can sit on the web server itself, or you can put it on a different server.
Our default architecture is to run our NSB hosts on a separate server, because you scale web applications and NSB hosts differently. If you run the NSB host on the web server, a web app that gets too much traffic can take the NSB host's processing down with it. You can always start simple with one server for both, monitor it, and move things around as traffic increases.
Not sure if this is the "right way" to do things, but what I ended up doing is setting up each site to be always running and auto-initializing:
App pool set to "Always Running"
Website set preload enabled = true
Web.config has webServer entry for application initialization doAppInitAfterRestart="true"
Web Role added to server "Application Initialization"
With those things set, the deployment process is basically to publish the site and run iisreset. If there are better options, I'm still looking :)
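For reference, the Web.config part of that checklist looks roughly like this (a sketch: it assumes the IIS Application Initialization module is installed, and /warmup is a placeholder path, not something from the original setup):

```xml
<configuration>
  <system.webServer>
    <!-- Requires the IIS Application Initialization module -->
    <applicationInitialization doAppInitAfterRestart="true">
      <!-- IIS fires this request itself after the app pool (re)starts -->
      <add initializationPage="/warmup" />
    </applicationInitialization>
  </system.webServer>
</configuration>
```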

SignalR connection disconnects upon OnBeforeUnload

I have a self-hosted C# server application using SignalR.
My client is an SPA (Angular, JavaScript, HTML5).
My application has a termination process that should run when the user leaves the page (it runs when OnBeforeUnload fires and the user confirms leaving...).
The termination process contains some server calls.
Until today I worked with WebSockets and everything worked well. I'm now trying to deploy my application on a server machine that doesn't support WebSockets, so I'm using SignalR instead.
My problem is that SignalR closes/disconnects the connection when "OnBeforeUnload" happens. This means that I can't perform the termination process.
I tried to work around this by creating a new connection, but it also fails to open.
Is there a way to overcome the above problem?
thanks,
R.
You can use the connection lifetime event OnDisconnected instead of making an explicit call to the server. Using OnBeforeUnload for asynchronous code is not a good idea in general.
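If some state really must reach the server at unload time, browsers also provide navigator.sendBeacon, which is designed to outlive the page. A client-side sketch (the /api/terminate endpoint and the payload shape are made up for illustration; the globalThis casts just let the snippet compile outside a browser):

```typescript
// Build the final payload the server should receive when the page closes.
function buildTerminationPayload(connectionId: string): string {
  return JSON.stringify({ connectionId, reason: "page-unload" });
}

// "pagehide" fires more reliably than "beforeunload" in modern browsers.
const w = (globalThis as any).window;
if (w?.addEventListener) {
  w.addEventListener("pagehide", () => {
    // sendBeacon queues the POST and lets the page die without waiting.
    (globalThis as any).navigator.sendBeacon(
      "/api/terminate",
      buildTerminationPayload("abc123"), // connection id is a placeholder
    );
  });
}
```

Unlike an async call made from OnBeforeUnload, the beacon is queued by the browser itself, so it survives the page teardown that kills the SignalR connection.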
