Symfony: multiple Mercure subscriptions

I've developed an app based on this tutorial: https://symfonycasts.com/screencast/turbo
The app broadcasts HTML updates on several topics, i.e. chat-messages, notifications, chat-channels.
Every page of my app subscribes to these topics, and everything works beautifully on a local server, but as soon as I deploy the app online, every AJAX call takes a very long time and the app gets stuck.
When I disable the Mercure "subscribing" part, everything works perfectly, but I really don't understand why...
Here is my setup:
Debian 11 VPS
PHP8.1
Mercure 0.14.2
Nginx with a reverse proxy configured in front of the Mercure hub
I've tried configuring Apache instead of Nginx, but I get the same problem.
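For reference, this is the kind of check that can be run against the hub through the reverse proxy; the domain, hub path, and topic below are placeholders for my own values:

# Subscribe through the reverse proxy; -N disables curl's buffering so
# events should appear the moment the hub pushes them. If this call hangs
# or buffers, the proxy in front of the hub is the first suspect.
curl -N 'https://example.com/.well-known/mercure?topic=chat-messages'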

Related

WebSocket in Node.js: which is better, Apache2 or Nginx, for an Ubuntu instance?

I'm working on a project where I have to use WebSocket APIs in Node.js to update data in real time, such as open orders, pricing updates, and other things. Since my front end is React and I need to create a subdomain like api.example.com, I was wondering whether Apache2 or Nginx is the better platform for proxying the WebSocket server. If anyone knows, it would be helpful.
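Whichever server you pick, it is worth verifying that the reverse proxy actually forwards the WebSocket upgrade to the Node.js process; here is a rough handshake check with curl, where the hostname and path are assumed placeholders:

# Request a WebSocket upgrade through the proxy; a "101 Switching Protocols"
# response means the Upgrade/Connection headers are being passed through.
curl -i -N --http1.1 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: SGVsbG8sIHdvcmxkIQ==" \
  http://api.example.com/socket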

Cloudflare Timeout Issue with Git pull and Curl Request in AWS EC2 instance

I have a WordPress project running on an AWS EC2 instance. My DNS is managed in Cloudflare and the record is proxied. When I first set up the project on the AWS instance, git pull/push and all the APIs worked fine for a few days. But suddenly, after some time, I could no longer pull on the server because of a timeout issue.
Also, when I submit the form from WordPress, it should call the API, but it gives me a 504 Gateway Timeout error.
So every time I need to pull, I have to reboot the server; then everything works fine for about 5 minutes, and then it gives me the same error again.
What should I do with Cloudflare? As far as I know, it must be something on the Cloudflare side, as I have tried everything on the server side for this kind of problem.
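One way to narrow this down is to bypass Cloudflare entirely and call the origin directly; in the sketch below the IP, domain, and endpoint are placeholders for your EC2 address and the API the form calls:

# Force curl to resolve the domain to the EC2 origin IP, skipping Cloudflare,
# so a hang here points at the server rather than at the proxy layer.
# -k accepts the origin certificate in case it is a Cloudflare origin cert.
curl -sv -o /dev/null -k --resolve example.com:443:203.0.113.10 \
  https://example.com/wp-json/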

Is it possible to get the Mercure protocol working on Heroku using API Platform?

I deployed an API Platform API and client using Heroku, but it seems that Mercure is not working out of the box; I think my application may be missing some configuration.
The only thing I configured is MERCURE_SUBSCRIBE_URL=http://my-random-herokuapp-name-generated.herokuapp.com/hub.
In production I'm getting a 404 error on my hub address, and locally I get an answer saying I didn't provide a topic (which makes sense, because I just requested the address as a test without providing any parameter).
In the local environment, the full API Platform package ships with Docker running a Mercure server, so I suspect the answer is that Heroku doesn't support Mercure out of the box, but it's not very clear to me.
So basically I am getting a 404 Not Found error on the hub address instead of a 200.
To use Mercure with API Platform, you also have to deploy a Mercure hub. The hub can be downloaded from https://mercure.rocks. Then you can deploy it easily on Heroku as a Go application.
Your Procfile should look like:
web: ADDR=$PORT ./mercure
For everyone: even though it is insecure (see the Mercure docs to learn more: https://mercure.rocks/docs/hub/install), this Procfile just works out of the box with Heroku:
web: ADDR=:$PORT ./mercure --jwt-key='!ChangeMe!' --debug --allow-anonymous --cors-allowed-origins='*' --publish-allowed-origins='*'
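The Symfony/API Platform app then has to be pointed at that hub as well. The variable names below follow the older API Platform templates and the JWT value is a placeholder, so treat this as an assumed example rather than the exact configuration:

# Point the API and client at the Heroku-hosted hub (app name, variable
# names, and the JWT are assumptions -- match them to your own setup).
heroku config:set --app my-api-platform-app \
  MERCURE_SUBSCRIBE_URL=http://my-random-herokuapp-name-generated.herokuapp.com/hub \
  MERCURE_PUBLISH_URL=http://my-random-herokuapp-name-generated.herokuapp.com/hub \
  MERCURE_JWT_TOKEN='<a JWT signed with the same key as --jwt-key>'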

AWS Toolkit Beanstalk deploy on Visual Studio 2017 - 500 error

I've recently attempted to deploy a simple .NET Core instance on AWS using the toolkit. Everything suggests it deployed correctly, and the security groups are set correctly...
Yet I can't RDP to the server or view the ASP.NET Core web pages; instead I get a 500 error.
For those more experienced, I'm wondering what kind of troubleshooting is available.
I resolved the RDP issue by modifying the security settings in the EC2 dashboard: find the instance that was set up by Beanstalk, then on the EC2 (not ELB) service page, follow the Security Group links and View Inbound Rules. If you have the most common problem, Beanstalk created your instance and opened port 80 (HTTP) and port 22 (SSH), but nothing else. Edit the security group's inbound rules and change the port 22 (SSH) rule to RDP (port 3389); then you should be able to connect. Note: I restricted the source to My IP, which means you can only RDP from your "home" network.
Once you can RDP into the machine, you should be able to get to the logs (inetpub/logs).
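If you prefer the command line over the console, the same inbound rule can be added with the AWS CLI; the security group ID and source IP below are placeholders:

# Allow RDP (TCP 3389) from a single IP on the security group that
# Beanstalk attached to the EC2 instance (IDs below are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3389 \
  --cidr 198.51.100.7/32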

How to enable a maintenance page on the frontend server while you are restarting the backend service?

I am trying to improve the user experience while a backend service is down for maintenance (shut down manually).
We do have a frontend web proxy, which happens to be Nginx but could just as well be something else, like a NetScaler instance. An important note is that the frontend proxy runs on a different machine than the backend application.
The backend service takes a long time to start, more than 10 minutes in some cases.
Note: I am asking this question on Stack Overflow, as opposed to Server Fault, because solving this problem is likely to require writing some bash code inside the daemon startup script.
What we want to achive:
service mydaemon stop should enable the maintenance page on the frontend proxy
service mydaemon start should disable the maintenance page on the frontend proxy
In the past we used to create a maintenance.html page and had Nginx check for the existence of this page with try_files before falling back to the backend.
However, because we decided to move Nginx to another machine, we can no longer do this, and doing it over SSH raises security concerns.
We also considered writing this file to an NFS share accessible by both machines, but even that solution does not scale for a service with a lot of traffic: Nginx would end up checking the file for every request, slowing down responses quite a lot.
We are looking for another solution to this problem, ideally a more flexible one.
As a note, we still want to be able to trigger this behaviour from inside the daemon script of the backend application; and if the backend application stops responding for other reasons, we expect to see the same behaviour from the frontend.
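To make the intent concrete, here is a rough sketch of the hooks we would add to the init script; toggle_maintenance is a hypothetical helper standing in for whatever remote mechanism ends up being used, which is exactly the part we are asking about:

#!/bin/sh
# Sketch only: toggle_maintenance and do_start/do_stop are placeholders for
# the remote toggle mechanism and the existing daemon start/stop logic.
case "$1" in
  stop)
    toggle_maintenance on    # show the maintenance page before going down
    do_stop
    ;;
  start)
    do_start                 # may take 10+ minutes
    toggle_maintenance off   # hide the maintenance page once the app is up
    ;;
esac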
