I'm working on a project where I have to use WebSocket APIs in Node.js to push real-time updates such as open orders and pricing changes. Since my front end is React and I need to create a subdomain like api.example.com, I was wondering whether Apache2 or Nginx is the better platform for implementing a WebSocket server. If anyone knows, it would be helpful.
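For context, what I'm picturing is something like this Nginx config in front of the Node.js WebSocket server (the hostname and port 3000 are placeholders, and I'm not sure this is the right approach):

```nginx
# Sketch only: reverse-proxy WebSocket traffic on api.example.com
# to a local Node.js server. Hostname and port are placeholders.
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        # Required for the WebSocket handshake (HTTP Upgrade)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        # Keep long-lived connections from timing out too quickly
        proxy_read_timeout 3600s;
    }
}
```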
We are creating an application following a microservice architecture using JHipster, and someone suggested putting Nginx in front of the JHipster gateway so that user access goes through Nginx instead of hitting the gateway directly. My question: is there any benefit to doing this? From my perspective we are just proxying the requests twice, nothing else. Or am I missing something?
It could be useful for:
load balancing multiple instances of your gateway (see the sketch after this list)
restricting external access to some URLs if you have internal access to your gateway
blue/green deployments
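For the first two cases, a minimal sketch of what that Nginx layer could look like (hostnames, ports and the restricted path are made up for illustration):

```nginx
# Rough sketch: Nginx load balancing two JHipster gateway instances
# and blocking an internal-only URL at the proxy layer.
upstream jhipster_gateway {
    server gateway-1.internal:8080;
    server gateway-2.internal:8080;
}

server {
    listen 80;
    server_name app.example.com;

    # Example of restricting external access to some URLs
    location /management/ {
        deny all;
    }

    location / {
        proxy_pass http://jhipster_gateway;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```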
Is it possible to enable HTTP/2 in Cloud Foundry using the NGINX buildpack (or any other)? I understand that GoRouter does not support HTTP/2, but is there a workaround for this?
My original requirement is to serve a large JS file from Cloud Foundry, so I'm looking into enabling HTTP/2 to improve performance.
Thanks,
Not exactly the same question, but the solution here applies: https://stackoverflow.com/a/55552398/1585136.
If you need public clients (i.e. clients outside CF) to connect to your app, you need to use TCP routing. If your provider doesn't enable this by default, find another provider (see this list of public providers; hint: Pivotal Web Services will provide TCP routes upon request) or self-host.
If you only need to use HTTP/2 and/or gRPC between apps running on CF, you can use the container-to-container network. When you talk app to app there are no restrictions (so long as you properly open the required ports). You can use TCP, UDP and any protocol built on top of those. There are some details about how this works here.
You'll also need the Nginx http_v2_module. This is a very recent addition and isn't yet in a release of the Nginx or Staticfile buildpack as I write this. It should be, if everything goes right, in the next release though. That should be Nginx buildpack 1.1.10+ and Staticfile buildpack 1.5.8+.
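Once a buildpack release ships with that module, enabling HTTP/2 should just be a matter of the listen directive in the nginx.conf you give the buildpack. A rough sketch, assuming a TCP route so TLS terminates in the app; the certificate paths are placeholders:

```nginx
# Rough sketch for the Nginx buildpack's nginx.conf, assuming a build
# that includes http_v2_module. {{port}} is the buildpack's port
# template variable; cert paths are placeholders.
events {}

http {
    server {
        listen {{port}} ssl http2;

        ssl_certificate     /app/certs/server.crt;
        ssl_certificate_key /app/certs/server.key;

        root /app/public;
    }
}
```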
My original requirement is to serve a large JS file from Cloud Foundry, so I'm looking into enabling HTTP/2 to improve performance.
It might help, it might not; your mileage may vary. HTTP/2 isn't a silver bullet. This article explains it well:
https://www.nginx.com/blog/http2-module-nginx/
I'm trying to set up SSL on my WordPress site.
I have an EC2 instance running WordPress on Nginx and Ubuntu, with the database running on RDS.
I've launched an Application Load Balancer with listeners on ports 80 and 443 and attached the SSL certificate that I got via ACM. I've set my targets to point to the EC2 instance I am using.
At this point the how-to guides and information stop. Apparently that's all there is to it and it should now all be working. However, it's not: I'm getting connection-refused errors when I add https to my site's URL.
When I put my URL into https://www.sslchecker.com/sslchecker, I'm told that no certificates are found.
So clearly I need to do something more to get this working. Can anyone point me to the next step?
Using the ELB and ACM is the way to go here. It sounds like you might be using the wrong type of ELB, though: you mentioned an Application Load Balancer, but you should use a Classic Load Balancer. Also make sure your security groups are set up correctly to allow your ELB to talk to the EC2 instance.
You didn't mention Route53, but I assume you have the DNS entry set up to point at the ELB as well.
Share more and I will help more. Good luck.
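One more thing to check once traffic actually reaches the instance: the load balancer terminates TLS and forwards plain HTTP to the instance, so WordPress has to pick up the X-Forwarded-Proto header or it can end up in redirect loops. A sketch of the instance-side Nginx config (the document root and PHP-FPM socket path are assumptions):

```nginx
# Sketch of the instance-side config: the load balancer terminates TLS
# and forwards plain HTTP to port 80 here. Paths are assumptions.

# Map the header the load balancer sets, so PHP/WordPress can tell the
# original request was HTTPS (avoids redirect loops).
map $http_x_forwarded_proto $fastcgi_https {
    https   on;
    default '';
}

server {
    listen 80;
    server_name www.example.com;

    root /var/www/wordpress;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param HTTPS $fastcgi_https;
        fastcgi_pass unix:/run/php/php-fpm.sock;
    }
}
```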
First, let me explain why. I've had some rough luck with third-party Meteor hosting providers, but I'd really rather not run my own servers (I have a Meteor app running with SSL on DigitalOcean, so I know how to do that; I just would rather have dedicated professionals run as much of my infrastructure as possible). From what I can see, meteor.com hosting is wonderful, with the caveat of not being able to have a custom domain with SSL.
So, would it make sense to put up an Nginx server that just proxies https://example.com to https://example.meteor.com? For starters, would that work, and if it did, would it be performant?
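For concreteness, this is roughly what I have in mind (domains and certificate paths are placeholders, and I'm not certain the WebSocket part is handled correctly):

```nginx
# Rough sketch of the idea: terminate SSL for example.com here and
# proxy everything to the meteor.com-hosted app. Paths are placeholders.
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;

    location / {
        proxy_pass https://example.meteor.com;
        # The upstream hosts many apps, so the Host header must match
        proxy_set_header Host example.meteor.com;
        # Send SNI to the upstream when connecting over TLS
        proxy_ssl_server_name on;
        # Meteor uses WebSockets (sockjs), so allow the Upgrade handshake
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```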
For your information, Meteor has Galaxy (managed "meteor deploy" to your own servers) on its roadmap, listed under "Under consideration for 1.1+", and it should be a perfect choice for you. Here is their Trello:
This is MDG's commercial product -- a managed cloud platform for deploying Meteor apps. You have control of the underlying hardware (you own the servers or the EC2 instances, and Galaxy manages them for you).
General Availability for Galaxy will be sometime after 1.0, since we want to focus on Meteor 1.0 and get it out as quickly as possible.
So in the meantime, if you just care about using your own domain, you can use something like domain name forwarding, which automatically directs your domain name's visitors to a different website. Masking prevents visitors from seeing your domain name forwarding by keeping your domain name in the browser's address bar.
Also, in your case you don't necessarily need to add SSL, as Meteor already provides a certificate when you deploy your app. Just try entering the URL in your browser as https://yourappnamehere.meteor.com and you can see an SSL certificate is already in place.
I am trying to improve the user experience while a backend service is down for maintenance (shut down manually).
We do have a frontend web proxy, which happens to be Nginx, but it could also be something else like a NetScaler instance. An important note is that the frontend proxy runs on a different machine than the backend application.
Now, the backend service takes a long time to start, more than 10 minutes in some cases.
Note: I am asking this question on StackOverflow, as opposed to ServerFault, because providing a solution for this problem is likely to require writing some bash code inside the daemon startup script.
What we want to achieve:
service mydaemon stop should enable the maintenance page on the frontend proxy
service mydaemon start should disable the maintenance page on the frontend proxy
In the past we used to create a maintenance.html page and had Nginx configured to check for the existence of this page using try_files before falling back to the backend.
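Roughly, that old setup looked like the following (paths and hostnames from memory):

```nginx
# Roughly our old approach: if maintenance.html exists on disk, serve
# it for every request; otherwise proxy to the backend.
server {
    listen 80;
    server_name app.example.com;
    root /var/www/maintenance;

    location / {
        try_files /maintenance.html @backend;
    }

    location @backend {
        proxy_pass http://backend.internal:8080;
        proxy_set_header Host $host;
    }
}
```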
Still, because we decided to move Nginx to another machine, we can no longer do this, and doing it over SSH raises security concerns.
We already considered writing this file to an NFS drive that would be accessible by both machines, but even this solution does not scale for a service that has a lot of traffic: Nginx will end up checking for the file on every request, slowing down responses quite a lot.
We are looking for another solution for this problem, one that would ideally be more flexible.
As a note, we still want to be able to trigger this behaviour from inside the daemon script of the backend application, so if the backend application stops responding for other reasons, we expect to see the same from the frontend.