HTTP requests between apps on different PCF domains

I currently have a node server deployed in PCF that periodically makes GET requests to other applications in PCF.
Some of the applications are in the same domain, and those GET requests work; however, requests to apps in a different domain time out.
To summarize:
MyApplication = NodeJS server running at MyApp.apps.dev.company.int
DevApplication = Application running at DevApp.apps.dev.company.int
ProdApplication = Application running at ProdApp.apps.prod.company.int
GET requests from MyApplication to DevApplication work,
but GET requests from MyApplication to ProdApplication do not.
GET requests from localhost work for both DevApplication and ProdApplication.
What is causing this issue, and how can I resolve it?

Are these apps deployed in the same space or in different spaces? You might be missing an ASG (application security group) if they are in different spaces. ASGs manage the egress traffic from your app and are applied at the space level.
If prod and dev are completely different foundations, then the traffic might be blocked there, and the prod app may only be reachable via some other domain, possibly fronted by an F5.
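If the apps do turn out to be in different spaces, a rough sketch of checking and binding an ASG with the cf CLI might look like the following (the group name, rules file, org/space names, and destination CIDR are placeholders, and exact flags vary between cf CLI versions):

    # List existing ASGs and where they are bound
    cf security-groups

    # asg-prod-egress.json -- example egress rule toward the prod network
    # (destination CIDR and port are placeholders; use your prod router/network range)
    # [
    #   { "protocol": "tcp", "destination": "10.10.0.0/16", "ports": "443", "description": "egress to prod apps domain" }
    # ]

    # Create the ASG and bind it to the space that runs MyApplication
    cf create-security-group prod-egress asg-prod-egress.json
    cf bind-security-group prod-egress my-org my-space

    # ASG changes only take effect after the app is restarted
    cf restart MyApplication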

Related

Why do we need a web server alongside an app server in an orchestrated containerised architecture?

Assuming I am using a framework like Flask to serve requests, I understand that the web server handles static file requests and directs any program-execution requests to the app server (example: nginx), whereas the app server can handle both static files and program execution (example: gunicorn).
It makes sense to have a web server to handle static files, caching, request redirection, and load balancing. The request first comes to the web server, which knows how to handle it and redirects any program execution to the app server.
However, consider architectures that use orchestration and containerization: there is a cluster of nodes, each node running a container, and each container contains only the app server (example: gunicorn). A request arrives at the API management/gateway layer (which has the same features as a web server, other than serving static files), gets redirected to the cluster of nodes (which does the load balancing), and eventually reaches a node whose app server (example: gunicorn) serves it.
Is there any benefit to having a web server running alongside an app server in such a configuration?
In Azure, does the API gateway play the role of a web server equivalent?
It depends. It's common to have some proxy/routing logic (e.g. URL rewriting) in the API Gateway, so that is probably why you can still have the app server and the web server together inside a container.
In Azure, API Management is a fully managed API gateway that allows you to implement caching, routing, security, API versioning, and more.
More info:
https://microservices.io/patterns/apigateway.html
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/architect-microservice-container-applications/direct-client-to-microservice-communication-versus-the-api-gateway-pattern
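For concreteness, the classic split the question describes could look roughly like this inside a single container, with nginx serving static files and proxying everything else to a gunicorn app server started with something like "gunicorn --workers 4 --bind 127.0.0.1:8000 app:app" (the paths, port, and app:app module name are assumptions, not anything prescribed above):

    server {
        listen 80;

        # Static assets are served directly by nginx, bypassing the app server
        location /static/ {
            alias /srv/app/static/;
            expires 1h;
        }

        # Everything else is handed to the gunicorn app server
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

Whether that split is still worth keeping behind an API gateway largely depends on whether the gateway already handles static content, caching, and request buffering for you.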

Tools to mock network requests

I hosted my Angular 6 and Laravel applications on an AWS EC2 instance. The Angular container is running on (or mapped to) ports 80 and 443 and served by Nginx, while the Laravel application is running in another container on (or mapped to) port 8000, also served by Nginx.
I configured the Angular app to run at https://example.com and the Laravel app at https://api.example.com.
To be clear, the containers are tasks in separate services in the same EC2 cluster, set up via CloudFormation, and there is no load balancer.
The setup works perfectly for about 97% of customers, but the remaining customers cannot get content on the site. I worked with one of those customers and realized that the Angular app (at https://example.com) loads successfully, but https://api.example.com:8000 cannot be reached.
What on earth can cause this?
Is there a way (maybe a tool) I can use to simulate different kinds of network requests, so that I can reproduce the problematic networks of the customers who cannot access the site and trace and debug the issue? Right now I am not experiencing the issue myself, which makes the problem very tricky for me to solve.

Configuring nginx web server with multiple app servers on an AWS stack

I am a DevOps guy, and presently I am running my Ruby on Rails application on an Ubuntu EC2 instance where the app and the web server reside in the same box, while we use a MySQL RDS cluster. I can see a lot of spikes due to increased traffic to the site, so I am planning to change the setup: I want to put the nginx web server on a separate instance and the web app on its own instance. But this needs a load balancer, which should reside on the nginx box, and once traffic goes up, the nginx instance can be configured to auto scale. What about the app server instance? It can be configured to auto scale, but it needs to attach itself to the web server, and the web server needs to discover the newly created app server. How can I achieve this? Kindly help me out to get this done.
Since you are using one single web server at the moment, a transition to using nginx as a static web server and proxy for a backend app server on another instance really makes sense and will give you a performance boost.
However, I am not sure you really need autoscaling. Autoscaling mostly makes sense if you need to react to fast traffic spikes. If you have a more or less continuous workload that may increase over time, it is easier to manually launch another backend server and add it to the nginx config. If this does not work for you, you can still have a look at Amazon's Elastic Load Balancing and Auto Scaling afterwards.
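As a sketch of that setup, the nginx instance could proxy to the backend app server instance(s) through an upstream block; when you manually launch another backend, you add its address here and reload nginx (the IP addresses, port, and domain below are placeholders):

    upstream rails_backend {
        server 10.0.1.10:3000;     # existing app server instance
        # server 10.0.1.11:3000;   # add new backends here as you launch them
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://rails_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

After editing the config, "nginx -s reload" picks up the new backend without dropping existing connections.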

Can Nginx be used instead of Gunicorn to manage multiple local OpenERP worker servers?

I'm currently using Nginx as a web server for OpenERP. It's used to handle SSL and cache static data.
I'm considering extending its use to also handle failover and load balancing with a second server, using the upstream module.
In the process, it occurred to me that Nginx could also do this across multiple OpenERP servers on the same machine, so I can take advantage of multiple cores. But Gunicorn seems to be the preferred tool for this.
The question is: can Nginx do a good job of handling traffic to multiple local OpenERP servers, bypassing the need for Gunicorn completely?
Let's first talk about what they both basically are.
Nginx is a pure web server that's intended for serving up static content and/or redirecting the request to another socket to handle the request.
Gunicorn is based on the pre-fork worker model. This means that there is a central master process that manages a set of worker processes. The master never knows anything about individual clients. All requests and responses are handled completely by worker processes.
If you look closely, Gunicorn is designed after Unicorn; follow the link for more detail on the differences, which shows that the nginx-plus-Unicorn model applies to Gunicorn as well.
nginx is not a "pure web server" :) It's rather a web accelerator capable of doing load balancing, caching, SSL termination, request routing AND static content. A "pure web server" would be something like Apache - historically a web server for static content, CGIs and later for mod_something.
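Coming back to the original question, a minimal sketch of nginx load balancing several local OpenERP processes via the upstream module might look like this (the ports beyond the default 8069, the worker count, the server name, and the certificate paths are assumptions; each OpenERP instance has to be started and supervised separately, since unlike Gunicorn there is no master process managing the workers):

    upstream openerp_workers {
        # one entry per local OpenERP process, each bound to its own port
        server 127.0.0.1:8069;
        server 127.0.0.1:8070;
        server 127.0.0.1:8071;
        server 127.0.0.1:8072;
    }

    server {
        listen 443 ssl;
        server_name erp.example.com;                      # placeholder
        ssl_certificate     /etc/nginx/ssl/openerp.crt;   # reuse your existing SSL setup
        ssl_certificate_key /etc/nginx/ssl/openerp.key;

        location / {
            proxy_pass http://openerp_workers;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }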

Windows service cannot access a web service

We have two servers, each containing a local application that connects to a local web service; the applications and services are identical on both servers.
One of the servers works just fine,
but the other one is just dead. I have the impression that the security configuration is different on those servers.
What could prevent an application X from connecting to a web service, given that another application Y on the same server can connect to it, and X is a Windows service?
What should I check, and what are the likely causes?
Thanks
Check if there is any firewall that might need some ports opened up.
Could there be any kind of AntiVirus or similar set up on one of the servers?
Basic troubleshooting of loosely-coupled applications means independent testing/verification of those services.
Can you access the web service locally through a different application, i.e. a web browser? If you can't reach the service through the browser, then the server configurations (at some level) are not identical.
Only after you're certain the service is reachable should you look into issues with the windows service.
