I have a frontend instance (Angular app on nginx), which proxies calls to backend under a specific domain (let's say backend-app). Everything's easy when there is only one instance of both backend and frontend - I name the Service backend-app and DNS resolves to the correct backend Deployment.
Let's say I have another version of the backend which I would like to test before merging to master. As the frontend's nginx configuration is hardcoded to proxy to backend-app, creating another Service under the same name for a newer version of the backend doesn't work.
I considered these options:
Making an environment variable and substituting the domain name in the nginx proxy configuration at runtime. This way I could be flexible about where I want to route frontend calls. The downside, as far as I have investigated, is that this approach defeats the purpose of self-containment: it becomes ambiguous which backend is the frontend's client, and this type of configuration is prone to errors.
Creating a different namespace every time I want to test things. While this allows spinning up the whole stack without any problem or conflict, it seems like a huge overhead to create a namespace just to test something once.
Having some fancy configuration combining labels and selectors. I couldn't come up with or find a way to do it.
Any other opinions/suggestions you might have?
Try this approach:
Add the label name: backend-1 to the backend1 pods.
Add the label name: backend-2 to the backend2 pods.
Create a Service whose selector matches name: backend-1.
To test against the other backend, say backend2, all you have to do is edit the Service YAML file and update the selector. You can toggle between backend1 and backend2 this way.
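As a sketch, assuming the Service keeps the name backend-app the frontend already proxies to (ports are made up, adjust to your setup), the Service could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-app
spec:
  selector:
    name: backend-1   # change to backend-2 to route traffic to the test version
  ports:
    - port: 80
      targetPort: 8080
```

Switching the selector repoints the same DNS name, so the frontend's hardcoded proxy target keeps working.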
Are you using OpenShift? If so, you can split load between services by percentage using a Route.
Check blue/green and canary deployment options for more details.
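As a sketch of the weighted-Route idea (service names and weights are made up), an OpenShift Route can send a percentage of traffic to each backend:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: backend-app
spec:
  to:
    kind: Service
    name: backend-1
    weight: 90
  alternateBackends:
    - kind: Service
      name: backend-2
      weight: 10
```

Shifting the weights gradually gives you a canary rollout; flipping 100/0 to 0/100 gives you blue/green.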
I know that Next.js can do SSR.
I have questions about production.
In my experience (without SSR), the frontend builds static files and hands the folder to the backend to integrate, so there is only one server.
I want to know: if we want to use SSR with Next.js (not a static site),
Do we need to host two servers? One to host the backend (Node, Java, …) and another to host the frontend (Next.js)?
If I use Node.js as the backend language, can I write all the APIs in Next.js? (I mean both frontend and backend code use Next.js, so that there is only one server.)
If the answer to question one is yes, I see the documentation uses next start to host the server; is it strong enough to host many users?
Do we need to host two servers? One to host the backend (Node, Java, …) and another to host the frontend (Next.js)?
In most cases you would have a single server producing the SSR as well as rendering the markup required for the client. The associated JavaScript files that only the browser needs can be served from an asset server (e.g. an S3 bucket). You would front the whole thing with a CDN so your server does not receive all public requests.
If I use Node.js as the backend language, can I write all the APIs in Next.js? (I mean both frontend and backend code use Next.js, so that there is only one server.)
Yes, for simple use cases you can check out the API routes feature that Next.js ships with: https://nextjs.org/docs/api-routes/introduction
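A minimal sketch of such an API route (the file path follows Next.js's pages/api convention; the greeting payload is invented). The mock response object below only illustrates how Next.js invokes the handler, it is not part of a real app:

```javascript
// pages/api/hello.js -- a Next.js API route: runs on the same server as the pages.
function handler(req, res) {
  res.status(200).json({ message: "hello from the same Next.js server" });
}

module.exports = handler;

// Tiny mock of Next's res object, for illustration only:
const res = {
  statusCode: null,
  body: null,
  status(code) { this.statusCode = code; return this; },
  json(obj) { this.body = obj; return this; },
};
handler({ method: "GET" }, res);
console.log(res.statusCode, res.body.message); // 200 hello from the same Next.js server
```

In a real project you would just export the handler; Next.js routes GET /api/hello to it automatically, so the frontend and the API share one server.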
If the answer to question one is yes, I see the documentation uses next start to host the server; is it strong enough to host many users?
You would use next build and next start. With its latest optimizations Next.js adds static site generation (SSG), sorry, one more confusing term, but this lets your backend Node.js app receive far fewer requests and be smart about serving repetitive requests. However, even with these abilities you should front the whole thing with a CDN to ensure high availability and low operating costs.
We are trying to implement our product with SOA and currently using IBM Integration Bus v9 as our ESB.
We have 3 different environments (Sets of servers used for different purposes) that we deploy our product on:
development: Used during the testing and development process
customer test: More stable builds for customer's approval before going for the main release
main/production: This is the final thing.
The challenge we have encountered: Setting Base URL for HTTP nodes of our message flows for different environments; without compromising the DRY principle!
It seems that it's only possible to set the whole URL at once in HTTP Request nodes with the mqsiapplybaroverride command. The problem is that multiple resources can be exposed from a single server and thus have a common base URL.
Using UDPs seems a promising approach. We can store the base URL for each of our services in a UDP and build the HTTP request URL in a Compute node just before the HTTP Request node, then override the UDPs with mqsiapplybaroverride. The problem? UDPs don't have a scope wider than a single message flow, so every time I want to call a resource from a server I have to define the UDP in that message flow, or the BAR override won't affect the base URL for that flow. This leads to base URLs being repeated in each message flow, which violates DRY.
This should be a common problem in a typical SOA application... So is there any better way to solve it? Anything like JNDI feature in typical Java EE Containers?
IIB v10.0.0.6 seems to have introduced a RestRequest node which provides Base URL setting capability... Unfortunately, we don't have that luxury for the time being.
You can use a user-defined configurable service to achieve this.
You can read the URL from the configurable service and set it using a Java Compute node, or a Mapping node with custom Java.
A good solution is to store the base URL in a database per environment and set the value as below:
SET OutputLocalEnvironment.Destination.REST.Request.BaseURL = GetCachedOrFromDB('custom_service_baseUrl');
GetCachedOrFromDB -> a function you define that returns the value from a cache, falling back to the database when it is not cached.
'custom_service_baseUrl' -> a property key in a settings table maintained per environment.
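A hypothetical ESQL sketch of such a function (the SETTINGS table and its PROP_KEY/PROP_VALUE columns are assumptions, and concurrent access to the shared variable may need an ATOMIC block in your version of IIB):

```esql
-- Cache the base URL in a SHARED variable so the database is only
-- queried on the first message processed by this execution group.
DECLARE cachedBaseUrl SHARED CHARACTER;

CREATE FUNCTION GetCachedOrFromDB(IN propKey CHARACTER) RETURNS CHARACTER
BEGIN
  IF cachedBaseUrl IS NULL THEN
    -- Look the value up in the per-environment settings table.
    SET cachedBaseUrl = THE(SELECT ITEM S.PROP_VALUE
                            FROM Database.SETTINGS AS S
                            WHERE S.PROP_KEY = propKey);
  END IF;
  RETURN cachedBaseUrl;
END;
```

Because only the table contents differ between development, customer test, and production, the flows themselves stay identical, which preserves DRY.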
If I want to use Firebase on the server side, in place of REST routes using Express and Node.js, how would I go about dealing with scaling and load balancing? For example, if I have an Express app that uses Firebase on the server side, will every single server that spins up contain these listeners and react to them? Is there a scalable solution to using Firebase on the server side with elastic load balancing in mind?
I think your question is too broad in its current form, but will give you at least a few (equally broad) options.
There are probably dozens of solutions possible, but most of them will be variations on these two broad scenarios: centralized vs decentralized.
You can use a centralized authority, which assigns each task to one of the worker nodes. This is normally what a load balancer does, so you might want to search for load balancing algorithms.
Alternatively you can have each node simply try to claim the work. The nodes should then use a transaction to update the "work queue", so that only one node ends up doing the work.
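The claim step can be sketched like this. A real deployment would use Firebase's ref.transaction() (which needs credentials to run), so the atomic compare-and-set is simulated here with an in-memory store; the task and node names are made up:

```javascript
// Mirrors the shape of a Firebase transaction update function:
// inspect the current value, abort if someone else already owns the task,
// otherwise write our node id as the owner.
function claimTask(store, taskId, nodeId) {
  const current = store[taskId];
  if (current && current.owner) return false; // another node claimed it first
  store[taskId] = { ...current, owner: nodeId };
  return true;
}

const queue = { "task-1": { payload: "resize-image" } };
const a = claimTask(queue, "task-1", "node-A"); // true: node-A wins the claim
const b = claimTask(queue, "task-1", "node-B"); // false: already owned
console.log(a, b, queue["task-1"].owner); // true false node-A
```

With the real API, the same logic lives inside the transaction callback, and Firebase retries it until the write commits against a consistent snapshot, so exactly one node ends up doing the work.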
Related: https://github.com/FirebaseExtended/firebase-queue
I'm searching for a way to change the way Meteor loads the Mongo database. Right now, I know I can set an environment variable when I launch Meteor (or export it), but I was hoping there was a way to do this in code. This way, I could dynamically connect to different instances based on conditions.
An example test case would be for the code to parse the URL 'testxx.site.com', look up a URL based on the 'testxx' subdomain, and then connect to that particular instance.
I've tried setting the process.env.MONGO_URL in the server code, but when things execute on the client, it's not picking up the new values.
Any help would be greatly appreciated.
Meteor connects to Mongo right when it starts (using this code), so any changes to process.env.MONGO_URL won't affect the database connection.
It sounds like you are trying to run one Meteor server on several domains and have it connect to several databases at the same time depending on the client's request. This might be possible with traditional server-side scripting languages, but it's not possible with Meteor because the server and database are pretty tightly tied together, and the server basically attaches to one main database when it starts up.
The *.meteor.com hosting is doing something similar to this right now, and in the future Meteor's Galaxy commercial product will allow you to do this - all by starting up separate Meteor servers per subdomain.
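Since the connection is made at startup, the practical workaround is one Meteor process per subdomain, each launched with its own MONGO_URL (the hostnames and ports below are made up):

```shell
# One server process per tenant/subdomain, each bound to its own database;
# a reverse proxy would then route testxx.site.com to port 3001, etc.
MONGO_URL="mongodb://db-host/testxx" PORT=3001 meteor run &
MONGO_URL="mongodb://db-host/testyy" PORT=3002 meteor run &
```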
Say we have a website that responds to a host header "kebab-shop.intra.net"
Is it possible to have both SOAP and RESTful in this URL?
That is, both of these are handled within the deployed code.
kebab-shop.intra.net/takeaway.asmx
kebab-shop.intra.net/kebab/get/...
I've been told this can't be done, without much explanation. Like this answer. This could be, I'm a database monkey, but I'd like some thoughts on what options I do or don't have please.
Thoughts so far
Separate host headers, e.g. add kebab-shop-rest.intra.net
Separate web sites
Ideally, I'd like to have one web site, one URL domain name, one host header. Zero chance?
This is IIS 6 with .net 4. And we have some large corporate limitations that mean we are limited to a zip file to drop into the relevant folder used by the web site. This is intended to allow our clients to migrate without incurring the large corporate support, infrastructure and deployment overhead. The co-existence will only be for a month or three.
Edit: I'm asking because I'm not web developer. If my terms are wrong, this is why...
So... I want both SOAP and REST on kebab-shop.intra.net on IIS 6 without complexity.
That is, both of these are handled within the deployed code.
* kebab-shop.intra.net/takeaway.asmx
* kebab-shop.intra.net/kebab/get/...
Yes, that should definitely be possible. If you have a single WCF service, you could easily expose two separate endpoints for the same service - one using e.g. basicHttpBinding (roughly equivalent to ASMX), and another with webHttpBinding (REST).
The complete URLs must be different, but the first part can be the same, I believe.
If you're hosting in IIS6, you need one virtual directory and that will partly dictate your SOAP endpoint - it will have to be something like:
http://kebab-shop.intra.net/YourVirtDir/takeaway.svc
(or: http://kebab-shop.intra.net/YourVirtDir/takeaway.asmx if you insist on using an ASP.NET legacy webservice).
and the REST endpoint can live inside the same virtual directory and define URI templates, e.g. you could have something like:
http://kebab-shop.intra.net/YourVirtDir/TakeKebab/gbn
or similar URLs.
However: checking this out myself I found that you cannot have both service endpoints "live" off the same base address - one of them has to have another "relative address" associated with it.
So either you add e.g. "SOAP" to your SOAP endpoint
http://kebab-shop.intra.net/YourVirtDir/takeaway.svc/SOAP/GetKebab
http://kebab-shop.intra.net/YourVirtDir/TakeKebab/gbn
or you add something to your REST service
http://kebab-shop.intra.net/YourVirtDir/takeaway.svc/GetKebab
http://kebab-shop.intra.net/YourVirtDir/REST/TakeKebab/gbn
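As a sketch, the two endpoints described above might be declared in web.config roughly like this (the service and contract names are made up, and exact attributes can differ by WCF version):

```xml
<system.serviceModel>
  <services>
    <service name="KebabShop.TakeawayService">
      <!-- SOAP endpoint, reachable under .../takeaway.svc/SOAP -->
      <endpoint address="SOAP"
                binding="basicHttpBinding"
                contract="KebabShop.ITakeaway" />
      <!-- REST endpoint at the base address, using URI templates -->
      <endpoint address=""
                binding="webHttpBinding"
                behaviorConfiguration="restBehavior"
                contract="KebabShop.ITakeaway" />
    </service>
  </services>
  <behaviors>
    <endpointBehaviors>
      <behavior name="restBehavior">
        <webHttp />
      </behavior>
    </endpointBehaviors>
  </behaviors>
</system.serviceModel>
```

The relative address "SOAP" on the first endpoint is what keeps the two endpoints from sharing the same base address, as noted above.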
I don't see a reason why you can't. Typically your SOAP endpoints will be one specific URL per service, whereas resources exposed via REST will have one URL per resource (following URL patterns).
Example URLs for SOAP:
http://kebab-shop.intra.net/soap/service1
http://kebab-shop.intra.net/soap/service2
Example URL patterns for REST:
http://kebab-shop.intra.net/rest/{resourcetype}/{id}/
e.g.: http://kebab-shop.intra.net/rest/monkeys/32/
etc...