I'd like to deploy 2 web apps (app1, app2) in two distinct web containers (tomcat1, tomcat2), each running on distinct JVMs and network nodes. App1 is a GWT app. App2 is server-side java code. Both app1 server-side and app2 have the same object model. To send a request from app1 server to app2 for processing I could use HTTP POST, but that would require custom serialization at each end. Is it possible to use GWT RPC instead; i.e. between 2 distinct GWT servers? Is there an alternative way to accomplish this?
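To illustrate what I mean, the HTTP POST approach would look something like the sketch below, using built-in Java serialization (the endpoint URL and the request/response types are placeholders; this only works because both webapps share the same object model):

import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the HTTP POST alternative: app1's server side posts a request
// object to app2 with plain Java serialization. The URL is a placeholder.
public class App2Client {
    public static Serializable call(Serializable request) throws Exception {
        URL url = new URL("http://app2-host:8080/app2/process"); // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-java-serialized-object");
        try (ObjectOutputStream out = new ObjectOutputStream(conn.getOutputStream())) {
            out.writeObject(request); // the shared object model must be Serializable
        }
        try (ObjectInputStream in = new ObjectInputStream(conn.getInputStream())) {
            return (Serializable) in.readObject();
        }
    }
}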
It's possible, and it's what I did.
Remember to allow cross-origin requests in your App2 (the server-side Java code).
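For reference, a minimal sketch of what allowing that could look like on the App2 side, assuming App2 is a standard Java servlet webapp (the filter and the allowed origin are illustrative, not from my actual setup):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Illustrative CORS filter; map it to App2's RPC URL pattern in web.xml.
public class CorsFilter implements Filter {
    public void init(FilterConfig config) throws ServletException {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        // Replace with the real origin serving app1 (placeholder: http://app1-host).
        response.setHeader("Access-Control-Allow-Origin", "http://app1-host");
        response.setHeader("Access-Control-Allow-Methods", "POST, OPTIONS");
        // The X-GWT-* headers are sent with GWT RPC requests.
        response.setHeader("Access-Control-Allow-Headers",
                "Content-Type, X-GWT-Module-Base, X-GWT-Permutation");
        chain.doFilter(req, res);
    }

    public void destroy() {}
}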
We use Nomad to deploy our applications, which provide gRPC endpoints, as tasks. The tasks are then registered with Consul using Nomad's service stanza.
Routing for our applications is achieved with Envoy proxy. We are running central Envoy instances load-balanced at IP 10.1.2.2.
The decision about which endpoint/task to route to is currently based on the Host header, and every task is registered as a service under <$JOB>.our.cloud. This leads to two problems:
When accessing a service, its DNS name must be registered for the load balancer IP, which leads to /etc/hosts entries like
10.1.2.2 serviceA.our.cloud serviceB.our.cloud serviceC.our.cloud
This problem is partially mitigated by using dnsmasq, but it is still a bit annoying when we add new services.
It is not possible to run multiple services that provide the same gRPC service at the same time. If we decide, for example, to test a new implementation of a service, we have to run it in the same job under the same name, and every service defined in the gRPC service file has to be implemented.
A possible solution we have been discussing is to use the tags of the service stanza to add tags which define the provided gRPC services, e.g.:
service {
  tags = ["grpc-my.company.firstpackage/ServiceA", "grpc-my.company.secondpackage/ServiceB"]
}
But this is discouraged by Consul:
Dots are not supported because Consul internally uses them to delimit service tags.
Now we are thinking about doing it with tags like grpc-my-company-firstpackage__ServiceA, ... This looks really ugly, though :-(
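If we did go that route, a small helper would at least keep the escaping mechanical and reversible. An illustrative sketch (the grpc- prefix and the __ separator are just our own convention; it assumes package names contain no hyphens, which holds for valid proto packages):

// Round-trips between a gRPC service name and a Consul-safe tag.
public final class GrpcServiceTags {
    private static final String PREFIX = "grpc-";

    // "my.company.firstpackage/ServiceA" -> "grpc-my-company-firstpackage__ServiceA"
    public static String toTag(String serviceName) {
        return PREFIX + serviceName.replace(".", "-").replace("/", "__");
    }

    // "grpc-my-company-firstpackage__ServiceA" -> "my.company.firstpackage/ServiceA"
    public static String fromTag(String tag) {
        if (!tag.startsWith(PREFIX)) {
            throw new IllegalArgumentException("Not a gRPC service tag: " + tag);
        }
        String body = tag.substring(PREFIX.length());
        int sep = body.lastIndexOf("__");
        if (sep < 0) {
            throw new IllegalArgumentException("Malformed tag: " + tag);
        }
        return body.substring(0, sep).replace("-", ".") + "/" + body.substring(sep + 2);
    }
}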
So my questions are:
Has anyone ever done something like that?
If so, what are recommendations on how to route to gRPC services which are autodiscovered with Consul?
Does anyone have some other ideas or insights into this?
How is this accomplished in, e.g., Istio?
I think this is a fully supported use case for Istio. Istio will help you with service discovery via Consul, and you can use route rules to specify which deployment will provide the service. You can start exploring from https://istio.io/docs/tasks/traffic-management/
We do something similar to this, using our own product, Turbine Labs.
We're on a slightly different stack, but the idea is:
Pull service discovery information into our control plane. (We use Kubernetes but support Consul).
Organize this service discovery information by service and by version. We use the tbn_cluster, stage, and version tags (like here).
Since version for us is the SHA of the release, we don't have formatting issues with it. Also, they don't have to be unique, because the tbn_cluster tag defines the first level of the hierarchy.
Once we have those, we use UI / API to define all the routes (e.g. app.turbinelabs.io/stats -> stats_service). These rules include the tags, so when we deploy a new version (deploy != release), no traffic is routed to it. Releases are done by updating the rules.
(There are even some nice UI affordances for updating those rules for the common case of "release 10% of traffic to the new version," like a slider!)
Hopefully that helps! You might check out LearnEnvoy.io -- lots of tutorials and best practices on what works with Envoy. The articles on Service Discovery Integration and Incremental Blue/Green Releases may be helpful.
Is it possible to add operation contracts to a REST service at runtime?
For instance, we have a service available under the endpoint www.mywebsite.com which already has one operation contract: getName.
But now I want to add, at runtime, two additional operation contracts, like getAddress and getNumber.
The main problem is how to let clients know about the new service contracts.
I think you can add new service endpoints at runtime by creating ServiceHost instances. The services will listen on different addresses, and those endpoints can use different service contracts (interfaces).
I don't think you will construct and implement interfaces dynamically (though you can in .NET); you will probably use some predefined interfaces. In that case you should somehow inform clients about the new service contract and address. To pass and consume such metadata you can:
1) Use duplex methods (service-to-client call direction) of some 'static' maintenance service to bring service metadata to the client in a custom format;
2) Use periodic client calls (polling) to this maintenance service for the same purpose;
3) Use periodic web scanning (by address range) and parse the WSDL to build a client proxy at runtime. You can run "svcutil /sc ..." at runtime to generate code, or use a custom technique (this or this can help);
4) Use an intermediate service to orchestrate both the service and the clients (a complex but powerful approach; this can help);
5) Use WCF Discovery in addition (not an easy way either);
6) Use a combination of these methods.
P.S. You can use one interface but inform the client about which methods the service currently supports (implements). That would be the easiest way.
In order to test how a REST service performs under load, I would like to send production data to two service endpoints: one production service and one test service. I would like the responses from the test service to be ignored, so the client has no idea its request is being sent to two services. I would like something in between the client and the service so that I do not need to make any changes to either, just some sort of additional 'filter' service that the traffic goes through, which passes the request on untouched but also sends a duplicate request to the test service. Someone suggested that I might be able to do this using Apache Camel, but it's a bit overwhelming and I don't want to start learning Camel if my aim is not achievable. Has anyone achieved anything similar, either using Camel or some other method?
This can be done by defining a Jetty or CXF endpoint that acts as a proxy, routing the request to two other HTTP endpoints, i.e. the REST services:
from("jetty:http://somehost:8282/xxx").
to("http://prod:8181/rest/service/xyz").
to("http://test:8182/rest/service/xyz");
The client can fire the load to http://somehost:8282/xxx. The client will not be aware that it is being routed to two services.
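Note that in a plain pipeline like this, the reply returned to the client comes from the last endpoint in the route, i.e. the test service. If the test responses really must be ignored, Camel's wire tap EIP sends the copy fire-and-forget while the production reply goes back to the client; a sketch using the same hypothetical endpoints:

import org.apache.camel.builder.RouteBuilder;

// Sketch: mirror each incoming request to the test endpoint fire-and-forget,
// so the client's response always comes from the production endpoint.
public class MirrorRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jetty:http://somehost:8282/xxx")
            // wireTap sends a copy asynchronously and discards the reply
            .wireTap("http://test:8182/rest/service/xyz")
            .to("http://prod:8181/rest/service/xyz");
    }
}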
Note: the test will not yield a realistic load result if you route the client request to two services via the above route or other proxy/router methodologies, because the proxy/route itself introduces some latency. Instead, I suggest using a load-testing tool to generate the load and firing it directly at the prod and test endpoints separately.
For example, the Apache ab benchmark tool. Take a look at this and this.
My question is: is it possible to connect to two different BlazeDS servers from the same Flex app? I have already read this question:
Can a Flex client app connect to BlazeDS running on a different server?
However, it appears to discuss connecting a Flex client to BlazeDS running on a different server, not necessarily connecting it to a second BlazeDS server in addition to the first.
I have also read this question:
One Flex client connecting to two webapps using BlazeDS - Detected duplicate HTTP-based FlexSessions
In my attempts, I get the error mentioned in the second question above:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly.
Is connecting one Flex application to two BlazeDS-enabled servers completely impossible? We want to have a "common functionality" BlazeDS server that is used by a number of Flex apps, each of which also has its own local BlazeDS server for its own functionality.
Edit:
The way I'm currently doing it:
In my MXML file, I'm defining a ChannelSet like so:
<mx:ChannelSet id="dataService1Channel">
<mx:channels>
<mx:AMFChannel id="dataService1AmfChannel"
channelFault="dataService1Fault(event)"
url="http://localhost:7001/dataservice1/messagebroker/amf"/>
</mx:channels>
</mx:ChannelSet>
And then I'm using this ChannelSet in the following data service (which was auto-configured when I used Flash Builder's "Connect to BlazeDS" function):
<dataservice1:DataService1Service id="dataService1Service"
fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)"
showBusyCursor="true"
channelSet="{dataService1Channel}"/>
The other data service is defined like so:
<dataservice2:DataService2Service id="dataService2Service"
fault="Alert.show(event.fault.faultString + '\n' + event.fault.faultDetail)"
showBusyCursor="true"/>
The calls work and I can get the data, but I'm getting the warning I mentioned, in the form of an alert in the Flex application. If I could suppress that warning, I'd be happy.
Is connecting one Flex application to two BlazeDS-enabled servers completely impossible?
Not at all! I see no reason why you wouldn't be able to do so, assuming the proper crossdomain.xml files are in place.
[Note, from here on out I assume you are using BlazeDS along with RemoteObjects/AMF]
To do this, you would most likely create different endpoints in your services-config file. The default endpoint [at least for the services-config included with ColdFusion] automatically points to the server the SWF is served from. There is no reason you can't create your own endpoints, even different endpoints in the same services-config file. You could also have the endpoints defined at runtime if you felt that was necessary.
I'm not sure why you'd be receiving session-related errors, unless your server-side code somehow requires sessions.
Well, I faced the same problem while connecting to two BlazeDS servers with a SINGLE Flex client (SWF). In fact, as the Flex documentation says:
"Every Flex application, written in MXML or ActionScript, is eventually compiled into a SWF file. When the SWF file connects to the BlazeDS server, a flex.messaging.client.FlexClient object is created to represent that SWF file on the server. SWF files and FlexClient instances have a one-to-one mapping. In this mapping, every FlexClient instance has a unique identifier named id, which the BlazeDS server generates. An ActionScript singleton class, mx.messaging.FlexClient, is also created for the Flex application to access its unique FlexClient id."
For example, say you have two BlazeDS servers, 1) REMOTE and 2) LOCAL, and a single Flex app (SWF), "MyClient".
Step 1. MyClient connects to the REMOTE BlazeDS server, so one unique id is generated.
Step 2. Now MyClient connects to the LOCAL BlazeDS server. The same id generated in Step 1 will be used, as only a single unique id can be generated for a single Flex app (SWF).
Step 3. Now MyClient reconnects to the REMOTE BlazeDS server. Remember that each time a Flex app (SWF) connects to a BlazeDS server, a FlexClient instance is created along with its unique id. At Step 3 we already have the id generated in Step 1, so it will definitely throw a duplicate-session exception.
Solution:
There is a workaround I found and applied in my application. It works.
Every time the Flex app (SWF) switches BlazeDS servers, set the generated id to null:
FlexClient.getInstance().id = null;
In the example quoted above, set the id to null after Step 1. Now, when the client connects to the LOCAL BlazeDS server, it will not use the id generated in Step 1; instead, it will create a new unique id while working with the LOCAL BlazeDS server.
Again, when you switch from LOCAL back to REMOTE (Step 3), set the id to null with the same line of code. Now, when the Flex app (SWF) connects to the REMOTE BlazeDS server, a new unique id is generated and there will be no duplicate-session exception.
Since Flash Builder does not support WCF over HTTPS, I am considering using WebORB remoting as an alternative, but I am not really sure how Flash is going to know the WebORB location if they are sitting on different servers. I looked at the destination and source fields, but could not find a field called url on RemoteObject in Flex. Has anyone done something similar?
I know this is an old question, but I thought I'd answer it anyway. You can expose your WCF services to remoting clients (Flash, Flex) via WebORB. WebORB supports both self-hosted and IIS-hosted WCF services. Here are links to instructions for both models.
Self-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?standalone_wcf_services.htm
IIS-hosted: http://www.themidnightcoders.com/fileadmin/docs/dotnet/v4/guide/index.html?iis_hosted_wcf_services.htm
Both documents address your questions. Here is an example of one approach:
Invoking Self-Hosted Service From Flex/AIR
Flex and AIR clients can use the RemoteObject API to invoke methods on self-hosted WCF services which use the AMF endpoint. There are two approaches for invoking a self-hosted WCF service. The first approach requires less code, but creates a dependency on configuration files declaring destinations and channels (the files located in WEB-INF/flex). The second approach does not have any dependencies on the configuration files, but results in a few additional lines of code. Consider the examples of the API below:
Approach 1 (with dependency on configuration files):
var remoteObject:RemoteObject = new RemoteObject("GenericDestination");
remoteObject.endpoint = "http://localhost:8000/WCFAMFExample/amf";
remoteObject.GetQuote.addEventListener( ResultEvent.RESULT, gotResult );
remoteObject.GetQuote.addEventListener( FaultEvent.FAULT, gotError );
remoteObject.GetQuote( "name" );
The endpoint URL uniquely identifies the WCF service. Notice the /amf at the end of the URL; it is required for the AMF endpoint. With the approach demonstrated above, the destination name in the RemoteObject constructor is required but not actually used. As a result, for the code to work, the Flex/AIR application must be compiled with an additional compiler argument:
-services "C:\Program Files\WebORB for .NET\4.0\web-inf\flex\services-config.xml"
I hope this helps.