I have two local Grunt+Bower projects with typical build and watch/serve tasks:
Client contains the client to be publicly released
AdminClient is an extension to Client intended for internal administration use
AdminClient should re-use Client code and build results. watch/serve must behave transparently for any change in Client or AdminClient.
How can I do this with Grunt+Bower?
It is a basic problem, solved in C# with project dependencies and in Java typically with Maven sub-modules.
You can have the Client configuration in a separate file that you extend in the AdminClient.
// common.js exports the Grunt configuration shared with Client
var common = require("./common.js");
...
grunt.initConfig(common.config);
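For example, common.js could export the shared config and the AdminClient Gruntfile could extend it before passing it to initConfig. A minimal sketch; the file names, watch targets, and paths are illustrative assumptions, and it presumes grunt-contrib-watch is loaded and a build task is registered:
// common.js (hypothetical) -- configuration shared by Client and AdminClient
module.exports.config = {
    watch: {
        client: { files: ["../Client/app/**/*.js"], tasks: ["build"] }
    }
};

// Gruntfile.js in AdminClient -- extend the shared config before initConfig
module.exports = function (grunt) {
    var common = require("./common.js");
    var config = common.config;
    // watch AdminClient sources in addition to the shared Client targets
    config.watch.admin = { files: ["app/**/*.js"], tasks: ["build"] };
    grunt.initConfig(config);
};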
We have a microservice that needs to be integration tested (real calls, but no network communication with anything outside of the test namespace in kubernetes) in our pipeline. It also relies on an external gRPC server which we have no control over.
Here is what we'd like to have happen (originally shown as a diagram): the white box on the left is code that provides the Microservice Boundary with 'external' data. It then keeps calling the Code via REST until it gets back the proper number of records or it times out. The Code pulls records from an internal database, as well as data associated with those records from a gRPC call. Since we do not own the gRPC service, but are doing integration tests, we need a few pre-defined responses to the two gRPC services we call (the blue box).
Since our integration tests are self-contained right now, and we don't want to write an entirely new gRPC server implementation just to mimic calls, is there a way to stand up a real gRPC server and configure it to return canned responses? The request is pretty much like a mock setup, except with an actual server.
We need to be able to:
give the server multiple proto files to interpret and have it expose them as endpoints; the proto files must be able to have different package names
configure the responses to each call using files we can store in source control
run in a Linux Docker container (no Windows)
I did find GripMock, which seemed almost exactly what we need, but it only serves one proto file per container. It supposedly can serve more than one, but I can't get that to work, and their example that serves two files implies each proto file must have the same package name, which will likely never happen in our scenarios. In the meantime we are using it, but if we have 10 gRPC call dependencies, we now have to run 10 GripMock servers.
Wikipedia contains a list of API mocking tools. Looking at that list today, there is a commercial tool that supports gRPC called Traffic Parrot, which allows you to create gRPC mocks based on your proto files. You can give it multiple proto files, store the mocks in Git and run the tool in Docker.
There are also open-source tools like GripMock, but it does not generate stubs based on proto files; you have to create them manually. Also, as of today the project has not been keeping up with proto and gRPC developments, e.g. the package-name issue you discovered yourself above (it works only if the package names in the different proto files are the same). There are a few other open-source tools like grpc-wiremock, grpc-mock or bloomrpc-mock, but they still lack widespread adoption and hence might be risky to adopt for an important enterprise project.
Keep in mind that the generated mock will only be a test double; it will not replicate the full behaviour of the system the proto file corresponds to. If you also want to partially replicate the semantics of the messages, consider recording the gRPC messages to create the mocks; that way you can see the sample data as well.
Take a look at this JS library which hopefully does what you need:
https://github.com/alenon/grpc-mock-server
Usage example:
// class fields locating the proto file, package, and service to mock
private static readonly PROTO_PATH: string = __dirname + "/example.proto";
private static readonly PKG_NAME: string = "com.alenon.example";
private static readonly SERVICE_NAME: string = "ExampleService";
...
// stub implementations, keyed by RPC method name
const implementations = {
    ex1: (call: any, callback: any) => {
        const response: any =
            new this.proto.ExampleResponse.constructor({msg: "the response message"});
        callback(null, response);
    },
};
this.server.addService(PROTO_PATH, PKG_NAME, SERVICE_NAME, implementations);
this.server.start();
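Since addService takes the proto path, package name and service name per call, it should be possible to register several protos with different package names on the same server. A sketch inferred from the example above, not verified against the library's documentation; other.proto, com.other.pkg, OtherService and otherImplementations are hypothetical names:
// registering two protos with different package names on one mock server
this.server.addService(__dirname + "/example.proto", "com.alenon.example", "ExampleService", implementations);
this.server.addService(__dirname + "/other.proto", "com.other.pkg", "OtherService", otherImplementations);
this.server.start();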
I am following Zenva Academy's course on Meteor, but when I define Messages = new Mongo.Collection('messages');
in the client's main.js file and then try to insert from the browser console, it doesn't work and displays that the method is not found.
Try defining the collection in a file that is accessible to both server and client, e.g. the api folder if you are using Meteor's recommended file structure.
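A minimal sketch (the exact file path is an assumption; any file that loads on both client and server works):
// imports/api/messages.js -- loaded on both client and server, so the
// server registers the insert/update/remove methods for the collection
Messages = new Mongo.Collection('messages');
With the collection also defined on the server, the client's Messages.insert(...) has a matching server method and the "method not found" error goes away (client-side inserts still require the insecure package or allow/deny rules).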
I'm refactoring an application built when Meteor was at 0.5.x.
I need to scale the application, so I will now have different applications able to run on different cores. One of them will be dedicated to the web application, but the others are server only. For those cases I don't want Meteor to serve anything; it must not be an HTTP server.
I tried to configure the package list differently (file .meteor/packages):
# standard package of meteor-platform in server app only
application-configuration
autoupdate
base64
binary-heap
callback-hook
check
ddp
deps
ejson
follower-livedata
geojson-utils
id-map
json
logging
meteor
mongo
observe-sequence
ordered-dict
random
retry
routepolicy
# standard package of meteor-platform in client app
#blaze
#blaze-tools
#boilerplate-generator
#html-tools
#htmljs
#jquery
#minifiers
#minimongo
#reactive-var
#spacebars
#spacebars-compiler
#templating
#tracker
#ui
#webapp
#webapp-hashing
# specific app package
But when I run meteor, it tells me that the server is listening, so it doesn't work.
I also tried to remove the browser platform:
meteor remove-platform browser
but it tells me that it cannot remove the platform in this version of Meteor.
Where am I wrong? Is the list of packages not the right one for a server-only application?
Not possible at the moment; "maybe in a future version", as someone from MDG says.
Meteor relies on the DDP package to listen for incoming requests, and DDP listens on websockets, which is basically HTTP.
Therefore it has to listen for something on some port. If it did not listen, you could not tell the app to do anything or ask it for anything, so what use would it be?
But if you don't want your app to interfere with your other apps in terms of the ports it binds to, give it a custom port when you start it:
$ meteor run --port 12345
I have a flex application that communicates via BlazeDS with two webapps running inside a single instance of Tomcat.
The Flex client is loaded by the browser from the first webapp and all is well. However, on the initial call to the second webapp the client receives the following error:
Detected duplicate HTTP-based FlexSessions, generally due to the remote host disabling session cookies. Session cookies must be enabled to manage the client connection correctly.
Subsequent calls to the same service method succeed.
I've seen a few posts referring to the same error in the context of two Flex apps calling a single webapp from the same browser page, but nothing that seems to help my situation, so I'd be very grateful if anyone could help out.
Cheers, Mark
Three potential solutions for you:
I found once that if I hit a remote object before setting up a messaging channel, the ClientID would get screwed up. Try to establish an initial messaging channel once the application loads, before any remote object calls are made.
Flash Builder's network monitoring tool can cause some problems with BlazeDS. I set up a configuration option on application load that checks whether I'm in the dev environment (it is called just before setting up my channel from #1). If I'm in dev, I assign a UID manually. For some reason this doesn't take well outside the dev environment... it's been a while since I set it all up, so I can't remember the finer points as to why:
// when the app is not flagged as dev, assign the FlexClient id manually
if (!(AppSettingsModel.getInstance().dev))
    FlexClient.getInstance().id = UIDUtil.createUID();
BlazeDS by default only allows a single HTTP session to be set up per client/browser. In my streaming channel definitions I added the following to allow additional sessions per browser:
<channel-definition id="my-secure-amf-stream" class="mx.messaging.channels.SecureStreamingAMFChannel">
    <endpoint url="https://{server.name}:{server.port}/FlexClient/messagebroker/securestreamingamf"
              class="flex.messaging.endpoints.SecureStreamingAMFEndpoint"/>
    <properties>
        <add-no-cache-headers>false</add-no-cache-headers>
        <idle-timeout-minutes>0</idle-timeout-minutes>
        <max-streaming-clients>10</max-streaming-clients>
        <server-to-client-heartbeat-millis>5000</server-to-client-heartbeat-millis>
        <user-agent-settings>
            <user-agent match-on="MSIE" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
            <user-agent match-on="Firefox" kickstart-bytes="2048" max-streaming-connections-per-session="3" />
        </user-agent-settings>
    </properties>
</channel-definition>
Problem: duplicate session errors when the flex.war and LiveCycle .lca files are hosted in separate JVMs on a WebSphere server.
Solution:
Inside the command file for the event, set the FlexClient id to null in the execute method before calling the remote service (a Java method or LC process).
I guess this approach can be used in other scenarios as well to prevent duplicate session errors.
EventCommand.as:
import mx.messaging.FlexClient;
// other imports as per your code

public function execute(event:CairngormEvent):void
{
    var evt:EventName = event as EventName;
    var delegate:Delegate = new DelegateImpl(this as IResponder);
    // reset the client id so a fresh FlexSession is negotiated on the next call
    FlexClient.getInstance().id = null;
    delegate.functionName(evt.data);
}
How do you use this great new API in connection with Java? Do you use just the pure NativeProcess API, i.e. nativeProcess.standardInput.write() and nativeProcess.standardOutput.read(), with which you can neither debug the Java side nor invoke remote Java methods? Do you use a library that leverages remote method invocation, such as the Flerry lib, which also cannot debug the Java side? Or maybe you use Merapi, with which you can debug but cannot remotely invoke Java methods? I'm asking because this is maybe the most important question regarding this API and its ease of use.
It sounds like your reservations have to do with being able to debug the Java process. This is not really an issue: you can use the NativeProcess API to kick off a Java process with arguments that make it externally debuggable. For example:
// JVM flags that enable remote debugging via JDWP on port 8787
var processArgs:Vector.<String> = new Vector.<String>();
processArgs.push("-Xdebug");
processArgs.push("-Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n");
This makes your Java process remotely debuggable. You can then connect to it from Eclipse or NetBeans once the process has started. If the code in the Java process is linked to an active Eclipse/NetBeans project, you can do line-wise debugging as you would with any other Java application.
-Raj
You can use NativeProcess to execute java.exe and pass it the right parameters to execute a java application.
You cannot use NativeProcess to run random java code from a jar file.
Having used both of them: you can debug the JVM with either Merapi or the NativeProcess API.
Prior to AIR 2.0, I used Merapi to communicate over the network to a Java process. I would much prefer to use the NativeProcess launcher now; with Merapi we were hacking ugly marshalling code, and debugging the network payloads was a pain.
Using the NativeProcess API is easy:
var myForkedExe:NativeProcessStartupInfo = new NativeProcessStartupInfo();
// the executable must be a File reference, e.g. the java binary
myForkedExe.executable = new File("/usr/bin/java");
...
I am not sure I understand what you mean by not being able to invoke remote Java methods with Merapi. That's exactly what I have been doing. Debugging is easy: just set the JPDA args and attach any Java debugger.
You could use Flerry to launch and communicate with Java processes.
You can use var file:File = new File("/usr/bin/java"); and pass parameters to the java executable with a Vector of arguments, e.g.:
// arguments passed to the java executable
var arguments:Vector.<String> = new Vector.<String>();
arguments.push("-jar");