I'm putting an object into the session, and then in a later step in the scenario I need to use properties of that object in an HTTP request.
The Gatling Expression Language does not support accessing properties of an object, so I thought I could extract the object from the session manually and then read the properties I need when building the HTTP request, using the following code:
exec(session => {
  val project = session("item").as[Project]
  println(s"name = ${project.getName}, daysToComplete = ${project.getDaysToComplete}")
  http("Health Check")
    .get(s"/health")
    .queryParam("name", s"${project.getName}")
  session
})
But structured this way, the HTTP request is never added to the chain and so never executes.
Is there any way to do this, short of putting the individual properties into the session? This is a simplified example; the object I'm putting into the session is much more complicated than this.
Already answered on Gatling's official mailing list.
This cannot work, please read the documentation: https://gatling.io/docs/gatling/reference/current/general/scenario/#exec
Gatling DSL components are immutable ActionBuilder(s) that have to be chained together and are only built once on startup. The result is a workflow chain of Action(s). These builders don't do anything by themselves and don't trigger any side effect; they are just definitions. As a result, creating such DSL components at runtime in functions is completely meaningless. If you want conditional paths in your execution flow, use the proper DSL components (doIf, randomSwitch, etc.):
exec { session =>
  if (someSessionBasedCondition(session)) {
    // this just creates a builder that is immediately discarded, hence doesn't do anything
    // you should be using a doIf here
    http("Get Homepage").get("http://github.com/gatling/gatling")
  }
  session
}
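With the proper DSL component, that sketch might instead look something like this (a minimal sketch, assuming someSessionBasedCondition is the same hypothetical predicate as above):

doIf(session => someSessionBasedCondition(session)) {
  exec(http("Get Homepage").get("http://github.com/gatling/gatling"))
}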
You should do something like:
foreach(components, "component") {
  exec(
    http { session =>
      val component = session("component").as[ITestComponent]
      s"Upload Component ${component.getId}"
    }.post { session =>
      val component = session("component").as[ITestComponent]
      s"/component/$repoId/$assetId/${component.getId}/${component.getResourceVersionId}"
    }.bodyPart(
      RawFileBodyPart(
        "resource",
        session => {
          val component = session("component").as[ITestComponent]
          component.getContent.getAbsolutePath()
        }
      ).contentType { session =>
        val component = session("component").as[ITestComponent]
        component.getMediaType()
      }.fileName { session =>
        val component = session("component").as[ITestComponent]
        component.getContent.getName()
      }
    ).asMultipartForm
  )
}
Yes, this is pretty complicated. The reason it looks so bloated is that you're trying to use a Java POJO (hidden behind an interface) instead of a Scala case class.
If you were to use a Scala case class, you could use the Gatling Expression Language (it doesn't support accessing POJOs by reflection at the moment) and do something like this:
foreach(components, "component") {
  exec(
    http("Upload Component ${component.id}")
      .post(s"/component/$repoId/$assetId/$${component.id}/$${component.resourceVersionId}")
      .bodyPart(
        RawFileBodyPart("resource", "${component.content.absolutePath}")
          .contentType("${component.content.mediaType}")
          .fileName("${component.content.name}")
      ).asMultipartForm
  )
}
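For reference, a hedged sketch of the case classes this Expression Language example assumes (the field names simply mirror the attributes referenced above):

// hypothetical case classes backing the "component" session entry
case class Content(absolutePath: String, mediaType: String, name: String)
case class Component(id: String, resourceVersionId: String, content: Content)

With a Component instance stored under the "component" key, EL strings such as "${component.content.absolutePath}" resolve against its fields.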
I am writing unit tests for some async sections of my code (returning Futures) that also involves the need to mock a Scala object.
Following these docs, I can successfully mock the object's functions. My question stems from the fact that withObjectMocked[FooObject.type] returns Unit, whereas async tests in ScalaTest require either an Assertion or a Future[Assertion] to be returned. To get around this, I'm creating vars in my tests that I reassign within the function passed to withObjectMocked[FooObject.type], which ends up looking something like this:
class SomeTest extends AsyncWordSpec with Matchers with AsyncMockitoSugar with ResetMocksAfterEachAsyncTest {
  "wish i didn't need a temp var" in {
    var ret: Future[Assertion] = Future.failed(new Exception("this should be something")) // <-- note the need to create the temp var
    withObjectMocked[SomeObject.type] {
      when(SomeObject.someFunction(any)) thenReturn Left(Error("not found"))
      val mockDependency = mock[SomeDependency]
      val testClass = ClassBeingTested(mockDependency)
      ret = testClass.giveMeAFuture("test_id") map { r =>
        r should equal(Error("not found"))
      } // <-- set the real Future[Assertion] value here
    }
    ret // <-- finally, explicitly return the Future
  }
}
My question then is: is there a better/cleaner/more idiomatic way to write async tests that mock objects, without needing to jump through this hoop? For some reason I figured using AsyncMockitoSugar instead of MockitoSugar would have solved this for me, but withObjectMocked still returns Unit. Is this perhaps a bug and/or a candidate for a feature request (an async version of withObjectMocked that returns the value of the function block rather than Unit)? Or am I missing how to accomplish this sort of task?
You should refrain from using withObjectMocked in a multi-threaded environment, as it doesn't play well with it.
This is because the object code is stored as a singleton instance, so it's effectively global.
When you use withObjectMocked you're effectively (and forcefully) overriding this singleton; the code takes care of restoring the original, hence the syntax of using it as a "resource".
Because this singleton is global/shared, if you have multi-threaded tests you'll end up with random behaviour. This is the main reason why no async API is provided.
In any case, this is a last-resort tool. Every time you find yourself using it, stop and ask whether there is something wrong with your code first; there are quite a few patterns to help you out here (like injecting the dependency), so you should rarely have to do this.
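As a hedged illustration of the dependency-injection alternative (SomeService, ClassBeingTested and Error are stand-ins based on the names in the question, not a real API):

import scala.concurrent.{ExecutionContext, Future}

case class Error(message: String)

// Depend on a trait passed in through the constructor instead of a global object,
// so tests can substitute a plain mock without touching any shared singleton.
trait SomeService {
  def someFunction(id: String): Either[Error, String]
}

class ClassBeingTested(service: SomeService)(implicit ec: ExecutionContext) {
  def giveMeAFuture(id: String): Future[Either[Error, String]] =
    Future(service.someFunction(id))
}

The test can then create mock[SomeService], stub someFunction, and map over the returned Future to produce the Future[Assertion] directly, with no withObjectMocked and no temp var.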
I have a gRPC API where, following a refactor, a few packages were renamed. This includes the package declaration in one of our proto files that defines the API. Something like this:
package foo;

service BazApi {
  rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
which was changed to
package bar;

service BazApi {
  rpc FooEventStream(stream Ack) returns (stream FooEvent);
}
The server side is implemented using grpc-java, with Scala and Monix on top.
This all works fine for clients that use the new proto files, but for old clients that were built on top of the old proto files, this causes problems: UNIMPLEMENTED: Method not found: foo.BazApi/FooEventStream.
The actual data format of the messages passed over the gRPC API has not changed, only the package.
Since we need to keep backwards compatibility, I've been looking into a way to make the old clients work while keeping the name change.
I was hoping to make this work with a generic ServerInterceptor which would be able to inspect an incoming call, see that it's from an old client (we have the client version in the headers) and redirect/forward it to the renamed service. (Since it's just the package name that changed, this is easy to figure out e.g. foo.BazApi/FooEventStream -> bar.BazApi/FooEventStream)
However, there doesn't seem to be an elegant way to do this. I think it's possible by starting a new ClientCall to the correct endpoint, and then handling the ServerCall within the interceptor by delegating to the ClientCall, but that will require a bunch of plumbing code to properly handle unary/clientStreaming/serverStreaming/bidiStreaming calls.
Is there a better way to do this?
If you can easily change the server, you can have it support both names simultaneously. You can consider a solution where you register your service twice, with two different descriptors.
Every service has a bindService() method that returns a ServerServiceDefinition. You can pass the definition to the server via the normal serverBuilder.addService().
So you could get the normal ServerServiceDefinition and then rewrite it to the new name and then register the new name.
BazApiImpl service = new BazApiImpl();
serverBuilder.addService(service); // registers "bar.BazApi"
ServerServiceDefinition barDef = service.bindService();
ServerServiceDefinition.Builder fooDefBuilder = ServerServiceDefinition.builder("foo.BazApi");
for (ServerMethodDefinition<?,?> barMethodDef : barDef.getMethods()) {
  MethodDescriptor desc = barMethodDef.getMethodDescriptor();
  // rename bar.BazApi/FooEventStream -> foo.BazApi/FooEventStream
  String newName = desc.getFullMethodName().replace("bar.BazApi/", "foo.BazApi/");
  desc = desc.toBuilder().setFullMethodName(newName).build();
  fooDefBuilder.addMethod(desc, barMethodDef.getServerCallHandler());
}
serverBuilder.addService(fooDefBuilder.build()); // registers "foo.BazApi"
Using the lower-level "channel" API you can make a proxy without too much work. You mainly just proxy events from a ServerCall.Listener to a ClientCall and the ClientCall.Listener to a ServerCall. You get to learn about the lower-level MethodDescriptor and the rarely-used HandlerRegistry. There's also some complexity to handle flow control (isReady() and request()).
I made an example a while back, but never spent the time to merge it to grpc-java itself. It is currently available on my random branch. You should be able to get it working just by changing localhost:8980 and by re-writing the MethodDescriptor passed to channel.newCall(...). Something akin to:
MethodDescriptor<ReqT, RespT> desc = serverCall.getMethodDescriptor();
if (desc.getFullMethodName().startsWith("foo.BazApi/")) {
  // old client: rewrite to the new package name before proxying
  String newName = desc.getFullMethodName().replace("foo.BazApi/", "bar.BazApi/");
  desc = desc.toBuilder().setFullMethodName(newName).build();
}
ClientCall<ReqT, RespT> clientCall = channel.newCall(desc, CallOptions.DEFAULT);
I have a problem that I can solve reasonably easily with classic imperative programming using state: I'm writing a co-browsing app that shares URLs between several nodes. The program has a module for communication that I call link and one for browser handling that I call browser. Now when a URL arrives in link, I use the browser module to tell the actual web browser to start loading the URL.
The actual browser will then trigger its navigation detection, reporting that the incoming URL has started to load, and hence the URL will immediately be presented as a candidate for sending to the other side. That must be avoided, since it would create an infinite loop of link-following to the same URL, along the lines of the following (very conceptualized) pseudo-code (it's JavaScript, but please consider that a somewhat irrelevant implementation detail):
actualWebBrowser.urlListen.gotURL(function(url) {
  // Browser delivered a URL
  browser.process(url);
});

link.receivedAnURL(function(url) {
  actualWebBrowser.loadURL(url); // will eventually trigger above listener
});
What I did first was to store every incoming URL in browser and simply eat the URL when it immediately comes back from the browser, then remove it from a 'recents' list in browser after a timeout, along the lines of this:
browser.recents = {}; // <--- mutable state
browser.recentsExpiry = 40000;

browser.doSend = function(url) {
  var now = (new Date).getTime();
  link.sendURL(url); // <-- URL goes out on the network
  // Side effect, mutating module state, clumsy clean-up mechanism :(
  browser.recents[url] = now;
  setTimeout(function() { delete browser.recents[url]; }, browser.recentsExpiry);
  return true;
};

browser.process = function(url) {
  if (/* sanity checks on `url` */ true) {
    var now = (new Date).getTime();
    var duplicate = browser.recents[url]; // timestamp of when we last sent this URL
    if (!duplicate) return browser.doSend(url);
    if ((now - duplicate) > browser.recentsExpiry) {
      return browser.doSend(url);
    }
    return false;
  }
};
It works, but I'm a bit disappointed with my solution because of the habitual use of mutable state in browser. Is there a "Better Way (tm)" using immutable data structures/functional programming or the like for a situation like this?
A more functional approach to handling long-lived state is to pass it as a parameter to a recursive function, where one execution of the function is responsible for handling a single "action" of some kind and then calls itself again with the new state.
F#'s MailboxProcessor is one example of this kind of approach. However, it does depend on the processing happening on an independent thread, which isn't the same as the event-driven style of your code.
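For illustration, a minimal Scala sketch of that shape; sendUrl and nextUrl are hypothetical stand-ins for link.sendURL and the incoming-URL event source:

def sendUrl(url: String): Unit = println(s"sending $url") // stand-in for link.sendURL

@annotation.tailrec
def loop(recents: Map[String, Long], nextUrl: () => (String, Long)): Unit = {
  val (url, now) = nextUrl() // handle a single "action"
  val live = recents.filter { case (_, t) => now - t <= 40000 } // prune expired entries
  val updated =
    if (live.contains(url)) live // duplicate within the window: swallow it
    else { sendUrl(url); live + (url -> now) } // new URL: send it and remember it
  loop(updated, nextUrl) // recurse with the new state
}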
As you identify, the setTimeout in your code complicates the state management. One way you could simplify this is to instead have browser.process filter out any timed-out URLs before it does anything else. That would also eliminate the need for the extra timeout check on the specific URL it is processing.
Even if you can't eliminate mutable state from your code entirely, you should think carefully about the scope and lifetime of that state.
For example, might you want multiple independent browsers? If so, you should think about how the recents set can be encapsulated to belong to a single browser, so that you don't get collisions. Even if you don't need multiple browsers for your actual application, this might help testability.
There are various ways you might keep the state private to a specific browser, depending in part on what features the language offers. For example, in a language with objects, a natural way would be to make it a private member of a browser object.
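A hedged Scala sketch of that encapsulation, combining it with the prune-first simplification above (send is a stand-in for link.sendURL):

// Each Browser instance owns its own recents map, so independent browsers can't collide.
class Browser(send: String => Unit, expiryMs: Long = 40000) {
  private var recents = Map.empty[String, Long]

  def process(url: String, now: Long = System.currentTimeMillis()): Boolean = {
    recents = recents.filter { case (_, t) => now - t <= expiryMs } // prune timed-out URLs first
    if (recents.contains(url)) false // duplicate within the window: swallow it
    else { send(url); recents += (url -> now); true }
  }
}

The mutable map is still there, but its scope is now a single instance rather than the whole module.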
I am writing a DSL for expressing flow (original, I know) in Groovy. I would like to provide the user the ability to write functions that are stored and evaluated at certain points in the flow. Something like:
states {
    "checkedState" {
        onEnter { state ->
            // do some groovy things with state object
        }
    }
}
Now, I am pretty sure I could surround the closure in quotes and store that. But I would like to keep syntax highlighting and content assist if possible when editing these DSLs. I realize that the closure COULD reference artifacts from the surrounding flow definition which would no longer be valid when executing the closure in a different context, and I am fine with this. In reality I would like to use the closure syntax for a non-closure function definition.
tl;dr; I need to get the closure's code while evaluating the DSL so that it can be stored in the database and executed by a script host later.
I don't think there is a way to get a closure's source code, as this information is discarded during compilation. Perhaps you could try writing an AST transformation that would make the closure's syntax tree available at runtime.
If all you care about is storing the closure in the database, and you don't need later access to the source code, you can try serializing it and storing the serialized form.
Closure implements Serializable, and after nulling its owner, thisObject and delegate attributes I was able to serialize it, but I get a ClassNotFoundException on deserialization:
def myClosure = { a, b -> a + b }

// null the references that would otherwise drag the whole owner graph into the stream
Closure.metaClass.setAttribute(myClosure, "owner", null)
Closure.metaClass.setAttribute(myClosure, "thisObject", null)
myClosure.delegate = null

def byteOS = new ByteArrayOutputStream()
new ObjectOutputStream(byteOS).writeObject(myClosure)
def serializedClosure = byteOS.toByteArray()

def input = new ObjectInputStream(new ByteArrayInputStream(serializedClosure))
def deserializedClosure = input.readObject() // throws CNFE
After some searching, I found Groovy Remote Control, a library created specifically to enable serializing closures and executing them later, possibly on a remote machine. Give it a try, maybe that's what you need.
I've been utilizing the command pattern in my Flex projects, with asynchronous callback routes required between:
whoever instantiated a given command object and the command object,
the command object and the "data access" object (i.e. someone who handles the remote procedure calls over the network to the servers) that the command object calls.
Each of these two callback routes has to be a one-to-one relationship. This is because I might have several instances of a given command class running the exact same job at the same time, but with slightly different parameters, and I don't want their callbacks getting mixed up. Using events, the default way of handling asynchronicity in AS3, is thus pretty much out, since they're inherently based on one-to-many relationships.
Currently I have done this using callback function references with specific kinds of signatures, but I was wondering if someone knew of a better (or an alternative) way?
Here's an example to illustrate my current method:
I might have a view object that spawns a DeleteObjectCommand instance due to some user action, passing references to two of its own private member functions (one for success, one for failure: let's say "deleteObjectSuccessHandler()" and "deleteObjectFailureHandler()" in this example) as callback function references to the command class's constructor.
Then the command object would repeat this pattern with its connection to the "data access" object.
When the RPC over the network has successfully been completed (or has failed), the appropriate callback functions are called, first by the "data access" object and then the command object, so that finally the view object that instantiated the operation in the first place gets notified by having its deleteObjectSuccessHandler() or deleteObjectFailureHandler() called.
I'll try one more idea:
Have your Data Access Objects return their own AsyncTokens (or some other objects that encapsulate a pending call), instead of the AsyncToken that comes from the RPC call. So, in the DAO it would look something like this (this is very sketchy code):
public function deleteThing( id : String ) : DeferredResponse {
  var deferredResponse : DeferredResponse = new DeferredResponse();
  var asyncToken : AsyncToken = theRemoteObject.deleteThing(id);
  var result : Function = function( o : Object ) : void {
    deferredResponse.notifyResultListeners(o);
  };
  var fault : Function = function( o : Object ) : void {
    deferredResponse.notifyFaultListeners(o);
  };
  asyncToken.addResponder(new ClosureResponder(result, fault));
  return deferredResponse;
}
The DeferredResponse and ClosureResponder classes don't exist, of course. Instead of inventing your own you could use AsyncToken instead of DeferredResponse, but the public version of AsyncToken doesn't seem to have any way of triggering the responders, so you would probably have to subclass it anyway. ClosureResponder is just an implementation of IResponder that can call a function on success or failure.
Anyway, the way the code above does its business is that it calls an RPC service, creates an object encapsulating the pending call, and returns that object. When the RPC returns, one of the closures (result or fault) gets called, and since they still have references to the scope as it was when the RPC call was made, they can trigger the methods on the pending call/deferred response.
In the command it would look something like this:
public function execute( ) : void {
  var deferredResponse : DeferredResponse = dao.deleteThing("3");
  deferredResponse.addEventListener(ResultEvent.RESULT, onResult);
  deferredResponse.addEventListener(FaultEvent.FAULT, onFault);
}
or, you could repeat the pattern, having the execute method return a deferred response of its own that would get triggered when the deferred response that the command gets from the DAO is triggered.
But. I don't think this is particularly pretty. You could probably do something nicer, less complex and less entangled by using one of the many application frameworks that exist to solve more or less exactly this kind of problem. My suggestion would be Mate.
Many of the Flex RPC classes, like RemoteObject, HTTPService, etc. return AsyncTokens when you call them. It sounds like this is what you're after. Basically the AsyncToken encapsulates the pending call, making it possible to register callbacks (in the form of IResponder instances) to a specific call.
In the case of HTTPService, when you call send() an AsyncToken is returned, and you can use this object to track the specific call, unlike ResultEvent.RESULT, which gets triggered regardless of which call it is (and responses can easily arrive in a different order than the calls were sent).
AbstractCollection is the best way to deal with persistent objects in Flex/AIR, and GenericDAO provides the answer.
A DAO is an object that performs CRUD operations and other common operations on a ValueObject (known as a POJO in Java). GenericDAO is a reusable DAO class that can be used generically.
Goal:
In the Java IBM GenericDAO, adding a new DAO takes just a few steps:
Add a ValueObject (POJO).
Add an hbm.xml mapping file for the ValueObject.
Add the 10-line Spring configuration file for the DAO.
Similarly, in the AS3 project SwizDAO, we want to attain a similar feat.
Client-side GenericDAO model:
Since we are working in a client-side language, we should also manage a persistent object collection for every ValueObject.
Usage and source:
http://github.com/nsdevaraj/SwizDAO