Flex/Flash: capture 'trace' in code?

In Flash/Flex, is it possible to capture the result of 'trace' in code?
So, for example, if one part of the code calls trace("foo"), I'd like to automatically capture the string "foo" and pass it to some other function.
Edit: I'm not interested in trying to use trace instead of a proper logging framework… I want to write a plugin for FlexUnit, so when a test fails it can say something like: "Test blah failed. Here is the output: ... traced text ...".
Edit 2: I only want to capture the results of trace. Or, in other words, even though my code uses a proper logging framework, I want to handle gracefully code that's still using trace for logging.

As far as I know it's impossible to capture it externally; Google brings up no results. Have you considered creating a variable for the output and then adding that to the log, e.g.:
var outputtext:String = "text";
trace(outputtext);
// log outputtext here
Disregard if it isn't feasible, but I can't think of any other way.
However, you can do it internally if it's just for development purposes: http://broadcast.artificialcolors.com/index.php?c=1&more=1&pb=1&tb=1&title=logging_flash_trace_output_to_a_text_fil

If you want to write traces to a log, you can just use the Debug version of Flash Player and tell it to log traces.
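For reference, the debug player picks this up from an mm.cfg file in your home directory. With something like the following, trace() output gets written to flashlog.txt in the Flash Player logs directory:
ErrorReportingEnable=1
TraceOutputFileEnable=1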

I have a Debug.write method that sends the passed messages over a LocalConnection, and I use it instead of trace. My requirement is to be able to capture the debug statements even when the SWF is running outside the authoring environment, but you can use the same approach to capture trace messages.
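For illustration, a rough sketch of that approach (the Debug class, the "_debugConsole" channel name, and the receiver's logArea are made-up names; package/import boilerplate is trimmed):
Sender, used in place of trace:
import flash.net.LocalConnection;

public class Debug {
    private static var conn:LocalConnection = new LocalConnection();
    public static function write(msg:String):void {
        // fire-and-forget: send the message to whatever is listening on the channel
        conn.send("_debugConsole", "write", msg);
    }
}
Receiver, e.g. a small logger SWF or AIR app:
var receiver:LocalConnection = new LocalConnection();
receiver.client = { write: function(msg:String):void { logArea.appendText(msg + "\n"); } };
receiver.connect("_debugConsole");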

As far as I understood, you don't want to use logging, which would of course be the right way to do it.
So you can simply create a static class with a trace method and call that method from anywhere in the application. That way all traces go through one place, and you can do whatever you want with the trace string before printing it to the console.
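A minimal sketch of that idea (Tracer and its log() method are made-up names):
public class Tracer {
    public static var messages:Array = [];
    public static function log(msg:String):void {
        messages.push(msg);   // keep the string around, e.g. for a FlexUnit failure report
        trace(msg);           // still forward it to the normal trace output
    }
}
Then everywhere in the application you call Tracer.log("foo") instead of trace("foo").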
Another way is to create a bubbling trace event and dispatch it whenever you want to trace a message, then add a listener for it on the stage and catch all the events.
Hope it helps.

I would suggest looking through the source for the Swiz framework. They use the internal Flex logging framework (ILogger) app-wide and follow best practices in a good majority of their code.


Chaining Handlers with MediatR

We are using MediatR to implement a "Pipeline" for our dotnet core WebAPI backend, trying to follow the CQRS principle.
I can't decide if I should try to implement an IPipelineBehavior chain, or if it is better to construct a new Request and call MediatR.Send from within my Handler method (for the request).
The scenario is essentially this:
User requests an action to be executed, i.e. Delete something
We have to check if that something is being used by someone else
We have to mark that something as deleted in the database
We have to actually delete the files from the file system.
Option 1 is what we have now: A DeleteRequest which is handled by one class, wherein the Handler checks if it is being used, marks it as deleted, and then sends a new TaskStartRequest with the parameters to Delete.
Option 2 is what I'm considering: A DeleteRequest which implements the marker interfaces IRequireCheck, IStartTask, with a pipeline which runs:
IPipelineBehavior<IRequireCheck> first to check if the something is being used,
IPipelineBehavior<DeleteRequest> to mark the something as deleted in database and
IPipelineBehavior<IStartTask> to start the Task.
I haven't fully figured out what Option 2 would look like, but this is the general idea.
I guess I'm mainly wondering if it is a code smell to call MediatR.Send(TRequest2) within a Handler for a TRequest1.
If those are the options you're set on going with, I say Option 2. Sending requests from inside existing MediatR handlers can be seen as a code smell: you're hiding side effects and breaking the Single Responsibility Principle. You're also coupling your requests together, and you should try to avoid situations where you can't send one type of request before another.
However, I think there might be an alternative. If a delete request can't happen without the validation and marking beforehand, you may be able to leverage a preprocessor (example here) for your TaskStartRequest. That way you can have a single request that does everything you need. This even mirrors your pipeline example by simply leveraging the existing MediatR patterns.
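For illustration, a rough sketch of such a preprocessor, assuming MediatR's IRequestPreProcessor hook (the IUsageChecker dependency and its IsInUseAsync method are made-up placeholders):
using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR.Pipeline;

public class CheckNotInUsePreProcessor : IRequestPreProcessor<DeleteRequest>
{
    private readonly IUsageChecker _usageChecker;   // hypothetical service that knows what is in use

    public CheckNotInUsePreProcessor(IUsageChecker usageChecker)
    {
        _usageChecker = usageChecker;
    }

    public async Task Process(DeleteRequest request, CancellationToken cancellationToken)
    {
        // runs before the DeleteRequest handler; fail fast if the item is still in use
        if (await _usageChecker.IsInUseAsync(request.Id, cancellationToken))
        {
            throw new InvalidOperationException($"Item {request.Id} is still in use.");
        }
    }
}
MediatR's built-in RequestPreProcessorBehavior runs any registered pre-processors before the handler, so the handler itself stays focused on marking and deleting.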
Is there any need to break the tasks into multiple Handlers? Maybe I am missing the point of MediatR. Wouldn't this suffice?
public async Task<Result<IFailure, ISuccess>> Handle(DeleteRequest request)
{
    var thing = await this.repo.GetById(request.Id);
    if (thing.IsBeingUsed())
    {
        return Failure.BeingUsed();
    }
    var deleted = await this.repo.Delete(request.Id);
    return deleted ? new Success(request.Id) : Failure.DbError();
}

Unused gRPC ServerContext

I am new to gRPC and trying to use it in my existing system. However, I get this unused parameter error while compiling it.
server_grpc.cc:100:39: error: unused parameter ‘context’[-Werror=unused-parameter]
Status MyFunc(ServerContext* context, const QueryRequest* request,
Probably the context parameter is used in some other cases. But in a simple hello-world type of example it is not used. Is there a way to compile the protocol buffer without generating the ServerContext parameter?
I know I can make the compiler ignore warning messages. But I'm just wondering if it can be done without affecting the way my system is being compiled right now.
I would also like to know how the context is used. It would be great if anybody could give pointers on how to use it; I might find a use for it in my work.
The ServerContext is provided to, well, add context for every RPC you get. It allows you to tweak certain aspects of the RPC, such as dealing with authentication or adding metadata to your response back to the client. You may or may not need that parameter, obviously, depending on your needs.
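For example, here is a rough sketch of the kind of thing the context lets you do in the hello-world service (the metadata key/value are made up, and the usual example includes/usings are assumed):
Status SayHello(ServerContext* context, const HelloRequest* request,
                HelloReply* reply) override {
  // inspect metadata the client attached to the call
  for (const auto& entry : context->client_metadata()) {
    std::string key(entry.first.data(), entry.first.length());
    std::string value(entry.second.data(), entry.second.length());
    std::cout << key << ": " << value << std::endl;
  }
  // attach metadata to the response headers, check for cancellation, etc.
  context->AddInitialMetadata("server-version", "1.0");
  if (context->IsCancelled()) {
    return Status(grpc::StatusCode::CANCELLED, "call was cancelled");
  }
  reply->set_message("Hello " + request->name());
  return Status::OK;
}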
We didn't want to add an option for this specifically, because it would complicate the code and the tool for little benefit, so the code generator and the function signature force you to have that parameter at all times. This isn't really a big deal, because in C++ you can specifically ask your compiler to ignore a parameter in a specific instance, for example with the following:
Status SayHello(ServerContext* context, const HelloRequest* request,
                HelloReply* reply) override {
  (void) context; // ignore that variable without causing warnings
  std::string prefix("Hello ");
  reply->set_message(prefix + request->name());
  return Status::OK;
}
And that's how I'd suggest you take care of that warning in this specific instance, without disabling warnings for your whole project.

Force iron-router to get back a ready from waitOn

Currently it seems not to be possible to force a ready() state in the route. For example:
I have a waitOn on 2 subscriptions. One of them returns a Meteor.Error, so the route stays in the loading state with no end.
Is there a recommended way to tell iron-router "wait until the subscription is ready OR the subscription fails with an error"?
Edit:
To explain my special case:
The waitOn is for a search route. The search arguments are "what" and "where". In "where" I have a plain string address and need to convert it to a geo coordinate. For this I use the Google Maps converter on the server side (because it's synchronous there). When no address is found I need to get back an error à la "This address must be wrong". For this I need the ability to pass back an error.
If I do it the way David Weldon suggested, I'd need to do this step in the waitOn method, but the client-side Google Maps converter is not synchronous; it's async, so this would not work.
General Recommendations
It's okay for your publishers to throw errors, but those conditions should only be hit if the client does the wrong thing. In other words, you are solving the wrong problem - you should only subscribe when you know the publisher will not throw an error. Let's look at an example:
Suppose your route needs to subscribe to newPosts and postsForSuperuser. Assume that the postsForSuperuser publisher will throw an error if the user isn't a superuser. It's now the client's job not to let that happen. The waitOn definition could look like:
waitOn: function() {
  var subs = [Meteor.subscribe('newPosts')];
  if (Roles.userIsInRole(Meteor.user(), ['superuser']))
    subs.push(Meteor.subscribe('postsForSuperuser'));
  return subs;
}
Because we are conditionally adding the postsForSuperuser subscription, we don't give the publisher the opportunity to throw an error.
Your specific use case
Your case is a little trickier, because mechanically the client is doing the correct thing but the user input may happen to be bad. In this case, I don't think throwing an error is appropriate. Here are some recommendations:
Avoid the problem by checking the address via a method call prior to changing the route.
If an address is found to be invalid, have the publish function immediately return this.ready() (see the sketch after this list). This will prevent your route from failing, but you'll be left assuming that the reason you have no data is the address. If that's a valid assumption (i.e. it's the only possible reason for failure), then your router could deal with this by using a dataNotFound hook.
If you need to explicitly identify the cause of the error, have a close look at the 'counts' example from the docs. You can declare a client-only collection called addressErrors and then call this.added with a dynamically created document describing the cause of the error. The implementation of this is a little more tricky, and probably worthy of a separate question if you get stuck. I'd see if the first two make sense before attempting it.
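For the second recommendation, a minimal sketch of what the publish function could look like (geocodeAddress and SearchResults are made-up names for your synchronous server-side geocoder and your results collection):
Meteor.publish('searchResults', function (what, where) {
  var coords = geocodeAddress(where);   // hypothetical synchronous geocoder
  if (!coords) {
    // invalid address: mark the subscription ready with no documents instead of throwing
    this.ready();
    return;
  }
  return SearchResults.find({ what: what, location: coords });
});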

Signalr - Serialize callback as event not a function call?

In SignalR, is there any support for having events instead of callbacks?
Let me explain before you grab your pitchforks.
Following the first example here,
Clients.All.addContosoChatMessageToPage(name, message);
This wouldn't call a hub proxy's addContosoChatMessageToPage(name, message), but would instead dispatch an addContosoChatMessageToPage event with some extra information (not asking that it be the same API call exactly).
The reason I'm asking all of this is because
This works much better alongside functional reactive programming frameworks like Elm and Bacon.js.
I don't want to do this myself and essentially create my own sub-framework. Of course I could always do Clients.All.CreateEvent(name,params...) where I'm continually calling back my method to do this event creation
I actually think events work better in some scenarios for separation of concerns.
Am I crazy? Does something like this exist?
This is already supported. If you don't want to do the dispatching yourself and you know the name of the "event" or "method" at runtime you can do this:
IClientProxy proxy = Clients.All;
proxy.Invoke(name, args);
This lets you write code where you may not know, at compile time, the name of the event you're trying to call back on the client.

custom validator against remote object

I have a need to validate a field against our database to verify uniqueness. The problem I seem to be having is that the validator's doValidation() exits before we've heard back from the database.
How can I have the validator wait to return its payload until after we've heard from the DB?
Or perhaps a better question might be (since I think the first question is impossible): how can I set this up differently, so that I don't need to wait, or so that the wait doesn't cause the validation to automatically return valid?
If you're using a remote object, you can specify the method call inside your remote declaration and assign a function to the result call. The result call only runs once the remote server returns something, so it won't be run before your validation.
Do your validation call in said result function call (which you will have to create) and you should be good. Your code should go something like this:
<s:RemoteObject id="employeeService"
    destination="ColdFusion"
    source="f4iaw100.remoteData.employeeData"
    endpoint="http://adobetes.com/flex2gateway/"
    result="employeeService_resultHandler(event)">
    <s:method name="dataCheckCall" result="dataCheckResult(event)"/>
</s:RemoteObject>
And in your script:
protected function dataCheckResult(event:ResultEvent):void {
    doValidate();
}
Edit: As soon as you call "dataCheckCall" the method will start running. If, for whatever reason, you want to call this WITHIN your validator, you can do so, and then dataCheckResult will run whenever it returns with its payload (pretend doValidate is called elsewhere). I've left a message below as well.
You are trying to fit an asynchronous process (fetching data from a DB) into a synchronous process (checking all the validators in turn).
This won't work...
You'll need to either roll your own validator framework, or use a different method of determining the legality of your controls.
P.S. The MX validators are rubbish anyway!
What I've managed to do, seems to work, mostly. I don't like it, but it at least performs the validation against the remote source.
What I've done, then, is to use a 'keyUp' event handler to spin off the database lookup. In the meanwhile, I set up a string variable to act as a kind of flag, which gets marked as 'processing'. When the response event fires, I examine its contents and either clear the flag or set it to some kind of error.
Then I have created a new 'EmptyStringValidator' which checks the contents of this flag and does its job accordingly.
It's indirect, but, so far, it seems to work.
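Roughly, the moving parts look like this (the names are placeholders, and the validator itself simply checks that lookupStatus is an empty string):
import flash.events.KeyboardEvent;
import mx.rpc.events.ResultEvent;

private var lookupStatus:String = "";   // "" = ok, "processing" = waiting, anything else = error text

private function onKeyUp(event:KeyboardEvent):void {
    lookupStatus = "processing";
    employeeService.dataCheckCall(inputField.text);   // kick off the async uniqueness lookup
}

private function dataCheckResult(event:ResultEvent):void {
    lookupStatus = event.result.isUnique ? "" : "Value is already taken";
    lookupValidator.validate();   // re-run the 'EmptyStringValidator' now that we have an answer
}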
