Any way to programmatically open a collapsed console group?

I use console.groupCollapsed() to hide functions I don't generally need to review, but may occasionally want to dig into. One downside of this is that if I use console.warn or console.error inside that collapsed group, I may not notice it or it may be very hard to find. So when I encounter an error, I would like to force the collapsed group open to make it easy to spot the warning/error.
Is there any way to use JS to force the current console group (or just all blindly) to open?
Some way to jump directly to warnings/errors in Chrome debugger? Filtering just to warnings/errors does not work, as they remain hidden inside collapsed groups.
Or perhaps some way to force the Chrome debugger to open all groups at once? Alt/Option-clicking an object shows all levels inside it, but there does not appear to be a similar command to open all groups in the console. This would be a simple and probably ideal solution.

There is no way to do this currently, nor am I aware of any plans to introduce such functionality, mainly because I don't think enough developers are actively using the feature to create demand for it.
You can achieve what you're trying to do, but you need to write your own logging library. First thing you'll need to do is override the console API. Here is an example of what I do:
const consoleInterceptorKeysStack: string[][] = [];

// Small helpers used below:
const lastElement = <T>(arr: T[]): T | undefined => arr[arr.length - 1];
const isUndefined = (value: any): value is undefined => value === undefined;

export function getCurrentlyInterceptedConsoleKeys () {
  return lastElement(consoleInterceptorKeysStack);
}

export function interceptConsole (keys: string[] = ['trace', 'debug', 'log', 'info', 'warn', 'error']) {
  consoleInterceptorKeysStack.push(keys);
  const backup: any = {};
  for (let i = 0; i < keys.length; ++i) {
    const key = keys[i];
    const _log = (console as any)[key];
    backup[key] = _log;
    (console as any)[key] = (...args: any[]) => {
      const frame = getCurrentLogFrame();
      // No active frame: pass straight through to the real console method.
      if (isUndefined(frame)) return _log(...args);
      // Record the log in the current frame so the printer can later
      // decide whether the enclosing group should be expanded.
      frame.children.push({ type: 'console', key, args });
      frame.hasLogs = true;
      frame.expand = true;
      _log(...args);
    };
  }
  // Returns a function that restores the original console methods.
  return function restoreConsole () {
    consoleInterceptorKeysStack.pop();
    for (const key in backup) {
      (console as any)[key] = backup[key];
    }
  };
}
You'll notice a reference to a function getCurrentLogFrame(). Your logging framework will require the use of a global array that represents an execution stack. When you make a call, push details of the call onto the stack. When you leave the call, pop the stack. As you can see, when logging to the console, I'm not immediately writing the logs to the console. Instead, I'm storing them in the stack I'm maintaining. Elsewhere in the framework, when I enter and leave calls, I'm augmenting the existing stack frames with references to stack frames for child calls that were made before I pop the child frame from the stack.
By the time the entire execution stack finishes, I've captured a complete log of everything that was called, who called it, what the return value was (if any), and so on. At that point, I can pass the root stack frame to a function that prints the entire stack out to the console, now with the full benefit of hindsight on every call that was made, allowing me to decide what the logs should actually look like. If deeper in the stack there was (for example) a console.debug statement or an error thrown, I can choose to use console.group instead of console.groupCollapsed. If there was a return value, I can print it as a tail argument of the console.group statement. The possibilities are fairly extensive.
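To make that concrete, here is a rough sketch of the frame stack that getCurrentLogFrame() reads from, plus a hindsight printer. The frame shape matches the interceptor above, but enterFrame, exitFrame, and printFrame are names assumed for illustration, not the answerer's actual framework:

const frameStack = [];

function getCurrentLogFrame() {
  return frameStack[frameStack.length - 1];
}

function enterFrame(name) {
  const frame = { name, children: [], hasLogs: false, expand: false };
  const parent = getCurrentLogFrame();
  if (parent) parent.children.push({ type: 'frame', frame });
  frameStack.push(frame);
  return frame;
}

function exitFrame() {
  const frame = frameStack.pop();
  const parent = getCurrentLogFrame();
  // Bubble upward: if a child frame had logs, every ancestor group
  // should also be rendered expanded.
  if (parent && frame.expand) {
    parent.expand = true;
    parent.hasLogs = parent.hasLogs || frame.hasLogs;
  }
  return frame;
}

// Call this after restoreConsole(), so the real console methods are used.
function printFrame(frame) {
  (frame.expand ? console.group : console.groupCollapsed).call(console, frame.name);
  for (const child of frame.children) {
    if (child.type === 'console') console[child.key](...child.args);
    else printFrame(child.frame);
  }
  console.groupEnd();
}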
Note that you will have to architect your application in a way that allows for logging to be deeply integrated into your code, otherwise your code will get very messy. I use a visitor pattern for this.

I have a suite of standard interface types that do almost everything of significance in my system's architecture. Each interface method includes a visitor object, which has properties and methods for every interface type in use in my system. Rather than calling interface methods directly, I use the visitor to do it. I have a standard visitor implementation that simply forwards calls to interface methods directly (i.e. the visitor doesn't do anything much on its own), but I then have a subclassed visitor type that references my logging framework internally. For every call, it tells the logging framework that we're entering a new execution frame. It then calls the default visitor internally to make the actual call, and when the call returns, the visitor tells the logging framework to exit the current call (i.e. to pop the stack and finalize any references to child calls, etc.).

By having different visitor types, you can use your slow, expensive, logging visitor in development, and your fast, forwarding-only, default visitor in production.
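A loose sketch of that visitor arrangement, in terms of the frame functions sketched earlier (all names here are hypothetical; the real version would forward typed interface methods rather than a generic visit()):

class ForwardingVisitor {
  visit(target, method, args) {
    return target[method](...args); // production: no logging overhead
  }
}

class LoggingVisitor {
  constructor(inner) { this.inner = inner; }
  visit(target, method, args) {
    enterFrame(method); // push an execution frame for this call
    try {
      return this.inner.visit(target, method, args);
    } finally {
      exitFrame(); // pop and finalize, even if the call throws
    }
  }
}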

Wanting to chain web requests and pass data down through them in Twilio Studio

So I'm playing with Twilio Studio, and building a sample IVR. I have it doing a web request to an API that looks up the customer based on their phone number. That works, I can get/say their name to them.
I'm having trouble with the next step: I want to do another HTTP request and pass the customer_id that I get from webrequest1 to webrequest2, but it almost looks like all the web requests fire right when the call starts instead of in order/serialized.
It looks sorta like this:
call comes in, make HTTP request to look up the customer (I get their customer_id and name)
split on content; if the customer name is present (it is), it goes down this decision path
do another HTTP request to "get_open_invoice_count"; this request needs the customer_id, though, and not their phone number.
From looking at the logs it's always got a blank value there, even though in the "Say" step just above I can say their customer_id and name.
I can almost imagine someone is going to say I should go use a function, but for some reason I can't get a simple function to do a (got) GET request.
I've tried to copy/paste this into a function and I kind of think this example is incomplete: https://support.twilio.com/hc/en-us/articles/115007737928-Getting-Started-with-Twilio-Functions-Beta-
var got = require('got');

got('https://swapi.co/api/people/?search=r2', { json: true })
  .then(function (response) {
    console.log(response);
    twiml.message(response.body.results[0].url);
    callback(null, twiml);
  })
  .catch(function (error) {
    callback(error);
  });
If this is the right way to do it, I'd love to see one of these ^ examples that returns json that can be used in the rest of the flow. Am I missing something about the execution model? I'm hoping it executes step by step as people flow through the studio, but I'm wondering if it executes the whole thing at boot?
Maybe another way to ask this question is: if I wanted to have the IVR be like
- If I know who you are, I send you down this path: I look up some account details and say them to you, and give you different choices than if you are a stranger.
How do you do this?
You're right -- that code excerpt from the docs is just a portion that demonstrates how you might use the got package.
That same usage in context of the complete Twilio Serverless Function could look something like this:
exports.handler = function (context, event, callback) {
  var twiml = new Twilio.twiml.MessagingResponse();
  var got = require('got');

  got('https://example.com/api/people/?search=r2', { json: true })
    .then(function (response) {
      console.log(response);
      twiml.message(response.body.results[0].url);
      callback(null, twiml);
    })
    .catch(function (error) {
      callback(error);
    });
};
However, another part of the issue here is that the advice in this documentation is perfectly reasonable for Functions when building an app on the Twilio Runtime, but there are a couple of unstated caveats when invoking these functions from a Studio Flow context. Here are the relevant docs about that: https://support.twilio.com/hc/en-us/articles/360019580493-Using-Twilio-Functions-to-Enhance-Studio-Voice-Calls-with-Custom-TwiML
This function would be acceptable if you were calling it directly from an inbound number, but when you use the Function widget within a Studio flow to return TwiML, Studio releases control of the call.
If you want to call external logic that returns TwiML from a flow, and want to return to that flow later, you need to use the TwiML Redirect widget (see "Returning control to Studio" for details).
However, you don't have to return TwiML to Studio when calling external logic! It sounds like you want to make an external call to get some information, and then have your Flow direct the call down one path or another, based on that information. When using a Runtime Function, just have the function return an object instead of twiml, and then you can access that object's properties within your flow as liquid variables, like {{widgets.MY_WIDGET_NAME.parsed.PROPERTY_NAME}}. See the docs for the Run Function widget for more info. You would then use a "Split Based On..." widget following the function in your flow to direct the call down the desired branch.
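For instance, a Function along these lines returns plain JSON that the flow can consume (the open_invoice_count property is just an assumed example):

exports.handler = function (context, event, callback) {
  // Do the lookup you need here (e.g. fetch open invoices for a
  // customer_id passed in via the widget's function parameters), then
  // return a plain object instead of TwiML so that control stays with
  // the Studio flow.
  callback(null, { open_invoice_count: 3 });
};

A "Split Based On..." widget after the Function can then branch on {{widgets.MY_WIDGET_NAME.parsed.open_invoice_count}}.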
The one other thing to mention here is the Make HTTP Request widget. If your Runtime Function is just wrapping a call to another web service, you might be able to get away with just using the widget to call that service directly. This works best when the service is under your control, since then you can ensure that the returned data is in a format that is usable to the widget.

How can I use collection.find as a result of a meteor method?

I'm trying to follow the "Use the return value of a Meteor method in a template helper" pattern outlined here, except with collections.
Essentially, I've got something like this going:
(server side)
Meteor.methods({
  queryTest: function (selector) {
    console.log("In server meteor method...");
    return MyCollection.find(selector);
  }
});
(client side)
Meteor.call('queryTest', {}, function (error, results) {
  console.log("in queryTest client callback...");
  var queryResult = [];
  results.forEach(function (result) {
    // massage it into something more useful for display
    // and append it to queryResult...
  });
  Session.set("query-result", queryResult);
});

Template.query_test_template.helpers({
  query_test_result: function () {
    return Session.get("query-result");
  }
});
The problem is, my callback (from Meteor.call) doesn't even get invoked.
If I replace the method body with just return "foo", then the callback does get called. Also, if I add a .fetch() to the find, it displays fine (but is no longer reactive, which breaks everything else).
What gives? Why is the callback not being invoked? I feel like I'm really close and just need the right incantation...
If it at all matters: I was doing all the queries on the client side just fine, but want to experiment with the likes of _ensureIndex and do full text searches, which from what I can tell, are basically only available through server-side method calls (and not in mini-mongo on the client).
EDIT
Ok, so I migrated things to publish/subscribe, and overall they're working, but when I try to use a session value as the selector, it's not working right. It might be a matter of where I put the "subscribe".
So, I have a publish that takes a parameter "selector" (the intent is to pass in mongo selectors).
On the client, I have subscribe like:
Meteor.subscribe('my-collection-query', Session.get("my-collection-query-filter"));
But it has spotty behaviour. One article recommended putting these in Template.body.onCreated. That works, but doesn't result in something reactive (i.e. when I change that session value in the console, the displayed value doesn't change).
So, if I follow the advice on another article, it puts the subscribe right in the relevant helper function of the template that calls on that collection. That works great, but if I have MULTIPLE templates calling into that collection, I have to add the subscribe to every single one of them for it to work.
Neither of these seems like the right thing. I think of "subscribing" as "laying down the pipes and just leaving them there to work", but that may be wrong.
I'll keep reading into the docs. Maybe somewhere, the scope of a subscription is properly explained.
You need to publish your data and subscribe to it in your client.
If you have not removed the "autopublish" package yet, everything you have is automatically published. So when you query a collection on the client (in a helper method, for example), you get results. This package is useful for quick development and prototyping, but in a real application it should be removed. You should publish your data according to your app's needs and use cases (not all users have to see all data in all use cases).
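A minimal sketch of that pattern, reusing the names from the question; wrapping the subscribe call in this.autorun also addresses the EDIT, since the subscription is re-established whenever the session value changes:

// server
Meteor.publish('my-collection-query', function (selector) {
  return MyCollection.find(selector);
});

// client
Template.query_test_template.onCreated(function () {
  this.autorun(() => {
    // Re-subscribes reactively when the session value changes.
    Meteor.subscribe('my-collection-query',
      Session.get('my-collection-query-filter'));
  });
});

Template.query_test_template.helpers({
  query_test_result: function () {
    // The subscription controls what is in minimongo; just query it.
    return MyCollection.find().fetch();
  }
});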

Purely functional feedback suppression?

I have a problem that I can solve reasonably easily with classic imperative programming using state: I'm writing a co-browsing app that shares URLs between several nodes. The program has a module for communication that I call link and one for browser handling that I call browser. Now when a URL arrives in link, I use the browser module to tell the actual web browser to start loading the URL.
The actual browser will then trigger its navigation detection because the incoming URL has started to load, and hence the URL will immediately be presented as a candidate for sending to the other side. That must be avoided, since it would create an infinite loop of link-following to the same URL, along the lines of the following (very conceptualized) pseudo-code (it's Javascript, but please consider that a somewhat irrelevant implementation detail):
actualWebBrowser.urlListen.gotURL(function (url) {
  // Browser delivered a URL
  browser.process(url);
});

link.receivedAnURL(function (url) {
  actualWebBrowser.loadURL(url); // will eventually trigger the listener above
});
What I did first was to store every incoming URL in browser and simply eat the URL immediately when it arrives, then remove it from a 'received' list in browser, along the lines of this:
browser.recents = {}; // <--- mutable state
browser.recentsExpiry = 40000;

browser.doSend = function (url) {
  var now = (new Date).getTime();
  link.sendURL(url); // <-- URL goes out on the network
  // Side-effect, mutating module state, clumsy clean-up mechanism :(
  browser.recents[url] = now;
  setTimeout(function () { delete browser.recents[url]; }, browser.recentsExpiry);
  return true;
};

browser.process = function (url) {
  if (/* sanity checks on `url` */) {
    var now = (new Date).getTime();
    var duplicate = browser.recents[url];
    if (!duplicate) return browser.doSend(url);
    if ((now - duplicate) > browser.recentsExpiry) {
      return browser.doSend(url);
    }
    return false;
  }
};
It works but I'm a bit disappointed by my solution because of my habitual use of mutable state in browser. Is there a "Better Way (tm)" using immutable data structures/functional programming or the like for a situation like this?
A more functional approach to handling long-lived state is to use it as a parameter to a recursive function, and have one execution of the function responsible for handling a single "action" of some kind, then calling itself again with the new state.
F#'s MailboxProcessor is one example of this kind of approach. However it does depend on having the processing happen on an independent thread which isn't the same as the event-driven style of your code.
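A rough JavaScript sketch of that shape, adapted to callback style; receive and handle are illustrative stand-ins (receive is assumed to deliver exactly one next message to the callback it is given):

// One call of `loop` handles a single message, then recurses with the
// new state; nothing is ever reassigned.
function loop(state, receive, handle) {
  receive(function (msg) {
    loop(handle(state, msg), receive, handle);
  });
}

// Usage sketch: thread the `recents` map through as the state.
// loop({}, nextUrl, function (recents, url) {
//   /* decide whether to send; return the new recents map */
//   return recents;
// });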
As you identify, the setTimeout in your code complicates the state management. One way you could simplify this out is to instead have browser.process filter out any timed-out URLs before it does anything else. That would also eliminate the need for the extra timeout check on the specific URL it is processing.
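For example, process could be written as a pure function over the question's recents map, with the caller owning the single piece of state and performing the side effects (a sketch, reusing names from the question):

const EXPIRY_MS = 40000;

// Pure: takes the current recents map, returns the new map plus a decision.
function process(recents, url, now) {
  // Drop every timed-out entry up front, replacing the setTimeout cleanup.
  const live = Object.fromEntries(
    Object.entries(recents).filter(([, t]) => now - t <= EXPIRY_MS)
  );
  if (url in live) return { send: false, recents: live };
  return { send: true, recents: Object.assign({}, live, { [url]: now }) };
}

// Caller:
// let recents = {};
// const result = process(recents, url, Date.now());
// recents = result.recents;
// if (result.send) link.sendURL(url);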
Even if you can't eliminate mutable state from your code entirely, you should think carefully about the scope and lifetime of that state.
For example might you want multiple independent browsers? If so you should think about how the recents set can be encapsulated to just belong to a single browser, so that you don't get collisions. Even if you don't need multiple ones for your actual application, this might help testability.
There are various ways you might keep the state private to a specific browser, depending in part on what features the language has available. For example in a language with objects a natural way would be to make it a private member of a browser object.

Update document in Meteor mini-mongo without updating server collections

In Meteor, I have a collection that the client subscribes to. In some cases, instead of publishing the documents that exist in the collection on the server, I want to send down some bogus data. Now that's fine using the this.added function in the publish.
My problem is that I want to treat the bogus doc as if it were a real document, and this gets troublesome specifically when I want to update it. For the real docs I run RealDocs.update, but when doing that on the bogus doc it fails since there is no representation of it on the server (and I'd like to keep it that way).
A collection API that allowed me to pass something like local = true would be fantastic, but I have no idea how difficult that would be to implement and I'm not too fond of modifying the core code.
Right now I'm stuck at creating a BogusDocs = new Meteor.Collection(null), but that makes populating the collection more difficult since I have to either hard-code fixtures in the client code or use a method to get the data from the server, and I have to make sure I call BogusDocs.update instead of RealDocs.update as soon as I'm dealing with bogus data.
Maybe I could actually insert the data on the server and make sure it's removed later, but the data really has nothing to do with the server side collection so I'd rather avoid that.
Any thoughts on how to approach this problem?
After some further investigation (the EventedMind site), it turns out that one can modify the local collection without making calls to the server. This is done by running the same methods as you usually would, but on MyCollection._collection instead of just on MyCollection. MyCollection.update() would thus become MyCollection._collection.update(). So, using a simple wrapper, one can pass in the usual arguments to an update call to update the collection as usual (which will try to call the server, which in turn will trigger your allow/deny rules), or we can add 'local' as the last argument to only perform the update in the client collection. Something like this should do it.
DocsUpdateWrapper = function () {
  var args = Array.prototype.slice.call(arguments);
  var lastIndex = args.length - 1;
  if (args[lastIndex] === 'local') {
    // Update only the client-side (minimongo) collection.
    Docs._collection.update.apply(Docs._collection, args.slice(0, lastIndex));
  } else {
    // Normal update: goes to the server and triggers allow/deny rules.
    Docs.update.apply(Docs, args);
  }
};
(This could of course be extended to a DocsWrapper that allows for insertions and removals too. I didn't try this function yet, but it should serve well as an example.)
The biggest benefit of this, in my opinion, is that we can use the exact same calls to retrieve documents from the local collection, regardless of whether they are local-only or also live on the server. By adding a simple boolean to each doc we can keep track of which documents are only local and which are not, so we know how to update them (an improved DocsWrapper could check for that boolean, so we could even omit passing the 'local' argument).
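A sketch of that improvement, with an assumed isLocalOnly flag on the local-only docs:

// Hypothetical: local-only docs carry a flag, so the wrapper can decide.
DocsUpdate = function (selector, modifier) {
  var doc = Docs.findOne(selector);
  if (doc && doc.isLocalOnly) {
    Docs._collection.update(selector, modifier); // client only
  } else {
    Docs.update(selector, modifier); // server round-trip, allow/deny apply
  }
};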
There are some people working on local storage in the browser
https://github.com/awwx/meteor-browser-store
You might be able to adapt some of their ideas to provide "fake" documents.
I would use the transform feature on the collection to make an object that knows what to do with itself (on the client). Give it the correct update method (real/bogus), then call .update on the document rather than a general one.
You can put the code from this.added into the transform process.
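A sketch of that idea (the save method and isLocalOnly flag are assumed names):

Docs = new Meteor.Collection('docs', {
  transform: function (doc) {
    // Each fetched doc knows which collection to update.
    doc.save = doc.isLocalOnly
      ? function (modifier) { Docs._collection.update(doc._id, modifier); }
      : function (modifier) { Docs.update(doc._id, modifier); };
    return doc;
  }
});

// Usage: Docs.findOne(id).save({ $set: { read: true } });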
You can also set up a local minimongo collection and insert into it from a method callback:

FoundAgents = new Meteor.Collection(null, Agent.transformData)
FoundAgents.remove({})

Meteor.call 'Get_agentsCloseToOffer', me, ping, (err, data) ->
  if err
    console.log JSON.stringify(err, null, 2)
  else
    _.each data, (item) ->
      FoundAgents.insert(item)
Maybe this is interesting for you as well: I created two examples with native Meteor local collections at meteorpad. The first pad shows an example with a plain reactive recordset: Sample_Publish_to_Local-Collection. The second uses the collection's .observe method to listen to data: Collection.observe().

Meteor is not transforming my documents before publication

For security reasons, I want to add and remove properties of documents before publishing them to the client, depending on some dynamic calculations. I follow the Meteor documentation and this other SO question.
For the sake of simplicity, say I try to add the following static property to every document (SERVER SIDE ONLY):
var Docs = new Meteor.Collection('docs', {
  transform: function (f) {
    console.log('Tagging doc: ' + f._id);
    f.myProp = 1;
    return f;
  }
});
For some strange reason, this does not work:
- Only some documents trigger the transform function, not all (I can see this through the console logging).
- On the client side, none of the documents are tagged with myProp.
I haven't tried to put the transform on both the client and the server, because in my real life app I cannot do the necessary computation on the client.
Transform functions on collections are intended for convenience, not security -- note that when you call observeChanges on a cursor, the information is not passed through the transform function (it is passed through the transform when you call observe). The default way of publishing a cursor works by calling observeChanges on it.
If you want to strip off fields of a cursor you're publishing, use the fields option to find on your collection. If you want to do something more complicated, you can explicitly do whatever computation you need if your publish function calls added, changed, and removed itself, instead of returning a cursor. Check out the docs for Meteor.publish for details.
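Both approaches in sketch form; the collection name and the secretProp/myProp fields are placeholders matching the question:

// 1. Strip fields off a published cursor with the `fields` option:
Meteor.publish('docs', function () {
  return Docs.find({}, { fields: { secretProp: 0 } });
});

// 2. Reshape documents explicitly with added/changed/removed:
Meteor.publish('docsWithMyProp', function () {
  var self = this;
  var handle = Docs.find().observeChanges({
    added: function (id, fields) {
      fields.myProp = 1; // any dynamic computation goes here
      self.added('docs', id, fields);
    },
    changed: function (id, fields) {
      self.changed('docs', id, fields);
    },
    removed: function (id) {
      self.removed('docs', id);
    }
  });
  self.ready();
  self.onStop(function () { handle.stop(); });
});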
