Fetch error on Office Script (Excel on web) - fetch

I am trying to call an external API from Excel on the web. However, I am stuck trying to get the result from the fetch call. I am even using the example from the Office docs to make sure.
From Excel, I click on Automate to create a new script:
async function main(workbook: ExcelScript.Workbook): Promise<void> {
  let fetchResult = await fetch('https://jsonplaceholder.typicode.com/todos/1');
  let json = await fetchResult.json();
}
I keep getting the following message (at the fetchResult.json() call):
"Office Scripts cannot infer the data type of this variable or inferring it might result in unexpected errors. Please annotate the type of the variable to avoid this error. You can also use the Quick fix option provided in the editor to auto fill the type based on the usage. Quick Fix can be accessed by right clicking on the variable name and selecting Quick Fix link."
When running the Chrome inspector, the API request seems to be on hold "CAUTION: request is not finished yet"
PS: I am not the Office administrator, and the administrator is not reachable right now, but I am hoping this is not a problem with my user or the Office account configuration.
Any idea what the problem might be?
Thanks!

"any" types not being allowed in OfficeScript is by design. We think any types in general can lead to developer errors. I understand it can be hard to declare types – but these days most popular APIs provide you the interface (or d.ts) that you can use.
Secondly, there are tools such as https://quicktype.io/typescript where you can paste in your sample JSON and it'll give you the full interface, which you can then declare in your code using the interface keyword.
See this code for example: https://github.com/sumurthy/officescripts-projects/blob/main/API%20Calls/APICall.ts
You don’t need to declare all properties – only the ones you’ll use.
It’s more up-front work – but in the end the quality is better.

Adding an interface definition for the expected JSON type fixed the problem for me.
interface Todo {
  userId: number;
  id: number;
  title: string;
  completed: boolean;
}

async function main(workbook: ExcelScript.Workbook): Promise<void> {
  let fetchResult = await fetch('https://jsonplaceholder.typicode.com/todos/1');
  let json: Todo = await fetchResult.json();
  console.log(json);
}
You may need to define a different interface if the web API you're calling returns a different data structure.

Any way to programmatically open a collapsed console group?

I use console.groupCollapsed() to hide functions I don't generally need to review, but may occasionally want to dig into. One downside of this is that if I use console.warn or console.error inside that collapsed group, I may not notice it or it may be very hard to find. So when I encounter an error, I would like to force the collapsed group open to make it easy to spot the warning/error.
Is there any way to use JS to force the current console group (or just all blindly) to open?
Some way to jump directly to warnings/errors in Chrome debugger? Filtering just to warnings/errors does not work, as they remain hidden inside collapsed groups.
Or perhaps some way to force Chrome debugger to open all groups at once? <alt/option>-clicking an object shows all levels inside it, but there does not appear to be a similar command to open all groups in the console. This would be a simple and probably ideal solution.
There is no way to do this currently, nor am I aware of any plans to introduce such functionality, mainly because I don't think enough developers use this feature heavily enough to create demand for it.
You can achieve what you're trying to do, but you need to write your own logging library. First thing you'll need to do is override the console API. Here is an example of what I do:
// Note: lastElement, isUndefined and getCurrentLogFrame are helpers from the
// surrounding framework and are not shown here.
const consoleInterceptorKeysStack: string[][] = [];

export function getCurrentlyInterceptedConsoleKeys () { return lastElement(consoleInterceptorKeysStack); }

export function interceptConsole (keys: string[] = ['trace', 'debug', 'log', 'info', 'warn', 'error']) {
  consoleInterceptorKeysStack.push(keys);
  const backup: any = {};
  for (let i = 0; i < keys.length; ++i) {
    const key = keys[i];
    const _log = (console as any)[key];
    backup[key] = _log;
    (console as any)[key] = (...args: any[]) => {
      const frame = getCurrentLogFrame();
      if (isUndefined(frame)) return _log(...args);
      frame.children.push({ type: 'console', key, args });
      frame.hasLogs = true;
      frame.expand = true;
      _log(...args);
    };
  }
  return function restoreConsole () {
    consoleInterceptorKeysStack.pop();
    for (const key in backup) {
      (console as any)[key] = backup[key];
    }
  };
}
You'll notice a reference to a function getCurrentLogFrame(). Your logging framework will require the use of a global array that represents an execution stack. When you make a call, push details of the call onto the stack. When you leave the call, pop the stack. As you can see, when logging to the console, I'm not immediately writing the logs to the console. Instead, I'm storing them in the stack I'm maintaining. Elsewhere in the framework, when I enter and leave calls, I'm augmenting the existing stack frames with references to stack frames for child calls that were made before I pop the child frame from the stack.
By the time the entire execution stack finishes, I've captured a complete log of everything that was called, who called it, what the return value was (if any), and so on. And at that time, I can then pass the root stack frame to a function that prints the entire stack out to the console, now with the full benefit of hindsight on every call that was made, allowing me to decide what the logs should actually look like. If deeper in the stack there was (for example) a console.debug statement or an error thrown, I can choose to use console.group instead of console.groupCollapsed. If there was a return value, I could print that as a tail argument of the console.group statement. The possibilities are fairly extensive. Here's a screenshot of what my console logs look like:
Note that you will have to architect your application in a way that allows for logging to be deeply integrated into your code, otherwise your code will get very messy. I use a visitor pattern for this. I have a suite of standard interface types that do almost everything of significance in my system's architecture. Each interface method includes a visitor object, which has properties and methods for every interface type in use in my system. Rather than calling interface methods directly, I use the visitor to do it. I have a standard visitor implementation that simply forwards calls to interface methods directly (i.e. the visitor doesn't do anything much on its own), but I then have a subclassed visitor type that references my logging framework internally. For every call, it tells the logging framework that we're entering a new execution frame. It then calls the default visitor internally to make the actual call, and when the call returns, the visitor tells the logging framework to exit the current call (i.e. to pop the stack and finalize any references to child calls, etc.). By having different visitor types, it means you can use your slow, expensive, logging visitor in development, and your fast, forwarding-only, default visitor in production.
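The deferred approach described above can be sketched minimally. This is an illustrative toy, not the answerer's framework (names like LogFrame are invented here): buffer each group's log calls, and only at flush time, with full hindsight, decide whether the group renders open or collapsed.

```typescript
// Toy sketch of deferred group logging (all names here are illustrative).
type Level = 'log' | 'warn' | 'error';

class LogFrame {
  private entries: Array<{ level: Level; args: unknown[] }> = [];
  hasProblems = false; // set when a warn/error is buffered

  constructor(public label: string) {}

  add(level: Level, ...args: unknown[]): void {
    this.entries.push({ level, args });
    if (level === 'warn' || level === 'error') this.hasProblems = true;
  }

  // With hindsight, open the group only if it contains warnings/errors.
  flush(): void {
    const open = this.hasProblems ? console.group : console.groupCollapsed;
    open.call(console, this.label);
    for (const e of this.entries) console[e.level](...e.args);
    console.groupEnd();
  }
}
```

A frame that only buffered plain logs prints with console.groupCollapsed; one that buffered a warning or error prints with console.group, so the problem is visible without manually expanding anything.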

getServerSideProps and mysql (RowDataPacket)

I'd like to do server side rendering with Next.js using the getServerSideProps method like explained in the docs.
The data should come from a database, so I'm using the mysql package. This results in the following error:
Error serializing `.assertions[0]` returned from `getServerSideProps` in "/assertion". Reason: `object` ("[object Object]") cannot be serialized as JSON. Please only return JSON serializable data types.
I think the reason for this is, because the query method from mysql returns special objects (RowDataPacket). The result that I'd like to pass to getServerSideProps looks like this when logged:
[ RowDataPacket { id: 1, title: 'Test' } ]
I can fix this error by wrapping the result with JSON.parse(JSON.stringify(result)) but this seems very odd to me.
So, my simple question is: How to use mysql.query and getServerSideProps correctly?
Or might this be an issue that should be addressed by Next.js?
Thank you
I've run into this issue myself. When I had the issue it wasn't related to MySQL. The problem is getServerSideProps() expects you to return a "JSON serializable data type" which basically means a Plain ol' JavaScript Object (POJO).
To fix it, simply create a new POJO to return. A few ways you can go are:
// using spread operator to create new object
const plainData = {
  ...queryResult
}

// recreating the object with plucked props
const plainData = {
  title: queryResult.title,
  content: queryResult.content
}

// data conversion (wax-on wax-off)
const plainData = JSON.parse(JSON.stringify(queryResult))
Your specific data is in an array so your simplest solution is the wax-on wax-off since it will support arrays. Otherwise you've got to map over it.
why tho?
You can see your object has RowDataPacket attached to it. This means it's an instance of RowDataPacket, and Next.js doesn't allow class instances unless the prototype strictly equals Object.prototype (see the related Next.js source code).
This seems weird, but they have already described why it's necessary in a GitHub issue. TL;DR: dates cause issues client-side when the page hydrates.
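Here is a small sketch of the prototype point. RowDataPacket below is a stand-in class, not the real mysql export: round-tripping through JSON strips the class prototype and yields plain objects, which is exactly what getServerSideProps requires.

```typescript
// Stand-in for the mysql driver's row class (illustrative only).
class RowDataPacket {
  constructor(public id: number, public title: string) {}
}

// "wax-on wax-off": serialize and re-parse to strip the prototype,
// turning class instances into POJOs (works for arrays too).
function toPlain<T>(rows: T): T {
  return JSON.parse(JSON.stringify(rows));
}

const rows = [new RowDataPacket(1, 'Test')];
const plain = toPlain(rows);
```

Before conversion the row's prototype is RowDataPacket.prototype; after conversion it is Object.prototype, so Next.js will serialize it without complaint.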

Wanting to chain web requests and pass data down through them in Twilio Studio

So I'm playing with Twilio Studio, and building a sample IVR. I have it doing a web request to an API that looks up the customer based on their phone number. That works, I can get/say their name to them.
I'm having trouble with the next step, I want to do another http request and pass the 'customer_id' that I get in webrequest1 to webrequest2, but it almost looks like all the web requests fire right when the call starts instead of in order/serialized.
It looks sorta like this;
call comes in, make http request to lookup customer (i get their customer_id and name)
split on content, if customer name is present, (it is, it goes down this decision path)
do another http request to "get_open_invoice_count", this request needs the customer_id though and not their phone number.
From looking at the logs it's always got a blank value there, even though in the "Say" step just above I can say their customer_id and name.
I can almost imagine someone is going to say I should go use a function, but for some reason I can't get a simple function to do a (got) get request.
I've tried to copy/paste this into a function and I kind of think this example is incomplete: https://support.twilio.com/hc/en-us/articles/115007737928-Getting-Started-with-Twilio-Functions-Beta-
var got = require('got');

got('https://swapi.co/api/people/?search=r2', {json: true})
  .then(function(response) {
    console.log(response)
    twiml.message(response.body.results[0].url)
    callback(null, twiml);
  })
  .catch(function(error) {
    callback(error)
  })
If this is the right way to do it, I'd love to see one of these ^ examples that returns json that can be used in the rest of the flow. Am I missing something about the execution model? I'm hoping it executes step by step as people flow through the studio, but I'm wondering if it executes the whole thing at boot?
Maybe another way to ask this question is: if I know who you are, I send you down this path, look up some account details, say them to you, and give you different choices than if you are a stranger.
How do you do this?
You're right -- that code excerpt from the docs is just a portion that demonstrates how you might use the got package.
That same usage in context of the complete Twilio Serverless Function could look something like this:
exports.handler = function(context, event, callback) {
  var twiml = new Twilio.twiml.MessagingResponse();
  var got = require('got');

  got('https://example.com/api/people/?search=r2', { json: true })
    .then(function(response) {
      console.log(response);
      twiml.message(response.body.results[0].url);
      callback(null, twiml);
    })
    .catch(function(error) {
      callback(error);
    });
};
However, another part of the issue here is that the advice in this documentation is perfectly reasonable for Functions when building an app on the Twilio Runtime, but there are a couple of unsaid caveats when invoking these functions from a Studio Flow context. Here's some relevant docs about that: https://support.twilio.com/hc/en-us/articles/360019580493-Using-Twilio-Functions-to-Enhance-Studio-Voice-Calls-with-Custom-TwiML
This function would be acceptable if you were calling it directly from an inbound number, but when you use the Function widget within a Studio flow to return TwiML, Studio releases control of the call.
If you want to call external logic that returns TwiML from a flow, and want to return to that flow later, you need to use the TwiML Redirect widget (see "Returning control to Studio" for details).
However, you don't have to return TwiML to Studio when calling external logic! It sounds like you want to make an external call to get some information, and then have your Flow direct the call down one path or another, based on that information. When using a Runtime Function, just have the function return an object instead of twiml, and then you can access that object's properties within your flow as liquid variables, like {{widgets.MY_WIDGET_NAME.parsed.PROPERTY_NAME}}. See the docs for the Run Function widget for more info. You would then use a "Split Based On..." widget following the function in your flow to direct the call down the desired branch.
The one other thing to mention here is the Make HTTP Request widget. If your Runtime Function is just wrapping a call to another web service, you might be able to get away with just using the widget to call that service directly. This works best when the service is under your control, since then you can ensure that the returned data is in a format that is usable to the widget.
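The "return an object instead of TwiML" approach can be sketched like this. Twilio's runtime actually invokes exports.handler; it is shown here as a plain function, and the lookup plus the property names (customer_id, name) are assumptions, not from a real API.

```typescript
// Sketch of a Runtime Function that returns a plain object, not TwiML,
// so a Studio "Run Function" widget keeps control of the call.
function handler(
  context: unknown,
  event: { phone?: string },
  callback: (err: Error | null, result?: object) => void
): void {
  // In a real function you'd call your customer API here, using e.g.
  // event.phone passed in from the flow.
  const customer = { customer_id: 'C123', name: 'Ada' };

  // Returning a plain object lets the flow read properties via liquid
  // variables like {{widgets.<widget_name>.parsed.customer_id}}.
  callback(null, customer);
}
```

A "Split Based On..." widget following the function can then branch on the parsed customer_id, keeping the rest of the IVR inside Studio.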

Does AngularFire2's database.list hold a reference or actually grab data?

I'm following along the with the basic AngularFire2 docs, and the general format seems to be:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
My confusion is that in the source code, it seems as though calling database.list() grabs all the data at the listed url (line 114 here)
Can anyone help clarify how that works? If it does indeed grab all the data, is there a better way of getting a reference without doing that? Or should I just reference each particular URL individually?
Thanks!
When you create an AngularFire2 list, it holds an internal Firebase ref - accessible via the list's public $ref property.
The list is an Observable - which serves as the interface for reading from the database - and includes some additional methods for writing to the database: push, update and remove.
In the code in your question, you are only calling the update method and are not subscribing to the observable, so no data is loaded from the database into memory:
const items = af.database.list('/items');
// to get a key, check the Example app below
items.update('key-of-some-data', { size: newSize });
It's only when a subscription to the observable is made that listeners for value and the child_... events are added to the ref and the list builds and maintains an internal array that's emitted via the observable. So if you are only calling the methods that write to the database, it won't be loading any data.
The AngularFire2 object is implemented in a similar manner.
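A toy model (not the real AngularFire2 API) of the lazy behaviour described above: the object holds a reference and exposes write methods, but it only attaches data listeners once something subscribes.

```typescript
// Toy model of a write-capable, lazily-subscribed list (illustrative only).
class LazyList<T> {
  listenersAttached = false;            // mirrors the Firebase ref listeners
  private store = new Map<string, T>(); // stand-in for the remote database

  update(key: string, value: T): void {
    // Writes need no listeners: nothing is loaded into memory for reading.
    this.store.set(key, value);
  }

  subscribe(next: (items: T[]) => void): void {
    // Only on subscription are listeners attached and the list emitted.
    this.listenersAttached = true;
    next([...this.store.values()]);
  }
}
```

Calling update alone leaves listenersAttached false, matching the point that write-only usage loads no data from the database.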

Meteor - check() VS new SimpleSchema() for verifying .publish() arguments

To ensure the type of the arguments my publications receive, should I use SimpleSchema or check()?
Meteor.publish('todos.inList', function(listId, limit) {
  new SimpleSchema({
    listId: { type: String },
    limit: { type: Number }
  }).validate({ listId, limit });
  [...]
});
or
Meteor.publish('todos.inList', function(listId, limit) {
  check(listId, String);
  check(limit, Number);
  [...]
});
check() allows you to check the data type, which is one thing, but is somewhat limited.
SimpleSchema is much more powerful: it checks all keys in a document (instead of one at a time) and lets you define not only types but also allowed values, as well as default (or dynamic) values when a key is not present.
You should use SimpleSchema this way:
mySchema = new SimpleSchema({ <your schema here> });
var MyCollection = new Mongo.Collection("my_collection");
MyCollection.attachSchema(mySchema);
That way, you don't even need to check the schema in methods: it will be done automatically.
Of course, it's always good practice to use
mySchema.validate(document);
to validate a client-generated document before inserting it into your collection, but if you don't, and your document doesn't match the schema (extra keys, wrong types, etc.), SimpleSchema will reject the parts that don't belong.
To check arguments to a publish() function or a Meteor.method(), use check(). You define a SimpleSchema to validate inserts, upserts, and updates to collections. A publication is none of those: it's read-only. Your inline use of SimpleSchema with .validate() would actually work, but it's a pretty unusual pattern and a bit of overkill.
You might find this helpful.
check is a lightweight package for argument checking and general pattern matching, whereas SimpleSchema is a much larger package that includes checking among its features. It is just that one package was made before the other.
Both work the same way for this purpose, and you can use check in Meteor.methods as well. The decision is all yours.
Michel Floyd's answer made me realize that check() actually sends Meteor.Error(400, "Match Failed") to the client, while SimpleSchema within methods sends a detailed ValidationError one can act upon, for instance to display appropriate error messages on a form.
So, to answer the question of whether we should use check() or SimpleSchema() to validate our argument types in Meteor, I believe the answer is:
Use SimpleSchema if you need a detailed report of the error on the client; otherwise check() is the way to go, so as not to send back critical info.
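The error-detail difference can be illustrated with a toy contrast (these are not Meteor's actual implementations, and type names are simplified to string literals): a check()-style guard fails with a generic "Match failed", while a schema-style validator can report which key is wrong and why.

```typescript
// Toy check()-style guard: rejects with no detail (illustrative only).
function checkType(value: unknown, type: 'string' | 'number'): void {
  if (typeof value !== type) throw new Error('Match failed');
}

// Toy schema-style validator: names the offending key and expected type.
function validateAgainstSchema(
  doc: Record<string, unknown>,
  schema: Record<string, 'string' | 'number'>
): void {
  for (const [key, expected] of Object.entries(schema)) {
    if (typeof doc[key] !== expected) {
      throw new Error(`Validation error: "${key}" must be of type ${expected}`);
    }
  }
}
```

The first error tells the client nothing beyond "something didn't match"; the second carries enough detail to drive a form error message.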
