Whenever I call a function that has (enforce-guard some-guard) in it from X-Wallet or Zelcore, it always fails with the error Keyset failure (keys-all).
I have no issues when doing this from Chainweaver.
How can I fix this?
This is an issue when you are also providing capabilities with your request.
To fix it, you will need to put enforce-guard inside a capability too, so you will need to do something like:
(defcap VERIFY_GUARD (some-guard:guard)
(enforce-guard some-guard)
)
And wherever you would call enforce-guard, you will then need to do:
(with-capability (VERIFY_GUARD some-guard)
; Guarded code here
)
Why does this happen?
Chainweaver allows you to select unrestricted signing keys, which provides a key/guard for enforce-guard to work with.
However, X-Wallet and Zelcore don't provide this if capabilities are present on the request (otherwise they do).
It is probably better practice to put enforce-guard inside capabilities anyway, and to use require-capability in places where you expect the guard to have passed, along the lines of the sketch below.
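For example, a minimal sketch (the function names and wiring here are illustrative, not from the question):

(defun guarded-action (some-guard:guard)
  ; Acquiring the capability runs enforce-guard in its body.
  (with-capability (VERIFY_GUARD some-guard)
    (protected-op some-guard)))

(defun protected-op (some-guard:guard)
  ; Fails unless VERIFY_GUARD was granted earlier in the call stack.
  (require-capability (VERIFY_GUARD some-guard)))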
I tried to query the Realtime Database using equalTo():
database.getReference(verifiedProductsDb.dbPartVerifiedProducts).orderByChild(verifiedProductsDb.barcode).equalTo(b.toLong()).get().addOnCompleteListener {
but Android Studio reports:
None of the following functions can be called with the argument supplied.
equalTo(Boolean) defined in com.google.firebase.database.Query
equalTo(Double) defined in com.google.firebase.database.Query
equalTo(String?) defined in com.google.firebase.database.Query
This is despite the fact that, using setValue, Long values are written to the same database quite successfully and without problems.
The Realtime Database API on Android only supports Double number types in queries. The underlying wire protocol and database will interpret the long numbers correctly though, so you should be able to just do:
database.getReference("VerifiedProducts")
.orderByChild("barcode")
.equalTo(b.toLong().toDouble()) // 👈
.get().addOnCompleteListener {
...
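For completeness, a minimal sketch of the whole call with the conversion applied (database and b are from the question; the result handling below is assumed, not part of the original answer):

database.getReference("VerifiedProducts")
    .orderByChild("barcode")
    .equalTo(b.toLong().toDouble())
    .get()
    .addOnCompleteListener { task ->
        if (task.isSuccessful) {
            // Matching children still come back with their Long values intact.
            task.result?.children?.forEach { child ->
                val barcode = child.child("barcode").getValue(Long::class.java)
            }
        }
    }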
I have written several tests in Postman, based on the example code snippets given in the Postman GUI on Windows desktop.
Mainly, I want to check for the existence of parameters in the response (and for exact values in those cases where I need to check for particular values), and I want to know if there's a better way to do it than the way I'm doing it now.
The following test shows one such example, and this is just a small part of it. The actual response schema is a lot larger, so I envisioned writing 50-60 lines of such checks per API endpoint.
pm.test("Det details of a POI", function () {
pm.expect(jsonData.code).to.eql(0);
pm.expect(jsonData.data[0].provider).to.eql("google");
pm.expect(jsonData.data[0]).to.have.property("id");
pm.expect(jsonData.data[0].location).to.have.property("position");
pm.expect(jsonData.data[0].location.address).to.have.property("text");
pm.expect(jsonData.data[0].location.address).to.have.property("house");
pm.expect(jsonData.data[0].location.address).to.have.property("street");
pm.expect(jsonData.data[0].location.address).to.have.property("postalCode");
pm.expect(jsonData.data[0].location.address).to.have.property("city");
pm.expect(jsonData.data[0].location.address).to.have.property("county");
pm.expect(jsonData.data[0].location.address).to.have.property("state");
pm.expect(jsonData.data[0].location.address.country).to.eql("United Kingdom");
pm.expect(jsonData.data[0].location.address).to.have.property("countryCode");
pm.expect(jsonData.data[0].contacts).to.have.property("phone");
pm.expect(jsonData.data[0].contacts.website.value).to.include("www.google.com");
pm.expect(jsonData.data[0].contacts.website).to.have.property("label");
pm.expect(jsonData.data[0].categories[0]).to.have.property("id");
pm.expect(jsonData.data[0].categories[0]).to.have.property("title");
pm.expect(jsonData.data[0].categories[0]).to.have.property("type");
pm.expect(jsonData.data[0].categories[0]).to.have.property("system");
});
Any tips and improvements would be greatly appreciated.
You're basically asking the same as these two Stack Overflow posts:
Schema validation using Postman
How to validate response in Postman?
Answer: There is JSON schema validation built into Postman: it uses the Tiny Validator (tv4) project to allow schema validation in post-request test scripts. See Postman's documentation (1, 2) for examples of how to use it.
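For instance, a minimal sketch using the tv4 global available in Postman's script sandbox (the schema fragment is a guess at part of the response shape from the question):

var schema = {
    type: "object",
    required: ["code", "data"],
    properties: {
        code: { type: "integer" },
        data: {
            type: "array",
            items: {
                type: "object",
                required: ["id", "provider", "location", "contacts", "categories"]
            }
        }
    }
};

pm.test("Response matches schema", function () {
    var jsonData = pm.response.json();
    pm.expect(tv4.validate(jsonData, schema)).to.be.true;
});

A single schema then replaces dozens of individual property checks.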
I've deployed my Bot to Webchat, Skype and MS Teams.
In the OnTurnAsync method I check whether the user input begins with bnr; if so, I call a specific method.
if (turnContext.Activity.Text.ToLower().StartsWith("bnr"))
{
string msg = RequestHandler.BnrCaller(turnContext.Activity.Text);
await turnContext.SendActivityAsync(msg);
return;
}
It works fine with Skype and Webchat, but with Teams it does not work 100% of the time; it only works sometimes.
[Edit]
I found out that it does not work if I copy the input into the input field, but when I type it, it works fine!
Messages coming from Teams have a different structure.
The message text begins with a bot mention of the form <at>...</at>.
You need to strip this prefix, e.g. in a middleware component, as sketched below.
To inspect your incoming messages, open ngrok's inspection UI at localhost:4040 in your web browser.
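A minimal sketch for the v4 SDK (assuming C#, as in the question; RemoveRecipientMention is the Microsoft.Bot.Schema extension method):

using Microsoft.Bot.Builder;
using Microsoft.Bot.Schema;

// Inside OnTurnAsync, before the "bnr" prefix check:
// RemoveRecipientMention strips the "<at>...</at>" bot mention from Activity.Text.
turnContext.Activity.RemoveRecipientMention();
var text = turnContext.Activity.Text?.Trim() ?? string.Empty;
if (text.ToLower().StartsWith("bnr"))
{
    // ... as in the question
}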
So, I found the issue with the help of Application Insights.
I added a middleware which logs the request body to Application Insights, and realized that when I copy/paste a message like "Hello", it is logged as something like "\r\n\n\rHello\r\n\n\r\n". When I type it, it is logged just fine.
So I just remove these symbols from the request text and it works!
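Roughly, the cleanup looks like this (a sketch; the exact characters to strip are whatever your logging shows):

using System.Text.RegularExpressions;

// Collapse the stray "\r"/"\n" runs that Teams adds to pasted input.
turnContext.Activity.Text =
    Regex.Replace(turnContext.Activity.Text ?? string.Empty, @"[\r\n]+", " ").Trim();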
I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Script routines for creating a deep copy of a document, but the deep-copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
var smallPrintId = "some id";
var largePrintId = "some other id";
// You must first enable the Drive "Advanced Service" before this will work.
// Get the file metadata of the to-be-updated file.
var lpFile = Drive.Files.get(largePrintId);
// View available options on the relevant Drive REST API pages.
var options = {
updateViewedDate: false,
};
// Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
// client library's implementation: https://issuetracker.google.com/issues/36765129
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
// Replace the contents of the large print version with that of the small print version.
Drive.Files.update(lpFile, lpFile.id, blob, options);
}
// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
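// Export the file by calling the Drive v2 REST endpoint directly,
// authorized with the script's own OAuth token.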
var url = "https://www.googleapis.com/drive/v2/files/"+id+"/export?mimeType="+ mimeType;
var resp = UrlFetchApp.fetch(url, {
headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken()}
});
return resp.getBlob();
}