Is there a way of saving a Google Doc so it has the same unique ID as an existing doc?

I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Script routines for creating a deep copy of a document, but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?

I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple of methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimeType): Drive.Files.update(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitations of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
  var smallPrintId = "some id";
  var largePrintId = "some other id";
  // You must first enable the Drive "Advanced Service" before this will work.
  // Get the file metadata of the to-be-updated file.
  var lpFile = Drive.Files.get(largePrintId);
  // View available options on the relevant Drive REST API pages.
  var options = {
    updateViewedDate: false,
  };
  // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
  // client library's implementation: https://issuetracker.google.com/issues/36765129
  var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
  // Replace the contents of the large print version with that of the small print version.
  Drive.Files.update(lpFile, lpFile.id, blob, options);
}
// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
  var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
  var resp = UrlFetchApp.fetch(url, {
    headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
  });
  return resp.getBlob();
}
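Since the goal is to automate producing the large print version, here is a minimal sketch of wiring this up to run unattended (the daily schedule is an assumption - adjust to taste):
// Sketch: install a time-driven trigger so the large print copy is refreshed
// automatically. Run this once from the script editor.
function installNightlyTrigger() {
  ScriptApp.newTrigger("copyContentFromAtoB")
    .timeBased()
    .everyDays(1)
    .create();
}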

Related

How to store keywords in firebase firestore

My application uses keywords extensively; everything is tagged with keywords, so whenever the user wants to search or add data, I have to show keywords in an autocomplete box.
As of now I am storing keywords in a separate collection, as below:
export interface IKeyword {
  Id: string;
  Name: string;
  CreatedBy: IUserMin;
  CreatedOn: firestore.Timestamp;
}
export interface IUserMin {
  UserId: string;
  DisplayName: string;
}
export interface IKeywordMin {
  Id: string;
  Name: string;
}
My main document holds an array of keywords:
export interface MainDocument {
  Field1: string;
  Field2: string;
  // ... other fields ...
  Keywords: IKeywordMin[];
}
The problem is that the autocomplete reads data frequently, so my document-read quota increases very fast.
Is there a way to implement this without increasing reads for keywords? The keywords are not the real data we need to get.
Below is my query to get main documents
query = query.where("Keywords", "array-contains-any", keywords)
I use below query to get keywords in auto complete text box
query = query.orderBy("Name").startAt(searchTerm).endAt(searchTerm+ '\uf8ff').limit(20)
This query runs many times as the user types in the autocomplete box, which causes many document reads.
Does this answer your question?
https://fireship.io/lessons/typeahead-autocomplete-with-firestore/
Though the recommended solution is to use a third-party tool:
https://firebase.google.com/docs/firestore/solutions/search
To reduce document reads:
A solution that comes to mind - though I'm not sure it suits your use case - is Firestore's caching feature. By default, the Firestore client always tries to reach the server to get the latest changes to your documents, and only falls back to the cached data on the client device if it cannot reach the server. You can take advantage of this feature by querying the cache first and reaching the server only when you want. For web applications the feature is disabled by default; you can enable it as described here:
https://firebase.google.com/docs/firestore/manage-data/enable-offline
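For illustration, a minimal sketch of a cache-first typeahead read, assuming the namespaced (v8-style) web SDK and a keywords collection named as in your query:
// Opt in to persistence once at startup; it is disabled by default on web.
firebase.firestore().enablePersistence()
  .catch(function (err) { console.warn("persistence unavailable: " + err.code); });

function typeahead(searchTerm) {
  var q = firebase.firestore().collection("keywords")
    .orderBy("Name").startAt(searchTerm).endAt(searchTerm + "\uf8ff").limit(20);
  // Try the local cache first; a hit costs no billed document reads.
  return q.get({ source: "cache" }).then(function (snap) {
    return snap.empty ? q.get({ source: "server" }) : snap;
  });
}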
I found a solution, thought I would share here
Create a new collection named typeaheads in the format below:
export interface ITypeAHead {
  Prefix: string;
  CollectionName: string;
  FieldName: string;
  MatchingValues: ILookupItem[];
}
export interface ILookupItem {
  Key: string;
  Value: string;
}
Depending on the minimum number of letters, add either the first 2 or 3 letters of each value to Prefix, and search based on the prefix, collection, and field. That way you will most probably end up with 2 or 3 document reads per search, as sketched below.
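For illustration, a hypothetical lookup against that structure (the field names follow the interfaces above; everything else is an assumption):
// Sketch: fetch the one precomputed typeahead document for the typed prefix,
// then narrow its stored values client-side - at most one billed read.
function lookupTypeahead(collectionName, fieldName, term) {
  var prefix = term.substring(0, 3).toLowerCase();
  return firebase.firestore().collection("typeaheads")
    .where("CollectionName", "==", collectionName)
    .where("FieldName", "==", fieldName)
    .where("Prefix", "==", prefix)
    .limit(1)
    .get()
    .then(function (snap) {
      if (snap.empty) return [];
      return snap.docs[0].data().MatchingValues.filter(function (item) {
        return item.Value.toLowerCase().indexOf(term.toLowerCase()) === 0;
      });
    });
}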
Hope this helps someone else.

Reading JS library from CDN within Mirth

I'm doing some testing around Mirth Connect. I have a test channel in which the data types are Raw for the source and the single destination. The destination is not doing anything right now. In the source, the connector type is JavaScript Reader, and the code is doing the following...
var url = new java.net.URL('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/lodash.fp.min.js');
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
  var body = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
  logger.debug('CONTENT: ' + body);
  globalMap.put('_', body);
}
conn.disconnect();
// This code is in source but also tested in destination
logger.debug('FROM GLOBAL: ' + $('_')); // library was found
var arr = [1, 2, 3, 4];
var _ = $('_');
var newArr = _.chunk(arr, 2);
The error I'm getting is: TypeError: Cannot find function chunk in object.
The reason I want to do this is to build custom/internal libraries with unit tests and serve them from an internal/company CDN, allowing Mirth to consume them.
How can I make the library available to Mirth?
Rhino actually has CommonJS support, but Mirth doesn't have it enabled by default. Here's how you can use it in your channel.
channel deploy script
with (JavaImporter(
  org.mozilla.javascript.Context,
  org.mozilla.javascript.commonjs.module.Require,
  org.mozilla.javascript.commonjs.module.provider.SoftCachingModuleScriptProvider,
  org.mozilla.javascript.commonjs.module.provider.UrlModuleSourceProvider,
  java.net.URI
)) {
  var require = new Require(
    Context.getCurrentContext(),
    this,
    new SoftCachingModuleScriptProvider(new UrlModuleSourceProvider([
      // Search path. You can add multiple URIs to this array.
      new URI('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/')
    ], null)),
    null,
    null,
    true
  );
} // end JavaImporter
var _ = require('lodash.min');
require('lodash.fp.min')(_); // convert lodash to fp
$gc('_', _);
Note: There's something funky with the cdnjs lodash fp packages, which don't detect the environment correctly and force that weird two-stage import. If you use https://cdn.jsdelivr.net/npm/lodash@4.17.15/ instead, you only need to do var _ = require('fp'); and it loads everything in one step.
transformer
var _ = $gc('_');
logger.info(JSON.stringify(_.chunk(2)([1,2,3,4])));
Note: This is the correct way to use fp/chunk. In your OP you were calling with the standard chunk syntax.
Additional Commentary
I think it's probably ok to do it this way where you download the library once at deploy time and store it in the globalChannelMap, then retrieve it from the map where needed. It would probably also work to store the require object itself in the map if you wanted to call it elsewhere. It will cache and reuse the object created for future calls to the same resource.
I would not create new Require objects anywhere but the deploy script, or you will be redownloading the resource on every message (or every poll in the case of a JavaScript Reader).
Edit: I guess for an internal webhost, this could be desirable in a JavaScript Reader if you intend for it to pick up changes immediately on the next poll without a redeploy, assuming you would be upgrading the library in place instead of incrementing a version.
The benefit to using Code Templates, as Vibin suggested is that they get compiled directly into your channel at deploy time and there is no additional fetching step at runtime. Making the library available is as simple as assigning it to your channel.
Even though importing third-party libraries could be an option, I was actually looking into this so our team could write our own custom functions, write unit tests for them, and lastly be able to pull that code inside Mirth. I was experimenting with lodash, but using it was not my end goal. My solution was to do a REST GET call with Java in the global script; the code is the same as in my original question, but the URL is the raw GitHub URL for the function I want to pull in.
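For concreteness, a sketch of that approach in the global deploy script (the URL and library name are hypothetical; the fetch code mirrors the original question):
// Sketch: pull an internal library from a raw GitHub URL at deploy time,
// evaluate it, and stash the resulting object for use elsewhere.
var url = new java.net.URL('https://raw.githubusercontent.com/yourorg/yourrepo/main/mylib.js'); // hypothetical
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
  var src = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
  eval(src);                      // assumes mylib.js defines a top-level var, e.g. myLib
  globalMap.put('myLib', myLib);  // retrieve later with $('myLib')
}
conn.disconnect();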

How to do pattern searching in fire base real time DB [duplicate]

I am using firebase for data storage. The data structure is like this:
products: {
  product1: {
    name: "chocolate"
  },
  product2: {
    name: "chochocho"
  }
}
I want to perform an autocomplete operation on this data. Normally I would write the query like this:
"select name from PRODUCTS where productname LIKE '%" + keyword + "%'";
So, for my situation, if the user types "cho", I need to bring back both "chocolate" and "chochocho" as results. I thought about bringing all the data under the "products" block and then doing the query on the client, but that may need a lot of memory for a big database. So, how can I perform the SQL LIKE operation?
Thanks
Update: With the release of Cloud Functions for Firebase, there's another elegant way to do this as well by linking Firebase to Algolia via Functions. The tradeoff here is that the Functions/Algolia is pretty much zero maintenance, but probably at increased cost over roll-your-own in Node.
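For context, a minimal sketch of that Functions-to-Algolia sync (the credentials and index name are placeholders; assumes the algoliasearch Node client):
// Sketch: mirror Realtime Database writes into an Algolia index.
const functions = require('firebase-functions');
const algoliasearch = require('algoliasearch');

const client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY'); // placeholders
const index = client.initIndex('products');

exports.indexProduct = functions.database.ref('/products/{productId}')
  .onWrite((change, context) => {
    const objectID = context.params.productId;
    if (!change.after.exists()) {
      return index.deleteObject(objectID);  // product removed: drop it from the index
    }
    return index.saveObject(Object.assign({ objectID }, change.after.val()));
  });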
There are no content searches in Firebase at present. Many of the more common search scenarios, such as searching by attribute, will be baked into Firebase as the API continues to expand.
In the meantime, it's certainly possible to grow your own. However, searching is a vast topic (think creating a real-time data store vast), greatly underestimated, and a critical feature of your application--not one you want to ad hoc or even depend on someone like Firebase to provide on your behalf. So it's typically simpler to employ a scalable third party tool to handle indexing, searching, tag/pattern matching, fuzzy logic, weighted rankings, et al.
The Firebase blog features a blog post on indexing with ElasticSearch which outlines a straightforward approach to integrating a quick, but extremely powerful, search engine into your Firebase backend.
Essentially, it's done in two steps. Monitor the data and index it:
var Firebase = require('firebase');
var ElasticClient = require('elasticsearchclient');

// initialize our ElasticSearch API
var client = new ElasticClient({ host: 'localhost', port: 9200 });

// listen for changes to Firebase data
var fb = new Firebase('<INSTANCE>.firebaseio.com/widgets');
fb.on('child_added', createOrUpdateIndex);
fb.on('child_changed', createOrUpdateIndex);
fb.on('child_removed', removeIndex);

function createOrUpdateIndex(snap) {
  client.index(this.index, this.type, snap.val(), snap.name())
    .on('data', function (data) { console.log('indexed ', snap.name()); })
    .on('error', function (err) { /* handle errors */ });
}

function removeIndex(snap) {
  client.deleteDocument(this.index, this.type, snap.name(), function (error, data) {
    if (error) console.error('failed to delete', snap.name(), error);
    else console.log('deleted', snap.name());
  });
}
Query the index when you want to do a search:
<script src="elastic.min.js"></script>
<script src="elastic-jquery-client.min.js"></script>
<script>
  ejs.client = ejs.jQueryClient('http://localhost:9200');
  ejs.client.search({
    index: 'firebase',
    type: 'widget',
    body: ejs.Request().query(ejs.MatchQuery('title', 'foo'))
  }, function (error, response) {
    // handle response
  });
</script>
There's an example, and a third party lib to simplify integration, here.
I believe you can do:
admin.database()
  .ref('/vals')
  .orderByChild('name')
  .startAt('cho')
  .endAt('cho\uf8ff')
  .once('value')
  .then(c => res.send(c.val()));
This will find vals whose names start with "cho".
source
The ElasticSearch solution basically binds to add, set, and del, and offers a get by which you can accomplish text searches.
It then saves the contents in MongoDB.
While I love and recommend ElasticSearch for the maturity of the project, the same can be done without another server, using only the Firebase database.
That's what I mean:
(https://github.com/metaschema/oxyzen)
for the indexing part, basically the function:
- JSON-stringifies a document
- removes all the property names and JSON syntax to leave only the data (regex)
- removes all XML tags (therefore also HTML) and attributes (remember the old guidance, "data should not be in XML attributes") to leave only the pure text, if XML or HTML was present
- removes all special chars and substitutes them with a space (regex)
- substitutes all instances of multiple spaces with one space (regex)
- splits on spaces and cycles: for each word, it adds refs to the document in some index structure in your DB that basically contains children named with words, with children named with an escaped version of "ref/inthedatabase/dockey"
- then inserts the document as a normal Firebase application would do
In the oxyzen implementation, subsequent updates of the document actually read the index and update it, removing the words that don't match anymore and adding the new ones.
Subsequent searches for words can directly find documents in the words child. Multiple-word searches are implemented using hits.
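For illustration, a minimal sketch of that kind of inverted index in the Realtime Database (the path names are illustrative, and the tokenizing follows the steps described above):
// Sketch: store, for each word, a child under /index/<word>/ pointing back
// at the document key, alongside the document itself.
function indexDocument(db, dockey, text) {
  var words = text
    .replace(/[^\w\s]/g, ' ')   // strip special chars
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim().toLowerCase().split(' ');
  var updates = {};
  words.forEach(function (w) {
    updates['index/' + w + '/' + dockey] = true;
  });
  updates['documents/' + dockey] = { text: text };
  return db.ref().update(updates);  // one atomic multi-location write
}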
SQL"LIKE" operation on firebase is possible
let node = await db.ref('yourPath').orderByChild('yourKey').startAt('!').endAt('SUBSTRING\uf8ff').once('value');
This query work for me, it look like the below statement in MySQL
select * from StoreAds where University Like %ps%;
query = database.getReference().child("StoreAds").orderByChild("University").startAt("ps").endAt("\uf8ff");

Evernote IOS SDK fetchResourceByHashWith throws exception

Working with the Evernote iOS SDK 3.0.
I would like to retrieve a specific resource from a note using
fetchResourceByHashWith
This is how I am using it. Just for this example, to be 100% sure the hash is correct, I first download the note with its single resource using fetchNote, and then request that resource by its unique hash using fetchResourceByHashWith (the hash looks correct when I print it):
ENSession.shared.primaryNoteStore()?.fetchNote(withGuid: guid, includingContent: true, resourceOptions: ENResourceFetchOption.includeData, completion: { note, error in
    if error != nil {
        print(error)
        seal.reject(error!)
    } else {
        let hash = note?.resources[0].data.bodyHash
        ENSession.shared.primaryNoteStore()?.fetchResourceByHashWith(guid: guid, contentHash: hash, options: ENResourceFetchOption.includeData, completion: { res, error in
            if error != nil {
                print(error)
                seal.reject(error!)
            } else {
                print("works")
                seal.fulfill(res!)
            }
        })
    }
})
The call to fetchResourceByHashWith fails with:
Optional(Error Domain=ENErrorDomain Code=0 "Unknown error" UserInfo={EDAMErrorCode=0, NSLocalizedDescription=Unknown error})
The equivalent setup works with the Android SDK.
Everything else works so far in the iOS SDK (chunkSync, auth, getting notebooks, etc.), so this is not an issue with auth tokens.
It would be great to know whether this is an SDK bug or I am still doing something wrong.
Thanks
This is a bug in the SDK's "EDAM" Thrift client stub code. First the analysis and then your workarounds.
Evernote's underlying API transport uses a Thrift protocol with a documented schema. The SDK framework includes a layer of autogenerated stub code that is supposed to marshal input and output params correctly for each request and response. You are invoking the underlying getResourceByHash API method on the note store, which is defined per the docs to accept a string type for the contentHash argument. But it turns out the client is sending the hash value as a purely binary field.
The service is failing to parse the request, so you're seeing a generic error on the client. This could reflect evolution in the API definition, but more likely this has always been broken in the iOS SDK (getResourceByHash probably doesn't see a lot of usage). If you dig into the more recent Python version of the SDK, or indeed also the Java/Android version, you can see a different pattern for this method: it says it's going to write a string-type field, and then actually emits a binary one. Weirdly, this works. And if you hack up the iOS SDK to do the same thing, it will work, too.
Workarounds:
The best advice is to report the bug and just avoid this method on the note store. You can get resource data in different ways. First of all, you actually got all the data you needed in the response to your fetchNote call, i.e. let resourceData = note?.resources[0].data.body and you're good! You can also pull individual resources by their own GUID (not their hash), using fetchResource (use note?.resources[0].guid as the param). Of course, you may really want to use the access-by-hash pattern. In that case...
You can hack in the correct protocol behavior. In the SDK files, which you'll need to build as part of your project, find the ObjC file called ENTProtocol.m and locate the method +sendMessage:toProtocol:withArguments:.
It has one line like this:
[outProtocol writeFieldBeginWithName:field.name type:field.type fieldID:field.index];
Replace that line with:
[outProtocol writeFieldBeginWithName:field.name type:(field.type == TType_BINARY ? TType_STRING : field.type) fieldID:field.index];
Rebuild the project and you should find that your code snippet works as expected. This is a massive hack, however; although I don't think any other note store methods will be impacted adversely by it, it's possible that other internal user store or other calls will suddenly start acting funny. Also, you'd have to maintain the hack through updates. It's probably better to report the bug and not use the method until Evernote publishes a proper fix.

Why am I seeing "ReferenceError: Calendar is not defined" when I run a Google Spreadsheet API script?

I made a working bound script in a clone of my original Google Spreadsheet, put together over the course of several days.
I then copied my formula to a new spreadsheet.
Suddenly, even though I don't think I made any changes to any relevant code, I'm seeing "ReferenceError: Calendar is not defined".
I have a lot of code so it would be difficult to paste it all, but the relevant section is here:
var start = (passed as argument to function);
var end = (passed as argument to function);
var calendarId = 'redacted'; // source calendar
var optionalArgs = {
  timeMin: start.toISOString(),
  timeMax: end.toISOString(),
  showDeleted: false,
  singleEvents: true,
  maxResults: null,
  orderBy: 'startTime'
};
var response = Calendar.Events.list(calendarId, optionalArgs);
This took me way too long to get through, so dear person of the future, if I've saved you a few minutes, my struggles will be worthwhile.
The Calendar API has to be turned on for Calendar.Events.list to work.
In your script's menu, select "Resources", then "Advanced Google Services." You will see a list of APIs with switches.
To use an API in your script, you must turn it on both here and in the Google Developers Console (just follow the link at the bottom of the dialog window).
Also, I had sort of assumed that in the Google Developers Console, once an API was turned on, it was turned on for you as a user, but there is a unique console for each project, which is what you're taken to when you follow the link. You have to turn it on again for each additional project.
