Reading JS library from CDN within Mirth

I'm doing some testing around Mirth Connect. I have a test channel where the data types are Raw for both the source and the single destination. The destination is not doing anything right now. In the source, the connector type is JavaScript Reader, and the code does the following...
var url = new java.net.URL('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/lodash.fp.min.js');
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
    var body = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
    logger.debug('CONTENT: ' + body);
    globalMap.put('_', body);
}
conn.disconnect();
// This code is in source but also tested in destination
logger.debug('FROM GLOBAL: ' + $('_')); // library was found
var arr = [1, 2, 3, 4];
var _ = $('_');
var newArr = _.chunk(arr, 2);
The error I'm getting is: TypeError: Cannot find function chunk in object.
The reason I want to do this is to build custom/internal libraries with unit tests, serve them from an internal/company CDN, and allow Mirth to consume them.
How can I make the library available to Mirth?

Rhino actually has CommonJS support, but Mirth doesn't have it enabled by default. Here's how you can use it in your channel.
channel deploy script
with (JavaImporter(
    org.mozilla.javascript.Context,
    org.mozilla.javascript.commonjs.module.Require,
    org.mozilla.javascript.commonjs.module.provider.SoftCachingModuleScriptProvider,
    org.mozilla.javascript.commonjs.module.provider.UrlModuleSourceProvider,
    java.net.URI
)) {
    var require = new Require(
        Context.getCurrentContext(),
        this,
        new SoftCachingModuleScriptProvider(new UrlModuleSourceProvider([
            // Search path. You can add multiple URIs to this array
            new URI('https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.15/')
        ], null)),
        null, // preExec script
        null, // postExec script
        true  // sandboxed
    );
} // end JavaImporter

var _ = require('lodash.min');
require('lodash.fp.min')(_); // convert lodash to fp
$gc('_', _);
Note: There's something funky with the cdnjs lodash fp packages that don't detect the environment correctly and force that weird two-stage import. If you use https://cdn.jsdelivr.net/npm/lodash@4.17.15/ instead, you only need to do var _ = require('fp'); and it loads everything in one step.
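For illustration, a sketch of what changes with that jsDelivr path (only the search-path URI and the require call differ from the deploy script above):

// Search path entry using jsDelivr instead of cdnjs:
new URI('https://cdn.jsdelivr.net/npm/lodash@4.17.15/')

// ...and then a single require replaces the two-stage import:
var _ = require('fp');
$gc('_', _);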
transformer
var _ = $gc('_');
logger.info(JSON.stringify(_.chunk(2)([1,2,3,4])));
Note: This is the correct way to call fp's chunk. In your OP you were calling it with the standard chunk syntax.
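To make the difference concrete, here is the same call in both styles (both produce the same result; only the calling convention differs):

// standard lodash: data first, then size
_.chunk([1, 2, 3, 4], 2);    // => [[1, 2], [3, 4]]

// lodash/fp: auto-curried, size first, data last
_.chunk(2)([1, 2, 3, 4]);    // => [[1, 2], [3, 4]]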
Additional Commentary
I think it's probably ok to do it this way where you download the library once at deploy time and store it in the globalChannelMap, then retrieve it from the map where needed. It would probably also work to store the require object itself in the map if you wanted to call it elsewhere. It will cache and reuse the object created for future calls to the same resource.
I would not create new Require objects anywhere but the deploy script, or you will be redownloading the resource on every message (or every poll, in the case of a JavaScript Reader).
Edit: I guess for an internal webhost, this could be desirable in a JavaScript Reader if you intend for it to pick up changes immediately on the next poll without a redeploy, assuming you would be upgrading the library in place instead of incrementing a version.
The benefit to using Code Templates, as Vibin suggested, is that they get compiled directly into your channel at deploy time, and there is no additional fetching step at runtime. Making the library available is as simple as assigning it to your channel.
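For comparison, a minimal sketch of the Code Template route (chunkPairs is a hypothetical function you would define in Mirth's Code Templates editor and expose to the channel via its template library):

// Defined as a Code Template function; compiled into the channel at deploy
// time, so there is no fetching at runtime.
function chunkPairs(arr) {
    var result = [];
    for (var i = 0; i < arr.length; i += 2) {
        result.push(arr.slice(i, i + 2));
    }
    return result;
}

// In any transformer of a channel the template library is assigned to:
logger.info(JSON.stringify(chunkPairs([1, 2, 3, 4]))); // [[1,2],[3,4]]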

Even though importing third-party libraries could be an option, I was actually looking into this so our team could write our own custom functions, write unit tests for them, and lastly be able to pull that code into Mirth. I was experimenting with lodash, but using it was never my end goal. My solution was to do a REST GET call with Java in the global script, using the same code as in my original question, but with the URL pointing at the raw GitHub URL of the function I want to pull in.
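A minimal sketch of that approach in the global deploy script (the repository URL below is a made-up placeholder; the fetch logic mirrors the code from the question):

// Fetch an internal library from a raw GitHub URL at deploy time.
// The URL is hypothetical; substitute your own repository's raw URL.
var url = new java.net.URL('https://raw.githubusercontent.com/my-org/mirth-utils/master/utils.js');
var conn = url.openConnection();
conn.setRequestMethod('GET');
if (conn.getResponseCode() === 200) {
    var src = org.apache.commons.io.IOUtils.toString(conn.getInputStream(), 'UTF-8');
    globalMap.put('utilsSrc', src); // retrieve with $g('utilsSrc') wherever it's needed
}
conn.disconnect();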

Related

Is there a way of saving a Google Doc so it has the same unique ID as an existing doc?

I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Script routines for creating a deep copy of a document, but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple of methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
    var smallPrintId = "some id";
    var largePrintId = "some other id";

    // You must first enable the Drive "Advanced Service" before this will work.
    // Get the file metadata of the to-be-updated file.
    var lpFile = Drive.Files.get(largePrintId);

    // View available options on the relevant Drive REST API pages.
    var options = {
        updateViewedDate: false,
    };

    // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
    // client library's implementation: https://issuetracker.google.com/issues/36765129
    var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);

    // Replace the contents of the large print version with that of the small print version.
    Drive.Files.update(lpFile, lpFile.id, blob, options);
}

// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
    var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
    var resp = UrlFetchApp.fetch(url, {
        headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
    });
    return resp.getBlob();
}

Firebase and Angularfire nightmare migration for Update

I am new to Firebase and I am having a bit of a nightmare trying to adapt old code around what is now deprecated and what is not. I am trying to write a function which updates one "single" record in my data source using the now-approved $save() promise, but it is doing some really strange stuff to my data source.
My function should enable you to modify a single record and then update the posts JSON array. However, instead of doing this, it deletes the whole data source on the Firebase server; it is lucky that I am only working with test data at this point, because otherwise everything would be gone.
$scope.update = function() {
    var fb = new Firebase("https://mysource.firebaseio.com/Articles/" + $scope.postToUpdate.$id);
    var article = $firebaseObject(ref);
    article.$save({
        Title: $scope.postToUpdate.Title,
        Body: $scope.postToUpdate.Body
    }).then(function(ref) {
        $('#editModal').modal('hide');
        console.log($scope.postToUpdate);
    }, function(error) {
        console.log("Error:", error);
    });
}
Funnily enough I then get a warning in the console "after" I click the button:
Storing data using array indices in Firebase can result in unexpected behavior. See https://www.firebase.com/docs/web/guide/understanding-data.html#section-arrays-in-firebase for more information. Also note that you probably wanted $firebaseArray and not $firebaseObject.
(No shit?) I am assuming here that $save() is not the right call, so what is the equivalent of $routeParams/$firebase $update() to do a simple binding of the modified data and my source? I have been spending hours on this and really don't know what the right solution is.
Unless there's additional code that you've left out, your article $firebaseObject should most likely use the fb variable you created just before it.
var article = $firebaseObject(fb);
Additionally, the way in which you're using $save() is incorrect. You need to modify the properties on the $firebaseObject directly and then call $save() with no arguments. See the docs for more.
article.Title = $scope.postToUpdate.Title;
article.Body = $scope.postToUpdate.Body;
article.$save().then(...
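Putting both fixes together, a sketch of the corrected handler (the $loaded() wait is an extra precaution I've added so a not-yet-synced object isn't saved over the record; the modal and logging calls are kept from the question):

$scope.update = function() {
    var fb = new Firebase("https://mysource.firebaseio.com/Articles/" + $scope.postToUpdate.$id);
    var article = $firebaseObject(fb); // was $firebaseObject(ref)

    article.$loaded().then(function() {
        // modify the object's properties directly, then save with no arguments
        article.Title = $scope.postToUpdate.Title;
        article.Body = $scope.postToUpdate.Body;
        return article.$save();
    }).then(function(ref) {
        $('#editModal').modal('hide');
    }, function(error) {
        console.log("Error:", error);
    });
};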

Google Maps from Firefox addon (without SDK)

I need to add a map to my add-on, and I know how to do what I need in a "common webpage", like I did here: http://jsfiddle.net/hCymP/6/
The problem is I really don't know how to do the same in a Firefox add-on. I tried importing the scripts with loadSubScript, and also tried adding a chrome HTML page with the next line:
<script src="https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false"></script>
But nothing works. The best solution I found was to add part of the code from this file (the code behind the script src) to my function and import the rest with loadSubScript; the whole function executes, but an empty div is returned.
Components.utils.import("resource://gre/modules/Services.jsm");
window.google = {};
window.google.maps = {};
window.google.maps.modules = {};
var modules = window.google.maps.modules;
var loadScriptTime = (new window.Date).getTime();
window.google.maps.__gjsload__ = function(name, text) { modules[name] = text;};
window.google.maps.Load = function(apiLoad) {
    delete window.google.maps.Load;
    apiLoad([0.009999999776482582,[[["https://mts0.googleapis.com/vt?lyrs=m@227000000\u0026src=api\u0026hl=en-US\u0026","https://mts1.googleapis.com/vt?lyrs=m@227000000\u0026src=api\u0026hl=en-US\u0026"],null,null,null,null,"m@227000000"],[["https://khms0.googleapis.com/kh?v=134\u0026hl=en-US\u0026","https://khms1.googleapis.com/kh?v=134\u0026hl=en-US\u0026"],null,null,null,1,"134"],[["https://mts0.googleapis.com/vt?lyrs=h@227000000\u0026src=api\u0026hl=en-US\u0026","https://mts1.googleapis.com/vt?lyrs=h@227000000\u0026src=api\u0026hl=en-US\u0026"],null,null,null,null,"h@227000000"],[["https://mts0.googleapis.com/vt?lyrs=t@131,r@227000000\u0026src=api\u0026hl=en-US\u0026","https://mts1.googleapis.com/vt?lyrs=t@131,r@227000000\u0026src=api\u0026hl=en-US\u0026"],null,null,null,null,"t@131,r@227000000"],null,null,[["https://cbks0.googleapis.com/cbk?","https://cbks1.googleapis.com/cbk?"]],[["https://khms0.googleapis.com/kh?v=80\u0026hl=en-US\u0026","https://khms1.googleapis.com/kh?v=80\u0026hl=en-US\u0026"],null,null,null,null,"80"],[["https://mts0.googleapis.com/mapslt?hl=en-US\u0026","https://mts1.googleapis.com/mapslt?hl=en-US\u0026"]],[["https://mts0.googleapis.com/mapslt/ft?hl=en-US\u0026","https://mts1.googleapis.com/mapslt/ft?hl=en-US\u0026"]],[["https://mts0.googleapis.com/vt?hl=en-US\u0026","https://mts1.googleapis.com/vt?hl=en-US\u0026"]],[["https://mts0.googleapis.com/mapslt/loom?hl=en-US\u0026","https://mts1.googleapis.com/mapslt/loom?hl=en-US\u0026"]],[["https://mts0.googleapis.com/mapslt?hl=en-US\u0026","https://mts1.googleapis.com/mapslt?hl=en-US\u0026"]],[["https://mts0.googleapis.com/mapslt/ft?hl=en-US\u0026","https://mts1.googleapis.com/mapslt/ft?hl=en-US\u0026"]]],["en-US","US",null,0,null,null,"https://maps.gstatic.com/mapfiles/","https://csi.gstatic.com","https://maps.googleapis.com","https://maps.googleapis.com"],["https://maps.gstatic.com/intl/en_us/mapfiles/api-3/13/11","3.13.11"],[3047554353],1.0,null,null,null,null,1,"",null,null,1,"https://khms.googleapis.com/mz?v=134\u0026",null,"https://earthbuilder.googleapis.com","https://earthbuilder.googleapis.com",null,"https://mts.googleapis.com/vt/icon"], loadScriptTime);
};
// I can't use document.write, so I use loadSubScript instead
Services.scriptloader.loadSubScript("chrome://googleMaps/content/Google-Maps-V3.js", window, "utf8"); // actual add-on uses chrome://MoWA/content/Google-Maps-V3.js
var mapContainer = window.content.document.createElement('canvas');
mapContainer.setAttribute('id', "map");
mapContainer.setAttribute('style', "width: 500px; height: 300px");
mapContainer.style.backgroundColor = "red";

var mapOptions = {
    center: new window.google.maps.LatLng(latitude, longitude),
    zoom: 5,
    mapTypeId: window.google.maps.MapTypeId.ROADMAP
};

var map = new window.google.maps.Map(mapContainer, mapOptions);
return mapContainer;
Can you help me? I'm developing a "Firefox for Android" add-on, and that's why I need to do things like window.content.document.createElement: document is not declared, only window, and I think that may be the problem... But I can't declare everything if I don't know what Google Maps uses.
Added: I also read that the Google Maps API Team has specific code that disallows you from copying the main script locally. In particular, that code "expires" every so many hours. I copied part of this script: https://maps.googleapis.com/maps/api/js?v=3.exp&sensor=false into my code because I can't execute it directly (Error: write called on an object that does not implement interface HTMLDocument). So I don't have any alternative!
Use an iframe (type=content if in XUL) to display web content. There you can include whatever scripts you like. The content in the iframe will not have any special privileges, or at least should not. If you need to communicate with the privileged add-on part of your code, you can use e.g. regular HTML events (createEvent, addEventListener and friends) or the postMessage web API to pass messages.
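A rough sketch of that pattern (the map.html page and the message shape are illustrative assumptions, not a specific Firefox API):

// In the privileged add-on code: embed an unprivileged content iframe
var doc = window.content.document;
var iframe = doc.createElement('iframe');
iframe.setAttribute('src', 'https://example.com/map.html'); // hypothetical page hosting your map markup + scripts
doc.body.appendChild(iframe);

// Listen for messages coming up from the page
window.addEventListener('message', function(event) {
    if (event.data && event.data.type === 'map-ready') {
        // react to the map page here
    }
}, false);

// Inside map.html (unprivileged), report back to the add-on:
// window.parent.postMessage({ type: 'map-ready' }, '*');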
Do not try to load remote code directly into other pages, or worse, into the browser, as this is a compatibility and security nightmare.
Because loading remote code, or code not properly reviewed for running in a privileged context, is dangerous, the platform will refuse to load such scripts from remote sources (http, etc.) via loadSubScript, etc.
It should be noted that if you'd later like to host your add-on on addons.mozilla.org and still include remote scripts in privileged code, your add-on will be rejected until you fix that.
Also, Mozilla might blocklist your add-on even if you host it elsewhere, if it is discovered that there are known security vulnerabilities in your add-on, per the Add-on Guidelines.

Can I use other node.js libraries in Meteor?

I was playing around with an idea and wanted to get some JSON from another site. I found that with node.js people seem to use http.get to accomplish this; however, I discovered it wasn't that easy in Meteor. Is there another way to do this, or a way to access http so I can call get? I want an interval that collects data from an external source to augment the data the clients interact with.
Looks like you can get at require this way:
var http = __meteor_bootstrap__.require('http');
Note that this'll probably only work on the server, so make sure it's protected with a check for Meteor.is_server.
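For instance, a sketch of that guard using plain node http (the URL is a placeholder):

if (Meteor.is_server) {
    // __meteor_bootstrap__ is an internal handle; this predates official HTTP support
    var http = __meteor_bootstrap__.require('http');

    http.get('http://example.com/data.json', function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () {
            console.log('fetched ' + body.length + ' bytes');
        });
    });
}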
This is much easier now with Meteor.http. First run meteor add http, then you can do something like this:
// common code
stats = new Meteor.Collection('stats');

// server code: poll service every 10 seconds, insert JSON result in DB.
Meteor.setInterval(function () {
    var res = Meteor.http.get(SOME_URL);
    if (res.statusCode === 200)
        stats.insert(res.data);
}, 10000);
You can use Meteor.http if you want to handle http. To add other node.js libraries you can use meteorhacks:npm
meteor add meteorhacks:npm
Create a packages.json file and add all the required package names and versions.
{
    "redis": "0.8.2",
    "github": "0.1.8"
}
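Then, on the server, the declared packages should be loadable through the Meteor.npmRequire wrapper that meteorhacks:npm provides:

// server code: load an npm package declared in packages.json
var redis = Meteor.npmRequire('redis');
var client = redis.createClient();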

Drupal node.save and JSONP

I am having an issue calling Drupal's node.save using MooTools' JSONP. Here is an example.
Here is my request:
callback: Request.JSONP.request_map.request_1
method: node.save
sessid: 123123123123123
node: {"type":"blog","title":"New Title","body":"This is the blog body"}
Here is my result
HTTP/1.0 500 Internal Server Error
I got this working before, but I used AMFPHP and was able to send objects to Drupal. I am assuming this has to do with Drupal expecting an object, but since it is a GET it gets transformed into a string. Is there any way of getting around this without hacking the code?
Here is my code:
$('newBlogSubmit').addEvent('click', function() {
    var node = {
        type: "blog",
        title: "New Title",
        body: "This is the blog body"
    };
    var string = JSON.encode(node);
    string.escapeRegExp();
    var sessID = _sessID;
    DrupalService.getInstance().node_save(string, sessID, drupal_handleBlogSubmit);
});
My Drupal Service JS Code:
//NODE
DrupalService.prototype.node_save = function(node, sessid, callback) {
    var dataObj = {
        method: "node.save",
        sessid: sessid,
        node: node
    };
    DrupalService.getInstance().request(dataObj, callback);
};

//SEND REQUEST AND CALLBACK FUNCTION
DrupalService.prototype.request = function(dataObject, callback) {
    new JsonP('http://myDrupalSite.com/services/json', { data: dataObject, onComplete: callback }).request();
};
I am trying to connect the dots. I'm not too familiar with Drupal, but I would guess all I need to do is turn the string back into an object. Any idea where I should be looking, or whether there is an existing patch?
A first question could be why you use MooTools, since Drupal comes with jQuery and uses it extensively throughout the different modules and Drupal core itself.
Anyway, I don't know MooTools, so I can't help you there, but if your request is ending in an internal server error, you have a problem with your Drupal code or your JS code. Even if I knew exactly what you were doing, I couldn't tell you the problem without looking at the Drupal code behind your http://myDrupalSite.com/services/json callback.
In general, what you want to make sure is:
You make a POST request, as Drupal will cache GETs, and the semantics are that you are posting data - the node - to the server.
Your data should be sent as POST params; this will make it end up in PHP's $_POST variable.
Your callback should validate the data and act accordingly, creating a node when the data is intact. You don't need session IDs, since the script will have the same session the browser has.
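For example, a rough sketch of the same call as a POST using MooTools' Request.JSON (this assumes the same service endpoint as the question; the exact parameter names depend on your Services module configuration):

new Request.JSON({
    url: 'http://myDrupalSite.com/services/json',
    method: 'post',
    onSuccess: drupal_handleBlogSubmit,
    onFailure: function() {
        console.log('node.save failed');
    }
}).post({
    method: 'node.save',
    sessid: _sessID,
    node: JSON.encode(node)
});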
I've answered a similar question in detail, which was about altering a field instead of saving a node, but much of the work is still the same. You can take a look at that post, although it uses jQuery and not MooTools.
