How to read site details in a dashlet controller in Alfresco

How can I directly get site details (like the ID and name) in the main() of a dashlet's controller JS in Alfresco?
I can use "Alfresco.constants.SITE" in the FTL file to read the site ID, but I need to know whether there is any key to read that data in the controller.

There isn't a service on the Share side which provides that information, because the information you want is only held on the repository. As such, you'll need to call one of the REST APIs on the repository to get the information you need.
Your code would want to look something like this:
// Call the repository for the site profile
var json = remote.call("/api/sites/" + page.url.templateArgs.site);
if (json.status == 200)
{
   // Create JavaScript objects from the repo response
   var obj = eval('(' + json + ')');
   if (obj)
   {
      var siteTitle = obj.title;
      var siteShortName = obj.shortName;
   }
}
You can see a fuller example of this in various Alfresco dashlets, e.g. the Dynamic Welcome dashlet.
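For completeness, here is a minimal sketch of a full controller that exposes those values to the dashlet's FTL view via the model; the model property names (siteTitle, siteShortName) are illustrative choices of mine, not anything mandated by Alfresco:
function main()
{
   // The site short name comes from the URL template arguments of the dashboard page
   var siteId = page.url.templateArgs.site;
   var json = remote.call("/api/sites/" + encodeURIComponent(siteId));
   if (json.status == 200)
   {
      var obj = eval('(' + json + ')');
      // Anything set on the model becomes available to the FTL template,
      // e.g. as ${siteTitle!""}
      model.siteTitle = obj.title;
      model.siteShortName = obj.shortName;
   }
}
main();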

Related

Download an externalReference from PlannerTask using the Graph API

I'd like to download a file attached to a PlannerTask. I already have the external references but I can't figure out how to access the file.
An external reference is a JSON object like this:
{
   "https%3A//contoso%2Esharepoint%2Ecom/sites/GroupName/Shared%20Documents/AnnualReport%2Epptx":
   {
      // ... snip ...
   }
}
I've tried to use the following endpoint
GET /groups/{group-id}/drive/root:/sites/GroupName/Shared%20Documents/AnnualReport%2Epptx
but I get a 404 response. Indeed, when I use the query in Graph Explorer it gives me a warning about "Invalid whitespace in URL" (?).
A workaround that I've found is to use the search endpoint to look for files like this:
GET /groups/{group-id}/drive/root/search(q='AnnualReport.pptx')
and hope the file name is unique.
Anyway, with both methods I need extra information (i.e. the group-id) that may not be readily available by the time I have the external reference object.
What is the proper way to get a download url for a driveItem that is referenced by an external reference object in a PlannerTask?
Do I really need the group-id to access such file?
The keys in external references are webUrl instances, so they can be used with the /shares/ endpoint. See this answer for details on how to do it.
When you get a driveItem object, the download URL is available from AdditionalData: driveItem.AdditionalData["@microsoft.graph.downloadUrl"]. You can use WebClient.DownloadFile to download the file on the local machine.
Here is the final code:
// The external-reference key is percent-encoded, so decode it back into a
// plain webUrl before building the share token (my assumption; adjust if
// your keys are already plain URLs)
var remoteUri = "https%3A//contoso%2Esharepoint%2Ecom/sites/GroupName/Shared%20Documents/AnnualReport%2Epptx";
var localFile = "/tmp/foo.pptx";
string sharingUrl = Uri.UnescapeDataString(remoteUri);
// Encode the URL as an unpadded, URL-safe Base64 share token with the "u!" prefix
string base64Value = System.Convert.ToBase64String(System.Text.Encoding.UTF8.GetBytes(sharingUrl));
string encodedUrl = "u!" + base64Value.TrimEnd('=').Replace('/', '_').Replace('+', '-');
DriveItem driveItem = m_graphClient
    .Shares[encodedUrl]
    .DriveItem
    .Request()
    .GetAsync().Result;
// The download URL is exposed in AdditionalData under the @microsoft.graph.downloadUrl key
using (WebClient client = new WebClient())
{
    client.DownloadFile(driveItem.AdditionalData["@microsoft.graph.downloadUrl"].ToString(),
        localFile);
}

How to send a POST request from GTM custom tag template?

I'm developing a simple custom tag template for Google Tag Manager. It's supposed to bind to some events and send event data to our servers as JSON in the body of a POST request.
The sandboxed GTM Javascript runtime provides the sendPixel() API. However, that only provides GET requests.
How does one send a POST request from within this sandboxed runtime?
You can use a combination of the injectScript and copyFromWindow APIs found in the Custom Template APIs documentation.
Basically, the workflow goes like this:
1. Build a simple script that contains a function, attached to the window object, that sends a normal XHR POST request. The script I made and use can be found here: https://storage.googleapis.com/common-scripts/basicMethods.js
2. Upload that script somewhere publicly accessible so you can import it into your template.
3. Use the injectScript API to add the script to your custom template.
4. The injectScript API wants you to provide an onSuccess callback function. Within that function, use the copyFromWindow API to grab the POST request function you created in your script and save it as a variable.
5. You can now use that variable to send a POST request the same way you would use a normal JS function.
The script I included above also contains JSON-encode and Base64-encode functions, which you can use the same way via the copyFromWindow API.
I hope that helps. If you need some specific code examples for parts of this, I can help.
Following Ian Mitchell's answer, I've made a similar solution.
This is the basic code pattern that can be used inside the GTM template code section in such a scenario:
const injectScript = require('injectScript');
const callInWindow = require('callInWindow');
const log = require('logToConsole');
const queryPermission = require('queryPermission');

const postScriptUrl = 'https://myPostScriptUrl'; // provide your script url
const endpoint = 'https://myEndpoint'; // provide your endpoint url

// Provide your data; the built-in data object contains all properties from the
// fields tab of the GTM template. Note: don't name this object "data", as that
// would shadow the built-in object it reads from.
const gtmData = {
  sessionId: data.sessionId,
  name: data.name,
  description: data.description
};

// Add the appropriate permission to inject a script from the
// 'https://myPostScriptUrl' url in the GTM template's privileges tab
if (queryPermission('inject_script', postScriptUrl)) {
  injectScript(postScriptUrl, onSuccess, data.gtmOnFailure, postScriptUrl);
} else {
  log('postScriptUrl: Script load failed due to permissions mismatch.');
  data.gtmOnFailure();
}

function onSuccess() {
  // Add the appropriate permission to call the `sendData` global in the GTM
  // template's privileges tab
  callInWindow('sendData', gtmData, endpoint);
  data.gtmOnSuccess();
}
It's important to remember to add all the necessary privileges inside the GTM template. The appropriate permissions will show up automatically in the privileges tab after you use the pertinent APIs inside the code section.
Your script at 'https://myPostScriptUrl' might look like this:
function sendData(data, endpoint) {
  var xhr = new XMLHttpRequest();
  var stringifiedData = JSON.stringify(data);
  xhr.open('POST', endpoint);
  xhr.setRequestHeader('Content-type', 'application/json');
  xhr.onload = function () {
    if (xhr.status.toString()[0] !== '2') {
      console.error(xhr.status + ' > ' + xhr.statusText);
    }
  };
  xhr.send(stringifiedData);
}
It is not strictly necessary to load an external script. While still a workaround, you can also pass a fetch reference into the tag through a "JavaScript Variable" type variable:
1. Create a GTM variable of type "JavaScript Variable" with the content "fetch", thus referencing window.fetch.
2. Add a text field to your custom tag, e.g. named "js.fetchReference".
3. Use data.fetchReference in your custom tag's code like you would normally use window.fetch.
4. Make sure the tag instance actually sets the field from step 2 to {{js.fetchReference}}, i.e. the variable created in step 1.
I jotted this down with screenshots at https://hume.dev/articles/post-request-custom-template/
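To make this concrete, here is a minimal, untested sketch of what the tag code could look like with this approach; the endpoint URL and the payload fields are illustrative assumptions:
const JSON = require('JSON');
// data.fetchReference resolves to window.fetch via the {{js.fetchReference}} variable
const fetchReference = data.fetchReference;
// Fire-and-forget POST; adjust the payload to your own template fields
fetchReference('https://example.com/collect', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({sessionId: data.sessionId})
});
data.gtmOnSuccess();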

How to read additional parameters in Alfresco 5.1.1 - Aikau faceted search

A custom search UI is populated when the user selects "Complex asset" in the advanced search screen drop-down (apart from Folders and Contents), where 12 fields are displayed. So when the user clicks the search button, we need to read those values and pass them to the Alfresco repo webscript files (org/alfresco/slingshot/search/search.get.js). We have already customized these files (search.get.js, search.lib.js) in the repository to suit our logic, and this works fine in 4.2.2. As we are migrating to 5.1.1, we need to move this logic into the customized faceted-search.get.js so that it reads these values. How do we write this logic in the customized faceted-search.get.js?
It's not actually possible to read those URL hash attributes in the faceted-search.get.js file because the JavaScript controller of the WebScript does not have access to that part of the URL (it only has information about the URL and the request parameters, not the hash parameters).
The hash parameters are actually handled on the client-side by the AlfSearchList widget.
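For orientation, hash parameters can only be read on the client side; here is a rough, illustrative sketch of how browser-side code could parse them from the URL (the Aikau widgets do something along these lines internally):
// Illustrative only: hash parameters never reach the server-side controller
var hashParams = {};
window.location.hash.replace(/^#/, "").split("&").forEach(function(pair) {
   if (pair) {
      var kv = pair.split("=");
      hashParams[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || "");
   }
});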
Maybe you could explain what you're trying to achieve so that I can suggest an alternative - i.e. the end goal for the user, not the specifics of the coding you're trying to achieve.
We read the query-string values in the .get.js file with something like the code below.
function getNodeRef() {
   var queryString = page.url.getQueryString();
   var nodeRef = "NOT FOUND";
   var stringArray = queryString.split("&");
   for (var t = 0; t < stringArray.length; t++) {
      if (stringArray[t].indexOf('nodeRef=') > -1) {
         nodeRef = stringArray[t].split('=')[1];
         break;
      }
   }
   if (nodeRef !== "NOT FOUND") {
      nodeRef = nodeRef.replace("://", "/");
      return nodeRef;
   }
   else {
      throw new Error("Node Reference is not found.");
   }
}
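As an illustration (the URL below is hypothetical), a request such as /share/page/custom-search?nodeRef=workspace://SpacesStore/abc123 would make this helper return "workspace/SpacesStore/abc123":
var nodeRef = getNodeRef();
// e.g. hand it to the template
model.nodeRef = nodeRef;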
This may help you; we will also wait for Dave Draper's suggestion.

Display Custom Dashlet to only specific user on site dashboard

I have created a custom dashlet and added it to the site dashboard.
But now my requirement is that I want to display that custom dashlet only to the site manager and hide it from all other users.
Can anyone help me with this? How can I hide the custom dashlet from all Consumers and Collaborators?
Thanks in advance.
In your controller JavaScript (a.k.a. the .get.js file), add an extra remote.call to get the groups of the current user, like:
var groupResult = remote.call("/api/people/" + stringUtils.urlEncode(user.name) + "?groups=true");
Use the result and eval it, then send it to your FreeMarker dashlet.
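A minimal sketch of that pattern, assuming the usual response shape of the /api/people webscript (the model property name personGroups is an illustrative choice):
var groupResult = remote.call("/api/people/" + stringUtils.urlEncode(user.name) + "?groups=true");
if (groupResult.status == 200)
{
   var person = eval('(' + groupResult + ')');
   // person.groups holds the user's group memberships; expose them to the view
   model.personGroups = person.groups;
}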
--- Update ---
You can also take a look at the default share-header webscript.
Take a look at the file org\alfresco\share\imports\share-header.lib.js
The snippet:
// Call the repository to see if the user is site manager or not
var userIsSiteManager = false,
    userIsMember = false;
json = remote.call("/api/sites/" + page.url.templateArgs.site + "/memberships/" + encodeURIComponent(user.name));
if (json.status == 200)
{
   var obj = eval('(' + json + ')');
   if (obj)
   {
      userIsMember = true;
      userIsSiteManager = obj.role == "SiteManager";
   }
}
siteData = {};
siteData.profile = profile; // "profile" is fetched earlier in share-header.lib.js
siteData.userIsSiteManager = userIsSiteManager;
siteData.userIsMember = userIsMember;
// Store this in the model to allow for repeat calls to the function (and therefore
// prevent multiple REST calls to the Repository)...
// It also needs to be set in the model as the "userIsSiteManager" is required by the template...
model.siteData = siteData;
So use this flag in an if statement in FreeMarker, e.g. wrap the dashlet markup in <#if siteData.userIsSiteManager> ... </#if>.

Upload files directly to Amazon S3 from ASP.NET application

My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I set up an ASP.NET upload page so the file the user uploads goes straight to Amazon S3, without using my web server's storage and bandwidth?
Update Feb 2016:
The AWS SDK can handle a lot more of this now. Check out how to build the form, and how to build the signature. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3.
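To make the direct-to-S3 approach concrete: assuming your server has already generated a short-lived pre-signed PUT URL for the object (the AWS SDKs can do this), the browser can upload the file itself so the bytes never touch your web server. A minimal, illustrative browser JavaScript sketch; the function name and URL handling are assumptions, and the Content-Type must match whatever was signed:
// file: a File object taken from an <input type="file"> element
// presignedUrl: generated server-side with your AWS credentials
function uploadDirectToS3(presignedUrl, file) {
   return fetch(presignedUrl, {
      method: 'PUT',
      headers: { 'Content-Type': file.type },
      body: file
   });
}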
If you need to upload large files and display a progress bar, you should consider the Flajaxian component.
It uses Flash to upload files directly to Amazon S3, saving your bandwidth.
This is the best and easiest way to upload files to Amazon S3 via ASP.NET. Have a look at the following blog post by me; I think it will help. There I explain everything from adding an S3 bucket and creating the API key to installing the Amazon SDK and writing the code to upload files. The following is sample code for uploading files to Amazon S3 with ASP.NET C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

/// <summary>
/// Summary description for AmazonUploader
/// </summary>
public class AmazonUploader
{
    public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3)
    {
        // Input explained:
        // localFilePath = we will use a file stream, instead of a path
        // bucketName : the name of the bucket in S3; the bucket should already be created
        // subDirectoryInBucket : if this string is not empty, the file will be uploaded to
        //     a subdirectory with this name
        // fileNameInS3 = the file name in S3

        // Create an instance of the IAmazonS3 class; in my case I chose RegionEndpoint.EUWest1.
        // You can change that to APNortheast1, APSoutheast1, APSoutheast2, CNNorth1,
        // SAEast1, USEast1, USGovCloudWest1, USWest1 or USWest2. This choice will not
        // store your file in a different cloud storage, but (I think) it differs in performance
        // depending on your location.
        IAmazonS3 client = new AmazonS3Client("Your Access Key", "Your Secret Key", Amazon.RegionEndpoint.USWest2);

        // Create a TransferUtility instance, passing it the IAmazonS3 created in the first step
        TransferUtility utility = new TransferUtility(client);
        // Make a TransferUtilityUploadRequest instance
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest();

        if (string.IsNullOrEmpty(subDirectoryInBucket))
        {
            request.BucketName = bucketName; // no subdirectory, just the bucket name
        }
        else
        {
            // subdirectory and bucket name
            request.BucketName = bucketName + @"/" + subDirectoryInBucket;
        }
        request.Key = fileNameInS3; // file name up in S3
        //request.FilePath = localFilePath; // local file name
        request.InputStream = localFilePath;
        // Note: PublicReadWrite is very permissive; consider a stricter ACL
        request.CannedACL = S3CannedACL.PublicReadWrite;
        utility.Upload(request); // commencing the transfer

        return true; // indicate that the file was sent
    }
}
Here you can use the function sendMyFileToS3 to upload a file stream to Amazon S3.
For more details, check my blog post at the following link:
Upload File to Amazon S3 via asp.net
I hope the above-mentioned link helps.
Look for a JavaScript library to handle the client-side upload of these files. I stumbled upon a JavaScript and PHP example; Dojo also seems to offer a client-side S3 file upload.
ThreeSharp is a library that facilitates interactions with Amazon S3 in a .NET environment.
You'll still need to host the logic to upload and send files to S3 in your MVC app, but you won't need to persist them on your server.
Save and get data in an AWS S3 bucket in ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
Note: you can get an access-denied error; to remove this, check your AWS account and grant read and write rights.
The namespaces below need to be added from the NuGet package:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    // Simple object put
    PutObjectRequest request = new PutObjectRequest()
    {
        ContentBody = "put your plain text here",
        ContentType = "text/plain",
        BucketName = "put your bucket name here",
        // Put a unique key here to uniquely identify your data;
        // you can pass any data with a unique id, e.g. a primary key from your db
        Key = "1"
    };
    PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
    // handle or log the error
}
Now go to your AWS account and check the bucket; you will find the data stored under the key "1" in the S3 bucket.
Note: if you get any other issue, please ask me a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    GetObjectRequest request = new GetObjectRequest()
    {
        BucketName = bucketName,
        Key = "1" // because we passed "1" as the unique key while saving
                  // the data to the S3 bucket
    };
    string vccEncryptedData;
    using (GetObjectResponse response = client.GetObject(request))
    {
        StreamReader reader = new StreamReader(response.ResponseStream);
        vccEncryptedData = reader.ReadToEnd();
    }
}
catch (AmazonS3Exception)
{
    throw;
}
