I have already implemented creating a document set at the library root level. For this I used the following link: Is it possible to create a project documentset using graph API?
I followed these steps:
1- Retrieve the document library's Drive Id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}?$expand=drive
2- Create the folder:
POST https://graph.microsoft.com/v1.0/drives/${library.drive.id}/root/children
The body of the request is the following:
{
  "name": "${folderName}",
  "folder": {}
}
3- Get the folder's SharePoint item id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/drives/${library.drive.id}/items/${folder.id}?$expand=sharepointIds
4- Update the item in the Document Library so that it becomes the desired Document Set:
PATCH https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${sharepointIds.listItemId}
I send the following body with the PATCH request:
{
"contentType": {
"id": "content-type-id-of-the-document-set"
},
"fields": {}
}
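Put together, the four steps above look roughly like this (a minimal sketch in JavaScript with fetch; the helper name createDocumentSet and its parameters are just for illustration, and error handling is left out):
// Hypothetical helper chaining the four steps above (Node 18+, global fetch).
// accessToken, siteId, listId, folderName and contentTypeId are assumed inputs.
const graph = "https://graph.microsoft.com/v1.0";

async function createDocumentSet(accessToken, siteId, listId, folderName, contentTypeId) {
  const headers = {
    Authorization: `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  };

  // 1. Retrieve the document library's drive id
  const library = await (await fetch(
    `${graph}/sites/${siteId}/lists/${listId}?$expand=drive`, { headers })).json();

  // 2. Create the folder at the library root
  const folder = await (await fetch(`${graph}/drives/${library.drive.id}/root/children`, {
    method: "POST",
    headers,
    body: JSON.stringify({ name: folderName, folder: {} }),
  })).json();

  // 3. Get the folder's SharePoint list item id
  const item = await (await fetch(
    `${graph}/sites/${siteId}/drives/${library.drive.id}/items/${folder.id}?$expand=sharepointIds`,
    { headers })).json();

  // 4. Patch the list item's content type so it becomes the document set
  await fetch(`${graph}/sites/${siteId}/lists/${listId}/items/${item.sharepointIds.listItemId}`, {
    method: "PATCH",
    headers,
    body: JSON.stringify({ contentType: { id: contentTypeId }, fields: {} }),
  });
}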
I'm now looking at how to create a document set in a specific folder in SharePoint.
For example, I want to create the following folder structure.
The documents folder is at the library root, and I want to create a document set named billing inside it.
documents
|_ billing
   |_ 2021
      |_ 11
         |_ 01
            |_ document1.pdf
            |_ document2.pdf
            |_ document3.pdf
         |_ 02
            ...
         |_ 03
            ...
         |_ 04
         |_ 10
            ...
thanks
I'm doing something similar, though I'm a little behind you and haven't yet created the Document Set (almost there!), but may I respectfully challenge your approach?
I'm not sure it's a good idea to mix Folders and Document Sets, mainly because a Folder breaks the metadata flow; you could achieve the same results using Document Sets alone.
I am assuming that the 'day' in your data structure above is your Document Set (containing document1.pdf, etc.). You might want to consider creating a 'Billing' Document Set and either specifically adding a Date field to the Document Set metadata or, perhaps better still, just using the standard Created On metadata and then creating Views suitably filtered/grouped/sorted on that date.
This way you can also create filtered views for 'Client' or 'Invoice' or 'Financial Year' or whatever.
As soon as your documents exist in a folder, you can no longer filter/sort/group the document library based on metadata.
FURTHER INFORMATION
I am personally structuring my Sales document library thus:
Name: Opportunity; Content Type: Document Set; Metadata: Client Name, Client Address, Client Contact
Name: Proposal; Content Type: Document; Metadata: Proposal ID, Version
Name: Quote; Content Type: Document; Metadata: Quote ID, Version
Etc...
This way the basic SharePoint view is a list of Opportunities (Document Sets), inside which are Proposals, Quotes etc., but I can also filter the view to just show Proposals (i.e. filter by Content Type), or search for a specific Proposal ID, or group by Client Name, then sort chronologically, or by Proposal ID etc.
I'm just saying that you get a lot more flexibility if you avoid using Folders entirely.
p.s. I've been researching for days now how to create Document Sets with graph, it never occurred to me that it might be a two-step process i.e. create the folder, then patch its content type. Many thanks for your post!!
Just re-read your post; my assumption that the 'day' would be your document set was incorrect. In this case, there would be no benefit in having a Document Set containing Folders, because the moment a Folder exists in the Document Set, metadata flow stops, and the only reason (well, the main reason*) to use Document Sets in preference to Folders is that metadata flow.
*Document Sets also allow you to automatically create a set of documents based on defined templates.
I'm currently working with an output from the Drupal json-api module and have noticed that the structure of the output forces an O(n^2) time complexity issue on the front end, by forcing the front-end developers to reformat the JSON output given to them so an attachment can be in the same object as the entity it belongs to.
Example
So let's say I'm listing a bunch of categories with their thumbnails to be used on the front end. What a json output would normally look like for that is something like:
Normal category json structure
[
{
"uid":123,
"category_name":"cars",
"slug":"cars",
"thumbnail":"example.com/cars.jpg"
},
{
"uid":124,
"category_name":"sports",
"slug":"sports",
"thumbnail":"example.com/sports.jpg"
}
]
With Drupal, it seems that thumbnails live in their own "included" list, separate from "data", which creates the O(n^2) issue. For example:
I make a get request using this endpoint:
example.com/jsonapi/taxonomy_term/genre?fields[taxonomy_term--genre]=name,path,field_genre_image,vid&include=field_genre_image
The structure of the data returned from the Drupal JSON:API module is going to be similar to this:
Pseudo-code for better readability
{
"data":[
{
"uid":123,
"category_name":"cars",
"slug":"cars",
"relationships":{
"thumbnail":{
"id":123
}
}
},
{
"uid":124,
"category_name":"sports",
"slug":"sports",
"relationships":{
"thumbnail":{
"id":124
}
}
}
],
"included":[
{
"type":"file",
"id":123,
"path":"example.com/cars.jpg"
},
{
"type":"file",
"id":124,
"path":"example.com/sports.jpg"
}
]
}
The problem with the Drupal output is that I have to loop through the data and then, inside that loop, loop through the includes to attach each thumbnail to its category, causing O(n^2) work on the frontend.
Is there a way for the frontend to request categories from the Drupal JSON:API module with the thumbnail embedded in each category, like the normal JSON output above, without having to restructure the response on the frontend?
Please note I am not a Drupal developer, so my terminology might be off.
JSON:API outputs a list of entities in data and includes another list of entities (possibly of different types) in included. Each entity has a UUID, so accessing them can be O(log n), or effectively O(1), if you build an index on their UUIDs.
So you would have one loop to parse each included entity and store and index it (like a database index), or simply loop over all included entities and build one map whose keys are the UUIDs and whose values are the entity objects. Each thumbnail can then be attached in a single pass over data, as in the sketch below.
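A minimal sketch in JavaScript, assuming the simplified response shape from the question (the relationship name "thumbnail" and the path field are taken from the pseudo-code above and may differ in your real payload):
// Index "included" by id, then attach thumbnails in a single pass over "data".
function attachThumbnails(response) {
  // One pass over "included": build a lookup table keyed by entity id.
  const includedById = new Map();
  for (const entity of response.included || []) {
    includedById.set(entity.id, entity);
  }

  // One pass over "data": resolve each thumbnail via the lookup table (O(1) each).
  return response.data.map((category) => {
    const thumbRef = category.relationships?.thumbnail;
    const file = thumbRef ? includedById.get(thumbRef.id) : undefined;
    return {
      uid: category.uid,
      category_name: category.category_name,
      slug: category.slug,
      thumbnail: file ? file.path : null,
    };
  });
}
This is one pass over included plus one pass over data, i.e. O(n + m) instead of O(n^2).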
I have a unique requirement from my company to have a page for each sub-folder in a particular folder of Alfresco Share. So basically there would be hundreds of sub-folders and correspondingly hundreds of pages representing them. The page for a folder should have links to its sub-folders, and maybe even the documents within it, in the form of a collapsible list as shown:
Folder 1
- Category 1
  - Doc 1
  - Doc 2
- Category 2
  - Sub-category 1
    - doc 3
I want to have something like shown above on one side of the page, and the other side should have all the recent activities related to the folder, like who added a doc, what edits were made, whether there were any comments, etc. I searched a lot for this but I am not sure if Alfresco supports this kind of customization. I found some really good tutorials on creating custom pages in Share using JSON widgets, but I don't think they would help in this case. Another option would be to generate an HTML page for every new folder created and populate it using JavaScript, but this method wouldn't have much flexibility in terms of designing the page. Does anyone know of a better approach or idea for this requirement? I would really appreciate any thoughts on this.
I'll just write it as an answer (relating to my previous comment). I've done something similar in this way (using the link provided in the comments):
Create a simple Alfresco web script that returns JSON of what you need (in your case, recently modified documents). I've done it by listing a folder; this is mywebscript.get.json.ftl:
{
"docprop" : [
<#list companyhome.childByNamePath["MyFolder"].children as child>
{
"name" : "${child.properties.name}" ,
"author" : "${child.properties["cm:author"]}",
"CreatedDate" : "${child.properties.created?datetime}"
}
<#if child_has_next> , </#if>
</#list>
]
}
Create the Share widget controller file, retrievedoc.get.js, where you call this web script:
// Connect to the "alfresco" endpoint (the repository tier)
var connector = remote.connect("alfresco");
// Call the repository web script; its URL is declared in mywebscript.get.desc.xml
var data = connector.get("/mywebscript.json");
// Parse the JSON response and expose it to the presentation template
var result = eval('(' + data + ')');
model.docprop = result["docprop"];
Create the Share widget presentation template, retrievedoc.get.html.ftl:
<div class="dashlet">
<div class="title">${msg("header.retrievedocTitle")}</div>
<div class="body retrievedoc">
<table>
<tr>
<th>Name: </th>
<th>Author: </th>
<th>Created: </th>
</tr>
<#list docprop as t>
<tr>
<td>${t.name}</td>
<td>${t.author}</td>
<td>${t.CreatedDate}</td>
</tr>
</#list>
</table>
</div>
</div>
You then need to register your widget in Share and use it in your dashboard. It will call the Alfresco script and populate the widget with the results. Obviously you need to change your Alfresco script to return recent activities (you could make a query like: all documents modified in the last 24 hours, or something like that; see the sketch below), but the method is the same.
Hope it helps.
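For the "recent activities" variant, a rough sketch of the repository-side controller could look like this (recent-activity.get.js is a hypothetical name, and both the FTS query syntax and the date formatting via utils.toISO8601 are assumptions to check against your Alfresco version):
// recent-activity.get.js -- hypothetical repository web script controller that
// returns documents modified in the last 24 hours. Iterate "docs" in the JSON
// FreeMarker template instead of companyhome.childByNamePath.
var yesterday = new Date(new Date().getTime() - 24 * 60 * 60 * 1000);
var from = utils.toISO8601(yesterday); // assumption: ScriptUtils date formatting
model.docs = search.query({
    query: 'TYPE:"cm:content" AND cm:modified:["' + from + '" TO MAX]',
    language: 'fts-alfresco'
});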
You could create a new folder tree component in Alfresco Share to meet your requirement.
An Alfresco Share page is made up of multiple components, which are largely self-sufficient in terms of data and dependencies (excluding a few common Alfresco dependencies).
Here is an outline of the approach:
Create a folder tree component in Alfresco Share, which is nothing but a web script that renders the related web scripts' output on the page in which the component is included.
Create a dynamic YUI tree with some dummy data and check whether you are able to render it (just to make sure you have all the dependencies included).
Create a data web script on the repository side which fetches the folder structure from the repository. Make it so that if you pass a folder nodeRef, it returns all the children under it (a rough sketch follows after these steps). There is a similar web script available out of the box; maybe you could reuse that.
Once you have that web script working properly, call it to populate your dynamic tree and remove the dummy data.
I hope this gives you a good starting point.
You will certainly find documentation for each of these steps.
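As an illustration, a minimal sketch of such a repository-tier data web script controller (the script name folder-children.get.js and the nodeRef argument are hypothetical, not the out-of-the-box web script):
// folder-children.get.js -- hypothetical repository-tier web script controller.
// Expects a "nodeRef" URL argument and exposes the folder's immediate children
// through the model, to be rendered by a matching JSON FreeMarker template.
var folder = search.findNode(args.nodeRef);
var children = [];
if (folder != null && folder.isContainer) {
    for (var i = 0; i < folder.children.length; i++) {
        var child = folder.children[i];
        children.push({
            name: child.properties.name,
            nodeRef: child.nodeRef.toString(),
            isFolder: child.isContainer
        });
    }
}
model.children = children;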
I'm looking to use the Artifactory property search
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ArtifactSearch%28QuickSearch%29
Currently this will pull JSON listing any artifact that matches my properties:
"results" : [
{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver/lib-ver.pom"
},{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver2/lib-ver2.pom"
}
]
I need to be able to filter the artifacts I get back, as I'm only interested in a certain classifier. The GAVC search supports this with &c=classifier.
I can do it in code if this isn't possible via the REST interface.
Any help appreciated.
Since the release of AQL in Artifactory 3.5, it's now the official and the preferred way to find artifacts.
Here's an example similar to your needs:
items.find(
    {
        "$and": [
            {"@license": {"$eq": "GPL"}},
            {"@version": {"$match": "1.1.*"}},
            {"name": {"$match": "*.jar"}}
        ]
    }
)
To run the query in Artifactory, copy the query to a file and name it aql.query
Run the following command from the directory that contains the aql.query file
curl -X POST -uUSER:PASSWORD 'http://HOST:PORT/artifactory/api/search/aql' -Taql.query
Don't forget to replace the placeholders (USER, PASSWORD, HOST and PORT) with real values.
In the example:
The first two criteria filter items by properties.
The third criterion filters items by the artifact name (in our case the artifact name should end with .jar).
For more details on how to write AQL queries, see the AQL documentation.
Old answer
Currently you can't combine the property search with the GAVC search.
So you have two options:
Execute one of them (whichever gives you more precise results) and then filter the JSON list on the client with a script (see the sketch below).
Write an execution user plugin that executes the search using the Searches service and then filters the results on the server side.
Of course, the latter is preferable.
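For the first option, here is a rough client-side sketch in JavaScript (Node.js 18+ for the global fetch); the property pair p1=v1, the HOST:PORT and USER:PASSWORD placeholders, and the assumption that the classifier appears as a file-name suffix such as -sources.jar all need to be adjusted to your setup:
// filter-by-classifier.js -- hypothetical client-side filter for option 1.
// Runs the property search, then keeps only the URIs whose file name ends
// with the requested classifier suffix (e.g. "-sources.jar").
const classifier = "sources";
const auth = "Basic " + Buffer.from("USER:PASSWORD").toString("base64");

fetch("http://HOST:PORT/artifactory/api/search/prop?p1=v1", {
  headers: { Authorization: auth },
})
  .then((res) => res.json())
  .then((body) => {
    const matching = body.results
      .map((r) => r.uri)
      .filter((uri) => uri.endsWith("-" + classifier + ".jar"));
    console.log(matching);
  });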
First, the overall description:
There are two Component Templates, NewsArticle and NewsList. NewsArticle is a Dreamweaver Template, and is used to display the content of a news article. NewsList is an xml file that contains aggregated information about all of the news articles.
Currently, a content author must publish the news article, and then re-publish the newslist to regenerate the xml.
Problem:
I have been tasked with having the publish of a news article also regenerate and publish the newslist. Through C#, I am able to retrieve the content of the newslist component, generate the updated xml from the news article, and merge it into the xml from the newslist. I am running into trouble getting the newslist to publish.
I have limited access to documentation, but from what I do have, I believe using the static PublishEngine.Publish method will allow me to do what I need. I believe the first parameter (items) is just a list that contains my updated newslist, and the second parameter is a new PublishInstruction with the RenderInstruction.RenderMode set to Publish. I am a little lost on what the publicationTargets should be.
Am I on the right track? If so, any help with the Publish method call is appreciated, and if not, any suggestions?
Like Quirijn suggested, a broker query is the cleanest approach.
In situations where a broker isn't available (i.e. a static publishing model only), I usually generate the newslist XML from a TBB that adds the XML as a binary, rather than kicking off publishing of another component or page. You can do this by calling this method in your C# TBB:
engine.PublishingContext.RenderedItem.AddBinary(
Stream yourXmlContentConvertedToMemoryStream,
string filename,
StructureGroup location,
string variantId,
string mimeType)
Make the variantId unique per the newslist XML file that you create, so that different components can overwrite/update the same file.
Better yet, do this in a Page Template rather than Component Template so that the news list is generated once per page, rather than per component (if you have multiple articles per page).
You are on the right track here with the PublishEngine.Publish() method:
PublishEngine.Publish(
    new IdentifiableObject[] { linkedComponent },
    engine.PublishingContext.PublishInstruction,
    new List<PublicationTarget>() { engine.PublishingContext.PublicationTarget });
You can just reuse the PublishInstruction and Target from the current context of your template. This sample shows a Component, but it should work in a page too.
One thing to keep in mind is that this is not possible in SDL Tridion 2011 SP1, as the publish action is not allowed out of the box due to security restrictions. I have an article about this here http://www.tridiondeveloper.com/the-story-of-sdl-tridion-2011-custom-resolver-and-the-allowwriteoperationsintemplates-attribute
I have the following filter for one of my profiles:
filter type: Include Pattern Only
filter field: user_defined_variable (AUTO)
filter pattern: \[53\]
case sensitive: no
In my content, I have the following javascript:
_userv=0;
urchinTracker();
__utmSetVar("various string in here");
Now, the issue is that in this profile, there are files showing up in the report that shouldn't. For instance, for a specific profile, in the Webmaster View > Content By Title, a page with the following variable (as seen in the source) shows up:
__utmSetVar("[3][345]")
I have no idea why this is happening. The filter pattern doesn't match thus it shouldn't show up.
It turns out that it is supposed to include files that have a different pattern. The reason is that it reports on all the files that were seen during a single visit, which includes other files with different custom variables.
To see the report on the custom vars:
Marketing Optimization
- Visitor segment performance
-- User defined