Using the R googledrive package to create a custom property

According to the documentation for drive_mkdir() in the googledrive package: "Named parameters to pass along to the Drive API. Has the tidy dots semantics that come from using rlang::list2(). You can affect the metadata of the target file by specifying properties of the Files resource via .... Read the "Request body" section of the Drive API docs for the associated endpoint to learn about relevant parameters." The Google Drive API documentation describes custom properties as follows: "Custom file properties are key/value pairs used to store custom metadata for a file, such as tags, IDs from other data stores, information shared between workflow applications, and so on. To add properties visible to all apps, use the properties field of files resource."
The Files resource reference lists the field as:
"properties": {
  (key): string
}
I've tried a number of different ways to pass a value, but nothing seems to work. Does anyone have an example of adding a custom property while creating a folder?
Here is one example I've tried that does not work:
GDriveTarget <- "https://drive.google.com/drive/u/1/folders/'etc'"
drID <- drive_get(GDriveTarget, verbose = TRUE)
gProps <- list2(properties = c("Region", "Far Northeast"))
curFolder <- drive_mkdir(
  name = "School Folders",
  path = GDriveTarget,
  gProps
)
This results in:
Error: These parameters are unknown:
Backtrace:
googledrive::drive_mkdir(...)
googledrive::drive_create(...)
googledrive::request_generate(...)
gargle::request_develop(endpoint = ept, params = params)
gargle:::check_params(params, endpoint$parameters)
gargle:::stop_bad_params(unknown, reason = "unknown")
Removing "gProps" creates the desired folder, so I have the proper rights. I'm just unsure how to pass the parameters Google is expecting. When I use the "try this API" tool on Google Developer, what it is expecting is:
{
  "properties": {
    "Region": "Far Northeast"
  }
}
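For reference, a minimal sketch of what might produce that nested shape (untested against the live API): since the dots in drive_mkdir() have rlang::list2() semantics, properties can be supplied as a named list directly, or built first and spliced in with !!!:

library(googledrive)
library(rlang)

# Supply `properties` as a *named list* so it serializes to
# {"properties": {"Region": "Far Northeast"}}
curFolder <- drive_mkdir(
  name = "School Folders",
  path = GDriveTarget,
  properties = list(Region = "Far Northeast")
)

# Or build the list first and splice it into the dots with !!!
gProps <- list2(properties = list(Region = "Far Northeast"))
curFolder <- drive_mkdir(
  name = "School Folders",
  path = GDriveTarget,
  !!!gProps
)

The likely culprit in the attempt above is that gProps is passed as a single unnamed argument rather than spliced, so the request never carries a parameter named properties; in addition, c("Region", "Far Northeast") is a flat character vector rather than the named key/value pair the endpoint expects.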

Related

Create document set in Sharepoint with Graph API in a subfolder

I already implemented the creation of a document set at library root level. For this I used the following link: Is it possible to create a project documentset using graph API?
I followed these steps:
1- Retrieve the document library's Drive Id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}?$expand=drive
2- Create the folder:
POST https://graph.microsoft.com/v1.0/drives/${library.drive.id}/root/children
The body of the request is the following:
{
  "name": "${folderName}",
  "folder": {}
}
3- Get the folder's SharePoint item id:
GET https://graph.microsoft.com/v1.0/sites/${siteId}/drives/${library.drive.id}/items/${folder.id}?expand=sharepointids
4- Update the item in the document library so that its content type changes to the desired Document Set:
PATCH https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${sharepointIds.listItemId}
I send the following body with the PATCH request:
{
  "contentType": {
    "id": "content-type-id-of-the-document-set"
  },
  "fields": {}
}
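Putting steps 2 and 4 together, a consolidated sketch might look like the following. Note that Microsoft Graph also supports path-based addressing, so the folder-creation POST can target a subfolder instead of the drive root; the parent path documents below is a hypothetical example:

POST https://graph.microsoft.com/v1.0/drives/${library.drive.id}/root:/documents:/children
{
  "name": "billing",
  "folder": {}
}

PATCH https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${sharepointIds.listItemId}
{
  "contentType": {
    "id": "content-type-id-of-the-document-set"
  },
  "fields": {}
}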
I'm now looking at how to create a document set in a specific folder in SharePoint.
For example, I want to create the following folder structure: the documents folder is at the library root, and I want to create a document set named billing inside it.
documents
|_ billing
   |_ 2021
      |_ 11
         |_ 01
            |_ document1.pdf
            |_ document2.pdf
            |_ document3.pdf
         |_ 02
            ...
         |_ 03
            ...
         |_ 04
      |_ 10
         ...
thanks
I'm doing something similar but I'm a little behind you, haven't yet created the Document Set (almost there!), but may I respectfully challenge your approach?
I'm not sure it's a good idea to mix Folders and Document Sets, mainly because a Folder breaks the metadata flow; you could achieve the same results using just Document Sets.
I am assuming that your 'day' in the data structure above is your Document Set (containing document1.pdf, etc.) You might want to consider creating a 'Billing' Document Set and either specifically add a Date field to the Document Set metadata or, perhaps better still, just use the standard Created On metadata and then create Views suitably filtered/grouped/sorted on that date.
This way you can also create filtered views for 'Client' or 'Invoice' or 'Financial Year' or whatever.
As soon as your documents exist in a folder, you can no longer filter/sort/group the document library based on metadata.
FURTHER INFORMATION
I am personally structuring my Sales document library thus:
Name: Opportunity; Content Type: Document Set; Metadata: Client Name, Client Address, Client Contact
Name: Proposal; Content Type: Document; Metadata: Proposal ID, Version
Name: Quote; Content Type: Document; Metadata: Quote ID, Version
Etc...
This way the basic SharePoint view is a list of Opportunities (Document Sets), inside which are Proposals, Quotes etc., but I can also filter the view to just show Proposals (i.e. filter by Content Type), or search for a specific Proposal ID, or group by Client Name, then sort chronologically, or by Proposal ID etc.
I'm just saying that you get a lot more flexibility if you avoid using Folders entirely.
p.s. I've been researching for days now how to create Document Sets with graph, it never occurred to me that it might be a two-step process i.e. create the folder, then patch its content type. Many thanks for your post!!
Just re-read your post and my assumption that the 'day' would be your document set was incorrect. In this case, there would be no benefit having a Document Set containing Folders because the moment a Folder exists in the Document Set, metadata flow stops, and the only reason (well, the main reason*) to use Document Sets in preference to Folders is that metadata flow.
*Document Sets also allow you to automatically create a set of documents based on defined templates.

Getting revision history of a document in .NET using Google Drive API

I am trying to find a way to get the revision history of a Google Doc.
I have the following code which returns the Revisions:
// Build the Drive service and list all revisions of the file.
var _driveService = GetDriveServiceInstance();
RevisionList revisions = _driveService.Revisions.List(fileId).Execute();
But I cannot get the changes made to the document, for example words added or removed.
I found this resource where they achieved the same task in R:
library(httr)  # modify_url() comes from httr

url <- modify_url(
  url = "https://docs.google.com/feeds/download/documents/export/Export",
  query = list(
    id = fileId,
    revision = revisionId,
    exportFormat = "txt"
  )
)
In this code, they run a query in which the revisionId and fileId can be provided. But I could not find a way to incorporate these parameters into Revisions.List(fileId) in my own ASP.NET code.
How can I do this? Is there a way? I could not find any resources online.
You are using the list method [1], which retrieves the list of all revisions with the attributes of each revision; these are specified here [2]. You can use the get method if you want to fetch a specific revision by its revision ID [3].
As you can see, there is no attribute that tells you whether words were added or removed. But the exportLinks attribute is a JSON object with various links to download the file (as it was after the changes in that revision) in different file types (HTML, PDF, etc.) [4].
In the resource you posted, they use a workaround to obtain the links, because the URL always has the same format and you only need to change the parameters in the URL (file ID, revision ID, and file type):
https://docs.google.com/feeds/download/documents/export/Export?id=FileID&revision=RevisionID&exportFormat=FileType
You would have to fetch those URLs to obtain the file in your code, read it, and see what changes have been made. Also, keep in mind that if the file is not public to everyone, you'll need credentials with the right permissions to download it (via browser or code).
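For example, a minimal R sketch with httr, in the spirit of the resource quoted above (fileId, revisionId, and token are placeholders; the token must be a valid OAuth2 access token with read access to the file):

library(httr)

# Build the export URL for one specific revision of the file.
url <- sprintf(
  "https://docs.google.com/feeds/download/documents/export/Export?id=%s&revision=%s&exportFormat=txt",
  fileId, revisionId
)

# Download that revision as plain text.
res <- GET(url, add_headers(Authorization = paste("Bearer", token)))
revision_text <- content(res, as = "text")

Downloading two consecutive revisions this way and diffing the resulting text is one way to see which words were added or removed.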
[1] https://developers.google.com/resources/api-libraries/documentation/drive/v3/csharp/latest/classGoogle_1_1Apis_1_1Drive_1_1v3_1_1RevisionsResource_1_1ListRequest.html
[2] https://developers.google.com/drive/api/v3/reference/revisions
[3] https://developers.google.com/resources/api-libraries/documentation/drive/v3/csharp/latest/classGoogle_1_1Apis_1_1Drive_1_1v3_1_1RevisionsResource_1_1GetRequest.html
[4] https://developers.google.com/resources/api-libraries/documentation/drive/v3/csharp/latest/classGoogle_1_1Apis_1_1Drive_1_1v3_1_1Data_1_1Revision.html

Invalid schema for REST JSON CreatePassengerNameRecord response

I am trying to test the integration for the Sabre CreatePassengerNameRecord REST API. As a first step, I downloaded the JSON schemas for the request and response and tried to generate the POJOs using jsonschema2pojo. But the schema files all point to dependent references using the URL http://services.sabre.com, which does not exist, so POJO generation fails for both the request and the response. I was able to fix the request schema by changing the URL for the XMLSchemaTypes.json dependency to the URL provided in the documentation, but the response has a reference that is not specified anywhere (see the Response Schema section below).
API Link: https://developer.sabre.com/docs/rest_apis/air/book/create_passenger_name_record/
Response Schema:
In the response, there is a reference to http://services.sabre.com/STL_Payload/v02_02, which does not exist.
File : http://files.developer.sabre.com/doc/providerdoc/STPS/create_passenger_name_record/v200/CreatePassengerNameRecord2.0.0RS.json
....
"CreatePassengerNameRecordRS" : {
  "type" : "object",
  "title" : "CreatePassengerNameRecordRS",
  "properties" : {
    "version" : {
      "type" : "string",
      "minLength" : 1,
      "maxLength" : 255
    },
    "ApplicationResults" : {
      "$ref" : "http://services.sabre.com/STL_Payload/v02_02#/definitions/ApplicationResults"
....
It would be great if you could provide the latest file for the STL_Payload or update the documentation to the latest working version.
The missing files were added to the documentation page.
This should allow you to move forward.
The ids are still used as tags and not as absolute resource pointers, so you still need to work with them the way you described to make the auto-generation work out of the box.
We will consider your request to convert them to resource pointers in the future.
Just one more hint: if you are using the Java API version of jsonschema2pojo, use this config:
GenerationConfig config = new DefaultGenerationConfig() {
    @Override
    public String getRefFragmentPathDelimiters() {
        return "#/";
    }
};
You need this because the default path delimiters in jsonschema2pojo are "#/." and the "." does not work with some of the types declared in the schema, such as Text.Long.
+1. Sabre, please make the JSON schema available on http://services.sabre.com, as the current situation is problematic when generating models using Quicktype: types are not resolved correctly.

Aggregating a list of http log paths in kibana

I have an nginx->fluentd->elasticsearch->kibana stack up and running. I'm trying to figure out whether I can do something like a "terms" panel, but on the path component of the logged URLs. Using a terms panel directly on that field results in the top used words from the paths; e.g. for Drupal it shows "node" as the most popular, which is quite useless without the actual node id.
Is that something that is possible to do with elasticsearch?
Update: Here's a sample of my logs:
"path": "/node/123"
"path": "/node/456"
"path": "/user/create"
If I add a "terms" panel for "path" field, I get columns for "node", "user", "create", which make no statistical sense. What I need is a terms panel that aggregates on unique field values, not unique word parts of the field.
You need to configure Elasticsearch's mapping to set your "path" field as "not_analyzed". The default setting is "analyzed": by default, ES parses string fields and divides them into multiple tokens when possible, which is probably what happened in your case. See this related question.
As for how to configure Elasticsearch's mapping, I am also still digging, having a similar problem myself with multi-token strings I want to be able to sort on. It seems there is a put mapping API, as well as the possibility of using config files; see here.
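As a sketch of what that mapping might look like (the index and type names here are placeholders, and note that the mapping of an existing field cannot be changed in place, so this has to go into an index template or a freshly created index before documents are indexed):

curl -XPUT 'http://localhost:9200/logstash-2014.01.01/_mapping/nginx' -d '
{
  "nginx": {
    "properties": {
      "path": {"type": "string", "index": "not_analyzed"}
    }
  }
}'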

Filter property based searches in Artifactory

I'm looking to use the Artifactory property search
https://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-ArtifactSearch%28QuickSearch%29
Currently this pulls back JSON listing every artifact that matches my properties:
"results" : [
{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver/lib-ver.pom"
},{
"uri": "http://localhost:8080/artifactory/api/storage/libs-release-local/org/acme/lib/ver2/lib-ver2.pom"
}
]
I need to be able to filter the artifacts I get back, as I'm only interested in a certain classifier. The GAVC search supports this with &c=classifier.
I can do it in code if this isn't possible via the interface.
Any help appreciated.
Since the release of AQL in Artifactory 3.5, it is the official and preferred way to find artifacts.
Here's an example similar to your needs:
items.find(
  {
    "$and": [
      {"@license": {"$eq": "GPL"}},
      {"@version": {"$match": "1.1.*"}},
      {"name": {"$match": "*.jar"}}
    ]
  }
)
To run the query in Artifactory, copy it to a file named aql.query.
Run the following command from the directory that contains the aql.query file:
curl -X POST -uUSER:PASSWORD 'http://HOST:PORT/artifactory/api/search/aql' -Taql.query
Don't forget to replace the placeholders (USER, PASSWORD, HOST, and PORT) with real values.
In the example:
The first two criteria filter items by properties.
The third criterion filters items by artifact name (in our case the artifact name should end with .jar).
More details on how to write AQL queries are in the AQL documentation.
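As an aside, for the classifier filter asked about in the question: Maven classifiers are embedded in the artifact file name (artifact-version-classifier.ext), so a name pattern may be enough. For example, to keep only artifacts with the sources classifier (the repo name here is a placeholder):

items.find(
  {
    "repo": "libs-release-local",
    "name": {"$match": "*-sources.jar"}
  }
)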
Old answer
Currently you can't combine the property search with the GAVC search.
So you have two options:
Execute one of them (whichever gives you more precise results) and then filter the JSON list on the client with a script.
Write an execution user plugin that will perform the search using the Searches service and then filter the results on the server side.
Of course, the latter is preferable.
