Making recursive async requests using the Play! WSClient

I would appreciate any hints on how to make recursive requests with the WSClient. I am accessing a REST API which returns nodes of a tree in JSON format. For example, this would be the root node:
{
  "id": "root",
  "children": [
    {
      "id": "node1",
      "children": [...]
    },
    {
      "id": "node2",
      "children": [...]
    },
    {
      "id": "node3",
      "children": [...]
    }
  ]
}
To access each node, the URL pattern is
root/node1/node1-1
What I would like to do is traverse the whole tree and collect information from the nodes according to some criteria.
Thanks in advance

This task is essentially a twin of the classic directory-tree traversal.
I am pretty sure you can do this with plain recursion, but using Akka is a cleaner way to do it.
Here is an example that uses Akka actors to traverse a directory tree:
https://gist.github.com/TheDIM47/8bfa2bbf80e791c00e73
You can use Java as well, but it's more verbose.
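If you go the plain-recursion route instead, here is a minimal sketch using only WSClient and Futures. It assumes the JSON shape from the question and that a child's URL is its parent's path plus the child's id; baseUrl and the matches predicate are placeholders you would supply.

import play.api.libs.json._
import play.api.libs.ws._
import scala.concurrent.{ExecutionContext, Future}

// Recursively fetch a node and all of its descendants, collecting the
// paths of the nodes that satisfy the given predicate.
def traverse(ws: WSClient, baseUrl: String, path: String)
            (matches: JsValue => Boolean)
            (implicit ec: ExecutionContext): Future[Seq[String]] =
  ws.url(s"$baseUrl/$path").get().flatMap { response =>
    val node     = response.json
    val children = (node \ "children").asOpt[Seq[JsValue]].getOrElse(Seq.empty)
    val childIds = children.flatMap(c => (c \ "id").asOpt[String])
    // One recursive request per child; Future.sequence gathers the results.
    Future.sequence(childIds.map(id => traverse(ws, baseUrl, s"$path/$id")(matches)))
      .map { nested =>
        val own = if (matches(node)) Seq(path) else Seq.empty[String]
        own ++ nested.flatten
      }
  }

A call like traverse(ws, "http://api.example.com", "root")(node => ...) then yields a Future of all matching node paths. Be aware that this fans out sibling requests concurrently, which may need throttling on a large tree.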

Related

How to create a search Action in Alfresco

I am using Alfresco Enterprise 6.2. Similar to the live search, I am creating a search Action for folders in the document library.
I have updated custom-actions.js as follows:
onActionSearch: function dla_onActionSearch(record) {
  window.open(Alfresco.constants.PAGECONTEXT + 'dp/ws/faceted-search?', "_self");
}
I have also added a folder scope in faceted-search.get.js, as below. I have hardcoded the value folder1 just to test whether it works:
scopeOptions.push({
  id: "FCTSRCH_SET_FOLDER_SCOPE",
  name: "alfresco/menus/AlfCheckableMenuItem",
  config: {
    label: "folder",
    value: "folder1",
    group: "SEARCHLIST_SCOPE",
    publishTopic: "ALF_SEARCHLIST_SCOPE_SELECTION",
    checked: false,
    hashName: "scope",
    publishPayload: {
      label: "folder",
      value: "folder1"
    }
  }
});
However, it does not consider the folder scope when performing the search. Instead, it considers 'folder1' to be a site. How can I correctly perform a search within a folder scope?
Please check the widget below; it always treats the scope as a siteId:
https://dev.alfresco.com/resource/docs/aikau-jsdoc/AlfSearchList.js_.html

JSON Path not working properly with Athena

I have a Lambda function that converts my logs to this format:
{
  "events": [
    {
      "field1": "value",
      "field2": "value",
      "field3": "value"
    }, (...)
  ]
}
When I query it on S3, I get it in this format:
[
  {
    "events": [
      { (...) }
    ]
  }
]
I'm trying to run a custom classifier for it, because the data I want is in the objects inside 'events', not in 'events' itself.
So I started with the simplest path I could think of that worked in my tests (https://jsonpath.curiousconcept.com/):
$.events[*]
And, sure, it worked in the tests, but when I run a crawler against the file, the created table includes only an events field with a struct inside it.
So I tried a bunch of other paths:
$[*].events
$[*].['events']
$[*].['events'].[*]
$.[*].events[*]
$.events[*].[*]
Some of these do not even make sense, and every single one of them got me a schema with an events field marked as an array.
Can anyone point me in a better direction to handle this issue?
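For what it's worth, this is the general shape of attaching a custom JSON classifier to a crawler via the AWS CLI; the classifier and crawler names are placeholders, and the JsonPath shown is just one of the candidates above, not a confirmed fix:

aws glue create-classifier --json-classifier '{"Name": "events-classifier", "JsonPath": "$[*].events[*]"}'
aws glue update-crawler --name my-crawler --classifiers events-classifier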

GAE endpoints generates wrong discovery doc

I have upgraded to the latest Cloud Endpoints 2.0, as well as endpoints_proto_datastore at its latest commit. When I now try to generate the API discovery doc, I get the following error messages:
Method user.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
Method position.update specifies path parameters but you are not using a ResourceContainer This will fail in future releases; please switch to using ResourceContainer as soon as possible
The only two available endpoints are the following two methods, which update the User and Position models:
@User.method(name='user.update', path='users/{id}', http_method='PUT')
def UserUpdate(self, user):
    """Update a user resource."""
    user.put()
    return user

@Position.method(name='position.update', path='positions/{id}', http_method='PUT')
def PositionUpdate(self, position):
    """Update a position resource."""
    position.put()
    return position
Before upgrading to Cloud Endpoints 2.0, everything worked fine. But now, if I take a look into the generated discovery file, both endpoints have a ProtorpcMessagesCombinedContainer in their request. The combined container itself, however, is defined with the properties of the Position model!
This is how both methods' request attributes are defined:
"request": {
"$ref": "ProtorpcMessagesCombinedContainer",
"parameterName": "resource"
},
And this is the definition of the combined container (which has the properties of the Position model):
"ProtorpcMessagesCombinedContainer": {
"id": "ProtorpcMessagesCombinedContainer",
"type": "object",
"properties": {
"displayName": {
"type": "string"
},
"shortName": {
"type": "string"
}
}
},
Has anyone else had this issue with GAE and Cloud Endpoints 2.0?
What am I doing wrong? Usually endpoints-proto-datastore should handle the ResourceContainer and the method's path parameters. Also, endpoints-proto-datastore hasn't been updated in years ... I really don't know where the error comes from.
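For reference, this is roughly what the warning asks for in plain Cloud Endpoints, without endpoints-proto-datastore; the message class and field names below are made up for illustration, since normally the library builds the container for you:

import endpoints
from protorpc import messages, remote


class UserMessage(messages.Message):
    """Hypothetical request/response body for the update call."""
    displayName = messages.StringField(1)
    shortName = messages.StringField(2)


# A ResourceContainer combines the request body with the {id} path
# parameter, which is what the discovery-doc warning is asking for.
USER_UPDATE_CONTAINER = endpoints.ResourceContainer(
    UserMessage,
    id=messages.StringField(1, required=True))


@endpoints.api(name='example', version='v1')
class ExampleApi(remote.Service):

    @endpoints.method(USER_UPDATE_CONTAINER, UserMessage,
                      path='users/{id}', http_method='PUT',
                      name='user.update')
    def user_update(self, request):
        # request.id holds the path parameter; the body fields are
        # available on request as well.
        return UserMessage(displayName=request.displayName,
                           shortName=request.shortName)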
Thanks for your help!

How to delete a large node in Firebase

I have a Firebase child node with about 15,000,000 child objects and a total size of about 8 GB of data.
Example data structure:
firebase.com/childNode/$pushKey
Each $pushKey contains a small, flat dictionary:
{a: 1.0, b: 2.0, c: 3.0}
I would like to delete this data as efficiently and easily as possible. How?
What I tried:
My first try was a PUT request:
PUT firebase.com/childNode.json?auth=FIRE_SECRET
data-raw: null
response: {
"error": "Data requested exceeds the maximum size that can be accessed with a single request. Contact support#firebase.com for help."
}
So that didn't work; let's try a limited request:
PUT firebase.com/childNode.json?auth=FIRE_SECRET&orderBy="$key"&limitToFirst=100
data-raw: null
response: {
"error": "Querying related parameters not supported on this request type"
}
No luck so far :( What about writing a script that will get the first X keys and then create a PATCH request with each value set to null?
GET firebase.com/childNode.json?auth=FIRE_SECRET&shallow=true&orderBy="$key"&limitToLast=100
{
"error" : "Mixing 'shallow' and querying parameters is not supported"
}
This one is really not going to be easy, is it? I could remove the shallow requirement, get the keys, and finish the script; I was just hoping there would be an easier/more efficient way???
Another thing I tried was to create a Node script that listens for child_added and then directly tries to remove those children:
ref.authWithCustomToken(AUTH_TOKEN, function(error, authData) {
  if (error) { console.log("Login Failed!", error) }
  if (!error) { console.log("Login Succeeded!", authData) }
  ref.child("childNode").on("child_added", function(snap) {
    console.log(`found: ${snap.key()}`)
    ref.child("childNode").child(snap.key()).remove(function(err) {
      if (!err) { console.log(`deleted: ${snap.key()}`) }
    })
  })
})
This script actually hangs right now, but earlier I did receive something like a maximum call stack warning from Firebase. I know this is not a Firebase problem, but I don't see any particularly easy way to solve it.
Downloading a shallow tree will download only the keys, so instead of asking the server to order and limit, you can download all the keys.
Then you can order and limit them client-side and send delete requests to Firebase in batches.
You can use this script for inspiration: https://gist.github.com/wilhuff/b78e7391396e09f6c614
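As a rough sketch of that batched approach over the REST API (assuming a runtime with a global fetch, such as Node 18+; the database URL, secret, and batch size below are placeholders):

// Sketch only: shallow-fetch all keys, then null them out in batches.
const BASE = "https://example.firebaseio.com/childNode";
const AUTH = "FIRE_SECRET";
const BATCH_SIZE = 100;

async function deleteAll() {
  // shallow=true returns { key1: true, key2: true, ... } without values.
  const res = await fetch(`${BASE}.json?auth=${AUTH}&shallow=true`);
  const keys = Object.keys((await res.json()) || {});
  for (let i = 0; i < keys.length; i += BATCH_SIZE) {
    // A PATCH with { key: null, ... } deletes each listed child.
    const updates = {};
    for (const key of keys.slice(i, i + BATCH_SIZE)) updates[key] = null;
    await fetch(`${BASE}.json?auth=${AUTH}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(updates),
    });
    console.log(`deleted ${Math.min(i + BATCH_SIZE, keys.length)} of ${keys.length}`);
  }
}

deleteAll().catch(console.error);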
Use the Firebase CLI tool for this: firebase database:remove --project .
In the browser console, this is the fastest way (each pass removes only the first 10,000 children, so run it repeatedly):
database.ref('data').limitToFirst(10000).once('value', snap => {
  var updates = {};
  snap.forEach(child => {
    updates[child.key] = null;
  });
  database.ref('data').update(updates);
});

What is the gradle way to filter a subset of files in src/main/webapp?

I'm converting a Maven build to Gradle and want opinions on the best way to perform the following. I currently have multiple files under src/main/webapp; some need to be filtered one way and some another.
Notionally, under src/main/webapp I have a directory foo containing HTML and binaries, and under webapp many other files, including HTML. I want to filter just the foo/*.html files.
In my notional build.gradle I can either do:
import org.apache.tools.ant.filters.ReplaceTokens // needed for ReplaceTokens

war {
    eachFile {
        if (shouldFilter(it)) {
            it.filter(ReplaceTokens, tokens: [key: 'value'])
        }
    }
}

def shouldFilter(input) {
    input.path.contains('foo') && input.name.endsWith('.html')
}
or move each subset into its own directory that is not copied by default
war {
    from('src/main/foo-pre-filter') {
        into 'foo'
        include '*.html'
        filter(ReplaceTokens, tokens: [key: 'value'])
    }
}
Or is there another option I missed?
If I understand the question correctly, you can use filesMatching. Also, I would do it as part of the processResources task, as opposed to the war task. It would look something like this:
processResources {
    filesMatching('foo/*.html') {
        filter(ReplaceTokens, tokens: [key: 'value'])
    }
}
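One caveat: processResources handles src/main/resources, so for files under src/main/webapp the filtering still needs to be attached to the war task, as the next answer shows.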
I realize the initial question was asked 2 years ago, so this probably won't help the asker, but perhaps it could help someone else in the future.
I bumped into the same question today and couldn't find a specific, working example right away in any of the Google search results; one of those results led me here. After some tries, I finally got it to work. Below is a working task:
war {
    filesMatching("**/foo/*.html") {
        filter(ReplaceTokens, tokens: [key: 'value'])
    }
}
Link: filesMatching
