How cfs:s3 uploading works - Meteor

I've previously used Slingshot, and the process is pretty simple: we upload the image and it returns the URL of the uploaded file in the S3 bucket.
Now I want to resize the image and perform some other operations on it, so I switched to the cfs:s3 package. But when I try to upload the image, it returns a record with no URL, and in the DB it is stored as:
{
  "_id" : "Rwa7Xo65pv6cAP2aY",
  "copies" : {
    "thumbs" : {
      "name" : "306032-facebook.jpg",
      "type" : "image/jpeg",
      "size" : 4262,
      "key" : "thumbs/Rwa7Xo65pv6cAP2aY-306032-facebook.jpg",
      "updatedAt" : ISODate("2015-02-14T06:44:04.476Z"),
      "createdAt" : ISODate("2015-02-14T06:44:04.476Z")
    }
  },
  "original" : {
    "name" : "306032-facebook.jpg",
    "updatedAt" : ISODate("2015-01-30T09:48:58.000Z"),
    "size" : 4262,
    "type" : "image/jpeg"
  },
  "uploadedAt" : ISODate("2015-02-14T06:43:59.062Z")
}
How can I get the URL from this record? (I suppose it is the key in thumbs.) Is it linking my server URL to the Amazon S3 URL?
What are the advantages of this method over Slingshot?
How do I know when the upload is complete? I can't figure out any UI helpers; are there any reactive helpers to track upload progress?

The URL will be your S3 endpoint (e.g. s3-us-west-2.amazonaws.com/) + the key. I suggest creating a constant for your endpoint and registering a helper to return the URL.
Something like this:
Template.registerHelper('THUMBS_URL', function (key) {
  return S3_ENDPOINT + key;
});
I haven't used Slingshot yet, so I can't comment on it.
There is an isUploaded helper function in CollectionFS:
https://github.com/CollectionFS/Meteor-CollectionFS#isuploaded
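For illustration, here is a minimal sketch putting the endpoint constant, the key from the record above, and isUploaded together. The template and collection names (image, Images) are placeholders, S3_ENDPOINT is a constant you define yourself, and the template's data context is assumed to hold the file's _id:
Template.image.helpers({
  thumbUrl: function () {
    // findOne() on an FS.Collection is reactive, so this helper re-runs
    // as the upload completes and the "thumbs" copy is generated.
    var image = Images.findOne(this._id);
    if (image && image.isUploaded() && image.copies && image.copies.thumbs) {
      // Endpoint + key, exactly as stored in the record shown in the question.
      return S3_ENDPOINT + image.copies.thumbs.key;
    }
    return null; // still uploading, or the thumbnail hasn't been generated yet
  }
});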

Related

AWS AppSync - DeleteItem doesn't execute response mapping template

When attempting to delete an item using the following request mapping:
{
  "version" : "2017-02-28",
  "operation" : "DeleteItem",
  "key" : {
    "id" : { "S" : "$ctx.args.id" },
    "sortKey" : { "S" : "$ctx.args.sortKey" }
  }
}
If the item exists, the result is processed through the response template; however, when the item does not exist, the response template is never run.
Response template:
#set($ctx.result.status = "SUCCESS")
#set($ctx.result.message = "This was a success!")
$utils.toJson($ctx.result)
I am aware that when an item does not exist in DynamoDB no action is performed, but I would expect the request to still be processed through the template.
Is there anything I am missing, or is it impossible for AppSync to process a DeleteItem request through the response mapping when the item does not exist?
This is the expected execution behavior for the version of the request mapping template you are using (2017-02-28).
You can switch your request mapping template version to 2018-05-29 and your response mapping template will be executed, with the following characteristics:
If the datasource invocation result is null, the response mapping template is executed.
If the datasource invocation yields an error, it is now up to you to handle the error. The invocation error is accessible using $ctx.error.
The response mapping template evaluated result will always be placed inside the GraphQL response data block. You can also raise or append an error using $util.error() and $util.appendError() respectively.
More info: https://docs.aws.amazon.com/appsync/latest/devguide/resolver-mapping-template-changelog.html#aws-appsync-resolver-mapping-template-version-2018-05-29
So for your example:
{
  "version" : "2018-05-29", ## Note the new version
  "operation" : "DeleteItem",
  "key" : {
    "id" : { "S" : "$ctx.args.id" },
    "sortKey" : { "S" : "$ctx.args.sortKey" }
  }
}
and the response template:
#if( $ctx.error )
  $util.error($ctx.error.message, $ctx.error.type)
#end
#set($ctx.result.status = "SUCCESS")
#set($ctx.result.message = "This was a success!")
$utils.toJson($ctx.result)

Artifactory aql: find builds of job with given property

I am trying to query which build number(s) of build foo produced artifacts carrying the property vcs.Revision=aabbccddee123456.
In Artifactory 5.1.3.
This is what I have tried so far:
curl -u user:apikey -i -X POST https://artifactory.foobar.com/artifactory/api/search/aql -H "content-type:text/plain" -T query.json
query.json:
builds.find(
  {
    "module.artifact.item.repo" : "snapshot-local",
    "name" : "foo",
    "module.artifact.item.@vcs.Revision" : "aabbccddee123456"
  }
)
However, none of these three criteria seems to be individually correct:
builds.find({"module.artifact.item.repo":"snapshot-local"})
returns nothing,
builds.find({"name":"foo"})
returns the same empty response,
builds.find({"module.artifact.item.#vcs.Revision":"aabbccddee123456"}) also returns this:
{
  "results" : [ ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 0,
    "total" : 0
  }
}
What am I doing wrong here? In the web app I do see the builds I published under this name, with the correct artifact properties.
Here's a working solution that will give build numbers (since giving admin rights to query builds is not a solution for us):
query.json:
items.find(
  {
    "repo" : "snapshot-local",
    "artifact.module.build.name" : "foo",
    "artifact.item.@vcs.Revision" : "aabbccddee123456"
  }
).include("artifact.module.build.number")
This returns a list of all the artifacts that were built with the relevant properties, with the build number attached, e.g.:
{
  "results" : [ {
    "repo" : "snapshot-local",
    "path" : "foo/42",
    "name" : "a.out",
    "type" : "file",
    "size" : 123456789,
    "created" : "2018-07-05T12:34:56.789+09:00",
    "created_by" : "jenkins",
    "modified" : "2018-07-05T12:34:56.789+09:00",
    "modified_by" : "jenkins",
    "updated" : "2018-07-05T12:34:56.789+09:00",
    "artifacts" : [ {
      "modules" : [ {
        "builds" : [ {
          "build.number" : "42"
        } ]
      } ]
    } ]
  },
  [SNIP]
  }
  ],
  "range" : {
    "start_pos" : 0,
    "end_pos" : 30,
    "total" : 30
  }
}
I can then parse this to extract build.number.
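For example, a small Node.js sketch of that parsing step; the nesting mirrors the response above, and rawBody is just a placeholder for the JSON string returned by the curl call:
const response = JSON.parse(rawBody);
const buildNumbers = new Set();
for (const item of response.results) {
  // Walk the nested artifacts -> modules -> builds structure of each result.
  for (const artifact of item.artifacts || []) {
    for (const mod of artifact.modules || []) {
      for (const build of mod.builds || []) {
        buildNumbers.add(build["build.number"]);
      }
    }
  }
}
console.log([...buildNumbers]); // e.g. ["42"]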
Certain AQL queries require a user with admin permissions.
To ensure that non-privileged users do not gain access to information without the right permissions, users without admin privileges have the following restrictions:
The primary domain in the query may only be item.
The following three fields must be included in the include directive: name, repo, and path.
In your case, you are using the builds domain in the query, which requires admin permissions.

How to make consistent delete in Firebase database when the data lies in multiple paths in a fan out way? [duplicate]

This question already has an answer here:
Firebase -- Bulk delete child nodes
With Firebase, fanning out data to different nodes and paths is recommended, as in the example below from a Firebase sample:
{
  "post-comments" : {
    "PostId1" : {
      "CommentID1" : {
        "author" : "User1",
        "text" : "Comment1!",
        "uid" : "UserId1"
      }
    }
  },
  "posts" : {
    "PostId1" : {
      "author" : "user1",
      "body" : "Firebase Mobile platform",
      "starCount" : 1,
      "stars" : {
        "UserId1" : true
      },
      "title" : "About firebase",
      "uid" : "UserId1"
    }
  },
  "user-posts" : {
    "UserId1" : {
      "PostId1" : {
        "author" : "user1",
        "body" : "Firebase Mobile platform",
        "starCount" : 1,
        "stars" : {
          "UserId1" : true
        },
        "title" : "About firebase",
        "uid" : "UserId1"
      }
    }
  },
  "users" : {
    "UserId1" : {
      "email" : "user1@gmail.com",
      "username" : "user1"
    }
  }
}
With multi-path updates we can atomically update all the paths for a post; however, if we want to delete a blog post in the above kind of schema, how can we do it atomically? There is no multi-path delete, I guess. If the client loses its network connection while deleting, only some of the paths would be deleted!
Also, suppose there is a requirement that when a user is deleted, we should remove that user's stars and unstar all the posts he has starred. This becomes difficult, as there is no direct tracking of which posts a user has starred. For this, do we need to fan out the starring of posts as well, e.g. with a user-stars node like the one below, so that while deleting a user we know what activity he has done and can act on it? Is there a better way of handling this?
"user-stars":{
"UserId1":{
"PostID1":true
}
}
In both cases, a way to atomically or consistently delete data from multiple paths (either all or nothing) seemingly is not available.
In that case, the only option available looks to be putting the delete command in a Firebase queue, which resolves the task only once everything has been deleted. That would be eventually consistent, which should be fine, but it is an expensive option requiring a server. Is there a better way?
You can implement a multi-path delete by writing a value of null to the paths.
So:
var updates = {
  "user-posts/UserId1/PostId1": null,
  "post-comments/PostId1": null,
  "posts/PostId1": null
};
ref.update(updates);
I had already answered this before: Firebase -- Bulk delete child nodes
It's also quite explicitly mentioned in the documentation on deleting data:
You can also delete by specifying null as the value for another write operation such as set() or update(). You can use this technique with update() to delete multiple children in a single API call.
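The same pattern extends to the user-deletion scenario in the question. Here is a sketch, assuming the user-stars index proposed there is maintained whenever a post is starred (the function name is just illustrative):
function deleteUser(ref, uid) {
  // Read the index of posts this user has starred, then build one update map
  // so the whole cleanup is a single atomic multi-path write.
  ref.child("user-stars/" + uid).once("value", function (snap) {
    var updates = {};
    Object.keys(snap.val() || {}).forEach(function (postId) {
      updates["posts/" + postId + "/stars/" + uid] = null;
      // Decrementing starCount too would require reading its current value
      // or using a transaction; it is omitted here.
    });
    updates["user-posts/" + uid] = null;
    updates["user-stars/" + uid] = null;
    updates["users/" + uid] = null;
    ref.update(updates); // all or nothing on the server
  });
}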

Firebase: structuring data via per-user copies? Risk of data corruption?

I am implementing an Android + Web (Angular) + Firebase app, which has a many-to-many relationship: User <-> Widget (Widgets can be shared with multiple users).
Considerations:
List all the Widgets that a User has.
A User can only see the Widgets which are shared to him/her.
Be able to see all Users to whom a given Widget is shared.
A single Widget can be owned/administered by multiple Users with equal rights (modify Widget and change to whom it is shared). Similar to how Google Drive does sharing to specific users.
One of the approaches to implementing fetching (join-style) would be to follow this advice: https://www.firebase.com/docs/android/guide/structuring-data.html ("Joining Flattened Data") via multiple listeners.
However, I have doubts about this approach, because I have discovered that data loading would be worryingly slow (at least on Android) - I asked about it in another question: Firebase Android: slow "join" using many listeners, seems to contradict documentation.
So this question is about another approach: per-user copies of all the Widgets that a user has, as used in the Firebase+Udacity tutorial "ShoppingList++" ( https://www.firebase.com/blog/2015-12-07-udacity-course-firebase-essentials.html ).
Their structure looks like this:
In particular this part - userLists:
"userLists" : {
"abc#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
}
},
"xyz#gmail,com" : {
"-KBt0MDWbvXFwNvZJXTj" : {
"listName" : "Test List 1 Rename 2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1456950573084
},
"timestampLastChanged" : {
"timestamp" : 1457044229747
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044229747
}
},
"-KByb0imU7hFzWTK4eoM" : {
"listName" : "List2",
"owner" : "xyz#gmail,com",
"timestampCreated" : {
"timestamp" : 1457044332539
},
"timestampLastChanged" : {
"timestamp" : 1457044332539
},
"timestampLastChangedReverse" : {
"timestamp" : -1457044332539
}
}
}
},
As you can see, the info for the shopping list "Test List 1 Rename 2" appears in two places (for two users).
And here is the rest for completeness:
{
  "ownerMappings" : {
    "-KBt0MDWbvXFwNvZJXTj" : "xyz@gmail,com",
    "-KByb0imU7hFzWTK4eoM" : "xyz@gmail,com"
  },
  "sharedWith" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "shoppingListItems" : {
    "-KBt0MDWbvXFwNvZJXTj" : {
      "-KBt0heZh-YDWIZNV7xs" : {
        "bought" : false,
        "itemName" : "item",
        "owner" : "xyz@gmail,com"
      }
    }
  },
  "uidMappings" : {
    "google:112894577549422030859" : "abc@gmail,com",
    "google:117151367009479509658" : "xyz@gmail,com"
  },
  "userFriends" : {
    "xyz@gmail,com" : {
      "abc@gmail,com" : {
        "email" : "abc@gmail,com",
        "hasLoggedInWithPassword" : false,
        "name" : "Agenda TEST",
        "timestampJoined" : {
          "timestamp" : 1456950523145
        }
      }
    }
  },
  "users" : {
    "abc@gmail,com" : {
      "email" : "abc@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Agenda TEST",
      "timestampJoined" : {
        "timestamp" : 1456950523145
      }
    },
    "xyz@gmail,com" : {
      "email" : "xyz@gmail,com",
      "hasLoggedInWithPassword" : false,
      "name" : "Karol Depka",
      "timestampJoined" : {
        "timestamp" : 1456952940258
      }
    }
  }
}
However, before I jump into implementing a similar structure in my app, I would like to clarify a few doubts.
Here are my interrelated questions:
In their ShoppingList++ app, they only permit a single "owner" - assigned in the ownerMappings node. Thus no-one else can rename the shopping list. I would like to have multiple "owners"/admins, with equal rights. Would such a keep-copies-per-user structure still work for multiple owner/admin users, without risking data corruption/"desynchronization" or "pranks"?
Could data corruption arise in scenarios like this: User1 goes offline, renames Widget1 to Widget1Prim. While User1 is offline, User2 shares Widget1 to User3 (User3's copy would not yet be aware of the rename). User1 goes online and sends the info about the rename of Widget1 (only to his own and User2's copies, of which the client code was aware at the time of the rename - not updating User3's copy). Now, in a naive implementation, User3 would have the old name, while the others would have the new name. This would probably be rare, but still worrying a bit.
Could/should the data corruption scenario in point "2." be resolved by having some process (e.g. on App Engine) listening to changes and ensuring proper propagation to all user copies?
And/or could/should the data corruption scenario in point "2." be resolved by implementing redundant listening to both sharing changes and renames, and propagating the changes to the per-user copies, to handle this special case? Most of the time this would not be necessary, so it could result in a performance/bandwidth penalty and complicated code. Is it worth it?
Going forward, once we have multiple versions deployed "in the wild", wouldn't it become unwieldy to evolve the schema, given how much of the data-handling responsibility lies with the code in the clients? For example, if we add a new relationship that the older client versions don't yet know about, doesn't it seem fragile? Are we then back to the server-side syncer-ensurer process on e.g. App Engine (described in question 3)?
Would it seem like a good idea to also have a "master reference copy" of every Widget / shopping list, so as to provide a good "source of truth" for any syncer-ensurer type of operations that update the per-user copies?
Any special considerations/traps/blockers regarding rules.json / rules.bolt permissions for data structured in such a (redundant) way?
PS: I know about atomic multi-path updates via updateChildren() - I would definitely use them (see the sketch after this question).
Any other hints/observations welcome. TIA.
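For concreteness, here is a hedged sketch of the rename fan-out from point 2, written against the Firebase web API (updateChildren() being the Android equivalent). The widget-users index it reads is hypothetical: a node listing, per Widget, the users it is shared with:
function renameWidget(ref, widgetId, newName) {
  // Hypothetical index: widget-users/<widgetId>/<uid> = true for every user
  // the widget is currently shared with.
  ref.child("widget-users/" + widgetId).once("value", function (snap) {
    var updates = {};
    Object.keys(snap.val() || {}).forEach(function (uid) {
      updates["userWidgets/" + uid + "/" + widgetId + "/name"] = newName;
    });
    ref.update(updates); // atomic: either every known copy is renamed or none is
  });
}
Note that this is exactly where the scenario in point 2 bites: the update map is built from whatever share list the client knew at rename time.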
I suggest having only one copy of a widget for the entire system. It would have an origin user ID and a set of users that have access to it. The widget tree can hold user permissions and change history. Any time a change is made, a branch is added to the tree. Branches can then be "promoted" to "master", kind of like Git. This guarantees data integrity, because past versions are never changed or deleted. It would also simplify your fetches... I think :)
{
  users: {
    bob: {
      widgets: {
        xxx: {
          widgetKey: xyz,
          permissions: *,
          lastEdit: ...
        }
      }
    },
    ...
  },
  widgets: {
    xyz: {
      masterKey: abc,
      data: {...},
      owner: bob
    },
    ...
  },
  widgetHistory: {
    xyz: {
      v1: {
        data: {...}
      },
      v2: {...},
      v3: {...}
    },
    123: {
      ...
    },
    ...
  }
}
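A hedged sketch of how an edit might be applied under this scheme, using the modern Firebase web SDK; the paths and field names follow the structure above and are illustrative:
function editWidget(ref, widgetId, newData, editorUid) {
  // Append an immutable version to the history, then promote it to the
  // "master" copy in a single multi-path update; past versions are never mutated.
  var versionId = ref.child("widgetHistory/" + widgetId).push().key;
  var updates = {};
  updates["widgetHistory/" + widgetId + "/" + versionId] = { data: newData, editor: editorUid };
  updates["widgets/" + widgetId + "/data"] = newData;
  updates["widgets/" + widgetId + "/lastEdit"] = versionId;
  ref.update(updates);
}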

Changing CSS according to data

I'm building a widget with dashing.io, and I would like to style the Jenkins jobs according to the color I receive in my JSON file (which I get from the Jenkins API).
I.e.: the job is complete, I get the color value "blue" from my JSON file, and I want the text to be blue on the Jenkins widget in my dashboard.
Problem: I don't really know how to get the data from my JSON file into my CoffeeScript, nor how to change the CSS.
My JSON file goes like this:
{
  "assignedLabels" : [
    {
    }
  ],
  "mode" : "NORMAL",
  "nodeDescription" : "blabla",
  "nodeName" : "",
  "numExecutors" : blabla,
  "description" : blabla,
  "jobs" : [
    {
      "name" : "JOB_NAME",
      "url" : "MY_JOB_URL",
      "color" : "blue"
    }
  ]
}
Here is my widget code:
require 'net/http'
require 'json'
require 'time'

JENKINS_URI = URI.parse("jenkins_url")
JENKINS_AUTH = {
  'name' => 'user',
  'password' => 'pwd'
}

def get_json_for_master_jenkins()
  http = Net::HTTP.new(JENKINS_URI.host, JENKINS_URI.port)
  request = Net::HTTP::Get.new("/jenkins/api/json?pretty=true")
  if JENKINS_AUTH['name']
    request.basic_auth(JENKINS_AUTH['name'], JENKINS_AUTH['password'])
  end
  response = http.request(request)
  JSON.parse(response.body)
end

# the key of this mapping must be a unique identifier for your job, the according value must be the name that is specified in jenkins
SCHEDULER.every '100s', :first_in => 0 do |job|
  thom = get_json_for_master_jenkins()
  send_event('master_jobs',
    jobs: thom['jobs'][0..4],
    # thom['jobs'] is an array of job hashes, so collect each job's color
    # instead of indexing the array with a string key:
    colors: thom['jobs'].map { |j| j['color'] }
  )
end
Could you guys help me? I'm really new to this, so please keep it simple.
OK, I think I found the answer.
Dashing is built on batman.js, and there is a way to interact with the DOM.
I use the provided batman.js attribute data-bind-class in my widget HTML: it adds the value of the bound property (here the job's color, e.g. "blue") as a CSS class on the element. Each color then just needs a rule in the widget's CSS:
.blue {
  /* CSS stuff goes here */
}
