How can I do a nested for loop in Jsonnet and access the variables? - prometheus-operator

How can I access t, which is the thing I get from the outer loop?
{
  ['applications-' + name + t]: kp.applications[name][t]
  for name in [t for t in std.objectFields(kp.applications)]
}
My object looks something like this:
applications: {
  'alertmanager-bot': {
    deployment: {...},
    service: {...},
  },
  'go-import-redirector': {
    deployment: {...},
    service: {...},
  },
}
I want to loop over all the deployments/services and put them in separate keys in order to get them into separate files.

Got it working with:
{
  ['applications-' + appname + '-' + kind]: kp.applications[appname][kind]
  for appname in std.objectFields(kp.applications)
  for kind in std.objectFields(kp.applications[appname])
}
I had misunderstood the order of the for loops.
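With the sample object above, this comprehension should produce one entry per deployment/service, with keys roughly like the following (the {...} values are just the original objects, unchanged):
{
  'applications-alertmanager-bot-deployment': {...},
  'applications-alertmanager-bot-service': {...},
  'applications-go-import-redirector-deployment': {...},
  'applications-go-import-redirector-service': {...},
}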

Related

AppSync BatchDeleteItem not executing properly

I'm working on a React Native application with AppSync, and the following is the part of my schema relevant to the problem:
type JoineeDeletedConnection {
  items: [Joinee]
  nextToken: String
}

type Mutation {
  deleteJoinee(ids: [ID!]): [Joinee]
}
In the request mapping template of the deleteJoinee resolver, I have the following (following the tutorial at https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html):
#set($ids = [])
#foreach($id in ${ctx.args.ids})
  #set($map = {})
  $util.qr($map.put("id", $util.dynamodb.toString($id)))
  $util.qr($ids.add($map))
#end
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "JoineesTable": $util.toJson($ids)
  }
}
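For reference, with two example IDs this template should evaluate to roughly the following request (the exact attribute shape produced by $util.dynamodb.toString is my assumption):
{
  "version" : "2018-05-29",
  "operation" : "BatchDeleteItem",
  "tables" : {
    "JoineesTable": [
      { "id": { "S": "xxxx" } },
      { "id": { "S": "xxxx" } }
    ]
  }
}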
...and in the response mapping template of the resolver:
$util.toJson($ctx.result.data.JoineesTable)
The problem is that when I run the mutation, I get an empty result and nothing is deleted from the database either:
// calling the mutation
mutation DeleteJoinee {
  deleteJoinee(ids: ["xxxx", "xxxx"]) {
    id
  }
}

// returns
{
  "data": {
    "deleteJoinee": [
      null
    ]
  }
}
I was finally able to solve this puzzle, thanks to the answer mentioned here, which pointed me in the right direction.
I noticed that JoineesTable did have a trusted entity/role in the IAM 'Roles' section, yet it wasn't working for some reason. Looking into this more, I noticed that the existing policy had the following actions by default:
"Action": [
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:UpdateItem"
]
Once I added the following two actions to the list, things started working:
"dynamodb:BatchWriteItem",
"dynamodb:BatchGetItem"
Thanks to @Vasileios Lekakis and @Ionut Trestian on this AppSync quest.

JSONPath not working properly with Athena

I have a lambda function that converts my logs to this format:
{
  "events": [
    {
      "field1": "value",
      "field2": "value",
      "field3": "value"
    }, (...)
  ]
}
When I query it on S3, I get it in this format:
[
  {
    "events": [
      { (...) }
    ]
  }
]
And I'm trying to run a custom classifier for it, because the data I want is inside the objects held by 'events', not in events itself.
So I started with the simplest path I could think of that worked in my tests (https://jsonpath.curiousconcept.com/):
$.events[*]
And sure, it worked in the tests, but when I run a crawler against the file, the table it creates includes only an events field with a struct inside it.
So I tried a bunch of other paths:
$[*].events
$[*].['events']
$[*].['events'].[*]
$.[*].events[*]
$.events[*].[*]
Some of these do not even make sense, and absolutely every one of them got me a schema with an events field marked as array.
Can anyone point me to a better direction to handle this issue?

Error: Unrecognized operator: $nearSphere in meteor js

I have simple code in Meteor JS to find nearby garages within 10 kilometres. The query works fine in my MongoDB database if I run it manually in Robomongo, but when I run it in my routes it throws an error like this:
Error: Unrecognized operator: $nearSphere
I saw some blogs saying you need to call a server-side method for this, so I used the code below to define a server-side route.
Router.route('/search/:name', {
  name: 'searchlist',
  data: function() {
    var searchedParams = this.params.name.split('-');
    var lat = searchedParams.pop();
    var lng = searchedParams.pop(1);
    return {
      searchValue: Centers.find({
        coordinates: {
          $nearSphere: {
            $geometry: { type: "Point", coordinates: [lng, lat] },
            $maxDistance: 10000
          }
        }
      })
    };
  }
}, { where: "server" });
If anyone has an idea, please help.
You're mixing the definitions for client-side and server-side routes.
A server-side route should look like this:
Router.route('/search/:name', function(...){...}, { where: 'server' });
A client-side route could look like this:
Router.route('/search/:name', { ... });
Thus, your route is actually a client-side route, and Minimongo doesn't support the $nearSphere operator, as noted here: https://github.com/meteor/meteor/blob/devel/packages/minimongo/NOTES
First, look at Styx's answer and make the route a client route by eliminating this part:
', { where: "server" }'
Now that the route is available to the client, let's fix the $nearSphere issue by changing the operator to $near. Use the following code:
Centers.find({
  coordinates: {
    $near: {
      $geometry: {
        type: "Point",
        coordinates: [lng, lat]
      }
    }
  }
});
Give it a try and let me know if it works.
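Putting the two answers together, the client-side version of the route might look roughly like this. This is only a sketch based on the code in the question; one extra assumption here is parsing the route parameters with parseFloat, since they arrive as strings and geo queries expect numbers:
Router.route('/search/:name', {
  name: 'searchlist',
  data: function() {
    var searchedParams = this.params.name.split('-');
    // Route params are strings, so convert them to numbers for the geo query
    var lat = parseFloat(searchedParams.pop());
    var lng = parseFloat(searchedParams.pop());
    return {
      searchValue: Centers.find({
        coordinates: {
          $near: {
            $geometry: { type: "Point", coordinates: [lng, lat] }
          }
        }
      })
    };
  }
});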

Storing User Data Flattened Security

I'd like to create an app that has a lot of user data. Let's say that each user tracks their own time per task. If I were to store it flattened it would look like this:
{
  users: {
    USER_ID_1: {
      name: 'Mat',
      tasks: {
        TASK_ID_1: true,
        TASK_ID_2: true,
        ...
      }
    },
  },
  tasks: {
    TASK_ID_1: {
      start: 0,
      end: 1
    },
    TASK_ID_2: {
      start: 1,
      end: 2
    }
  }
}
Now I'd like to query and get all the task information for the user. Right now the data is small. Their guide (https://www.firebase.com/docs/web/guide/structuring-data.html) says, near the end, "... Until we get into tens of thousands of records...", and then doesn't explain how to handle that.
So my question is as follows: I know we can't do filtering via security rules, but can I use security rules to limit what people have access to, and then base the search on the user ID? My structure would then turn into this:
{
  users: {
    USER_ID_1: {
      name: 'Mat'
    }
  },
  tasks: {
    TASK_ID_1: {
      user: USER_ID_1,
      start: 0,
      end: 1
    },
    TASK_ID_2: {
      user: USER_ID_1,
      start: 1,
      end: 2
    },
    ...
  }
}
Then I would set up my security rules to only allow each task to be accessed by the user who created it, and my ref query would look like this:
var ref = new Firebase("https://MY_FIREBASE.firebaseio.com/");
$scope.tasks = $firebaseArray(ref.child('tasks/')
.orderByChild('user')
.startAt('USER_ID_1')
.endAt('USER_ID_1'));
Is that how I should structure it? My query works, but I'm unsure if it'll work once I introduce security where one user can't see another user's tasks.
You've already read that security rules can not be used to filter data. Not even creative data modeling can change that. :-)
To properly secure access to your tasks you'll need something like:
"tasks": {
"$taskid": {
".read": "auth.uid === data.child(user).val()"
}
}
With this each user can only read their own tasks.
But with these rules, your query won't work. At its core, your query is reading from tasks here:
ref.child('tasks/')...some-filtering...on(...
And since your user does not have read permission on tasks, this read operation fails.
If you gave the user read permission on tasks, the read and the query would work, but the user could then also read all the tasks that you don't want to give them access to.
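To make the point concrete, here is a minimal sketch of the difference under the rules above, using the same legacy SDK and query style as the question (task and user IDs are placeholders):
var ref = new Firebase("https://MY_FIREBASE.firebaseio.com/");

// Reading a single task that the user owns is allowed: the .read rule
// on $taskid evaluates against that task's own data.
ref.child('tasks/TASK_ID_1').on('value', function(snap) {
  console.log(snap.val()); // works for the owning user
});

// But a query that reads from /tasks itself is rejected, because the user
// has no read permission on the /tasks node as a whole.
ref.child('tasks/')
  .orderByChild('user')
  .startAt('USER_ID_1')
  .endAt('USER_ID_1')
  .on('value', function(snap) {
    // never fires
  }, function(error) {
    console.log(error); // permission denied
  });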

How to list files in folder

How can I list all files inside a folder with Meteor? I have FS.Collection and cfs:filesystem installed in my app. I didn't find it in the docs.
Another way of doing this is by adding the shelljs npm module.
To add npm modules see: https://github.com/meteorhacks/npm
Then you just need to do something like:
var shell = Meteor.npmRequire('shelljs');
var list = shell.ls('/yourfolder');
Shelljs docs:
https://github.com/arturadib/shelljs
The short answer is that FS.Collection creates a Mongo collection that you can treat like any other, i.e., you can list entries using find().
The long answer...
Using cfs:filesystem, you can create a Mongo collection that mirrors a given folder on the server, like so:
// in lib/files.js
files = new FS.Collection("my_files", {
  // creates a ~/test folder in the home directory of your server and puts files there on insert
  stores: [new FS.Store.FileSystem("my_files", { path: "~/test" })]
});
You can then access this collection on the client to upload files to the server's ~/test/ directory:
files.insert(new File(['Test file contents'], 'my_test_file'));
And then you can list the files on the server like so:
files.find(); // returns:
[ { createdByTransform: true,
    _id: 't6NoXZZdx6hmJDEQh',
    original:
     { name: 'my_test_file',
       updatedAt: (Date),
       size: (N),
       type: '' },
    uploadedAt: (Date),
    copies: { my_files: [Object] },
    collectionName: 'my_files' } ]
The copies object appears to contain the actual names of the files created, e.g.,
files.findOne().copies
{
"my_files" : {
"name" : "testy1",
"type" : "",
"size" : 6,
"key" : "my_files-t6NoXZZdx6hmJDEQh-my_test_file", // This is the name of the file on the server at ~/test/
"updatedAt" : ISODate("2015-03-29T16:53:33Z"),
"createdAt" : ISODate("2015-03-29T16:53:33Z")
}
}
The problem with this approach is that it only tracks the changes made through the Collection; if you add something manually to the ~/test directory, it won't get mirrored into the Collection. For instance, if on the server I run something like...
mkfile 1k ~/test/my_files-aaaaaaaaaa-manually-created
Then, when I look for it in the collection, it won't be there:
files.findOne({"original.name": {$regex: ".*manually.*"}}) // returns undefined
If you just want a straightforward list of files on the server, you might consider just running an ls. From https://gentlenode.com/journal/meteor-14-execute-a-unix-command/33 you can execute any arbitrary UNIX command using Node's child_process.exec(). You can access the app root directory with process.env.PWD (from this question). So in the end if you wanted to list all the files in your public directory, for instance, you might do something like this:
exec = Npm.require('child_process').exec;

console.log("This is the root dir:");
console.log(process.env.PWD); // running from localhost returns: /Users/me/meteor_apps/test

child = exec('ls -la ' + process.env.PWD + '/public', function(error, stdout, stderr) {
  // Fill in this callback with whatever you actually want to do with the information
  console.log('stdout: ' + stdout);
  console.log('stderr: ' + stderr);
  if (error !== null) {
    console.log('exec error: ' + error);
  }
});
This will have to run on the server, so if you want the information on the client, you'll have to put it in a method. This is also pretty insecure, depending on how you structure it, so you'd want to think about how to stop people from listing all the files and folders on your server, or worse -- running arbitrary execs.
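As a rough sketch of what such a method could look like (the method name is made up, and it uses Node's fs module instead of exec just to keep the server side synchronous and simple):
// server only
var fs = Npm.require('fs');

Meteor.methods({
  listPublicFiles: function() {
    // Returns the file names under /public; add your own access checks here.
    return fs.readdirSync(process.env.PWD + '/public');
  }
});

// on the client:
Meteor.call('listPublicFiles', function(error, files) {
  console.log(files);
});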
Which method you choose probably depends on what you're really trying to accomplish.
