How should a Rebol-structured data file (which contains no code) be written and read? - reflection

If you build up a block structure, convert it to a string with MOLD, and write it to a file like this:
>> write %datafile.dat mold [
    [{Release} 12-Dec-2012]
    [{Conference} [12-Jul-2013 .. 14-Jul-2013]]
]
You can LOAD it back in later. But what about headers? If a file contains code, it is supposed to start with a header like:
rebol [
    title: "Local Area Defringer"
    date: 1-Jun-1957
    file: %defringe.r
    purpose: {
        Stabilize the wide area ignition transcriber
        using a double ganged defringing algorithm.
    }
]
If you are just writing out data and reading it back in, are you expected to have a rebol [] header, and extend it with any properties you want to add? Should you come up with your own myformat [] header concept with your own properties?
Also, given that LOAD does binding, does it make sense to use it for data or is there a different operation?

Rebol data doesn't have to have a header, but it is best practice to include one (even if it's just data).
Some notes:
SAVE is your best bet for serializing to a file! or port!, and it has a mechanism for including a header.
MOLD and SAVE both have an /ALL refinement that corresponds to LOAD (without /ALL, some values from MOLD and SAVE cannot be reliably recovered, including object!, logic! and none! values).
LOAD discards the header, though you can retrieve it using the /HEADER refinement.
Putting this together:
save/all/header %datafile.dat reduce [next "some" 'data] [
    title: "Some Data"
]
header: take data: load/header %datafile.dat
To use a header other than Rebol [], you'd need to devise a separate loader/saver.

For the case of reading, CONSTRUCT works very well alongside LOAD to prevent evaluation (of code, as opposed to data):
prefs: construct/with load %options.reb default-prefs
It is similar to CONTEXT:

obj: [
    name: "Fred"
    age: 27
    city: "Ukiah"
]
obj-context: context obj
obj-construct: construct obj
In this case, the results are the same:
>> obj-context = obj-construct
== true
But it is different when it comes to evaluating code:
obj-eval: [
    name: uppercase "Fred"
    age: 20 + 7
    time: now/time
]
obj-eval-context: context obj-eval
obj-eval-construct: construct obj-eval
This time the results differ:
>> obj-eval-context = obj-eval-construct
== false
>> ?? obj-eval-construct
obj-eval-construct: make object! [
    name: 'uppercase
    age: 20
    time: now/time
]
Aside:
This is the point where I realized the following code wasn't behaving as I expected:
obj-eval: [
    title: uppercase "Fred"
    age: 20 + 7
    city: "Ukiah"
    time: now/time
]
gives in Red (and, by extension, Rebol 2):
>> obj-eval-construct: construct obj-eval
== make object! [
    title: 'uppercase
    age: 20
    city: "Ukiah"
    time: now/time
]
lit-word! and lit-path! are handled differently.
TODO: question
CONSTRUCT also has a useful /with refinement, which can be used to supply defaults, similar to MAKE.

Related

Merging yaml documents together based on specific key in slice of maps in golang

Given the example below, I want to overlay B onto A based on the key type. So, if the same type exists in both A and B, then B will overwrite A; otherwise it just gets appended. If something exists in A that doesn't exist in B, then it also gets appended to the slice.
A

service: myservice
contacts:
  - type: slack
    contact: https://slack.com/SDGSDF
  - type: email
    contact: email#email.com

B

service: myservice
contacts:
  - type: slack
    contact: https://slack.com/FVJTYSAFA

Overlay B on A

service: myservice
contacts:
  - type: slack
    contact: https://slack.com/FVJTYSAFA # Slack gets updated
  - type: email # email gets appended to slice
    contact: email#email.com
I have tried a few things, and it seems like there could be multiple ways to solve this problem:
A custom Unmarshaler (seems like this could work), where I would unmarshal the YAML and then unmarshal it again, merging on the second pass. Not entirely sure if this is possible though.
Converting it to a map[string]interface{} and trying to just process it that way.
Unmarshaling each YAML document into a struct and then merging the structs together with reflect.
Something I don't know yet?
I dug into unmarshaling both into structs and then merging those structs. I got close, but got lost using reflect and figuring out how to do what I want to do.
When I try using something like map[string]interface{}, I'm not sure how to dig deeper into the YAML object. For instance, I know I can do something like
var y1 = []byte(`
service: myservice
contacts:
  - type: "slack"
    contact: "https://slack.com/SDGSDF"
  - type: "email"
    contact: "email#email.com"
`)

func main() {
    data := map[string]interface{}{}
    err := yaml.Unmarshal(y1, &data)
    if err != nil {
        log.Fatal(err)
    }
}
But then how do I loop over contacts and compare using the map[string]interface{} approach?
I am trying to learn Go, so sorry if I don't phrase this correctly or am missing something. Thanks for the help!
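One way to finish the map[string]interface{} approach is to index each document's contacts by their type value and let B's entries win. Below is a minimal sketch under two assumptions: it uses gopkg.in/yaml.v3 (which decodes nested mappings as map[string]interface{}; with yaml.v2 you would get map[interface{}]interface{} and need different type assertions), and it treats type as the only merge key.

package main

import (
    "fmt"
    "log"

    "gopkg.in/yaml.v3" // assumption: yaml.v3, so nested mappings decode as map[string]interface{}
)

var docA = []byte(`
service: myservice
contacts:
  - type: slack
    contact: https://slack.com/SDGSDF
  - type: email
    contact: email#email.com
`)

var docB = []byte(`
service: myservice
contacts:
  - type: slack
    contact: https://slack.com/FVJTYSAFA
`)

// mergeContacts overlays b's contacts onto a's, keyed by the "type" field.
func mergeContacts(a, b map[string]interface{}) {
    var order []string                 // first-seen order of type keys
    byType := map[string]interface{}{} // type -> contact entry
    for _, doc := range []map[string]interface{}{a, b} {
        list, _ := doc["contacts"].([]interface{})
        for _, item := range list {
            entry, ok := item.(map[string]interface{})
            if !ok {
                continue
            }
            t, _ := entry["type"].(string)
            if _, seen := byType[t]; !seen {
                order = append(order, t)
            }
            byType[t] = entry // b is processed last, so its entries overwrite a's
        }
    }
    merged := make([]interface{}, 0, len(order))
    for _, t := range order {
        merged = append(merged, byType[t])
    }
    a["contacts"] = merged
}

func main() {
    a := map[string]interface{}{}
    b := map[string]interface{}{}
    if err := yaml.Unmarshal(docA, &a); err != nil {
        log.Fatal(err)
    }
    if err := yaml.Unmarshal(docB, &b); err != nil {
        log.Fatal(err)
    }
    mergeContacts(a, b)
    out, err := yaml.Marshal(a)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(out))
}

The same keyed-merge idea also works if you unmarshal into concrete structs with a Contacts slice; reflect is only needed if you want a fully generic deep merge.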

Karate: Using data-driven embedded template approach for API testing

I want to write data-driven tests, passing dynamic values read from an external file (CSV).
I'm able to pass dynamic values from the CSV for simple strings (account number & affiliate id below). But, using embedded expressions, how can I pass dynamic values from the CSV file for the "DealerReportFormats" JSON array below?
Any help is highly-appreciated!!
Scenario Outline: Dealer dynamic requests
Given path '/dealer-reports/retrieval'
And request read('../DealerTemplate.json')
When method POST
Then status 200
Examples:
| read('../DealerData.csv') |
DealerTemplate.json is below
{
    "DealerId": "FIXED",
    "DealerName": "FIXED",
    "DealerType": "FIXED",
    "DealerCredentials": {
        "accountNumber": "#(DealerCredentials_AccountNumber)",
        "affiliateId": "#(DealerCredentials_AffiliateId)"
    },
    "DealerReportFormats": [
        {
            "name": "SalesReport",
            "format": "xml"
        },
        {
            "name": "CustomerReport",
            "format": "txt"
        }
    ]
}
DealerData.csv:
DealerCredentials_AccountNumber,DealerCredentials_AffiliateId
testaccount1,123
testaccount2,12345
testaccount3,123456
CSV is only for "flat" structures, so trying to mix that with JSON is too ambitious in my honest opinion. Please look for another framework if needed :)
That said I see 2 options:
a) use proper quoting and escaping in the CSV
b) refer to JSON files
Here is an example:
Scenario Outline:
* json foo = foo
* print foo
Examples:
| read('test.csv') |
And test.csv is:
foo,bar
"{ a: 'a1', b: 'b1' }",test1
"{ a: 'a2', b: 'b2' }",test2
I leave it as an exercise to you if you want to escape double-quotes. It is possible.
Option (b) is that you can refer to stand-alone JSON files and read them:
foo,bar
j1.json,test1
j2.json,test2
And you can do * def foo = read(foo) in your feature.

JSON Path not working properly with athena

I have a lambda function that converts my logs to this format:
{
    "events": [
        {
            "field1": "value",
            "field2": "value",
            "field3": "value"
        }, (...)
    ]
}
When I query it on S3, I get it in this format:
[
    {
        "events": [
            { (...) }
        ]
    }
]
And I'm trying to run a custom classifier for it, because the data I want is inside the objects held by 'events', not in events itself.
So I started with the simplest path I could think of that worked in my tests (https://jsonpath.curiousconcept.com/)
$.events[*]
And sure, it worked in the tests, but when I ran a crawler against the file, the table created includes only an events field with a struct inside it.
So I tried a bunch of other paths:
$[*].events
$[*].['events']
$[*].['events'].[*]
$.[*].events[*]
$.events[*].[*]
Some of these don't even make sense, and absolutely every one of them got me a schema with an events field marked as array.
Can anyone point me to a better direction to handle this issue?

Firebase Firestore comment tree architecture

I'm trying to implement a Reddit/HackerNews-style tree of comments as part of a project, and am trying out Firestore as a database solution. However, reading through the docs, I'm unsure as to the correct design. In a SQL database I would use numeric keys like:
0
1.0
1.1
1.1.1
0.0
to represent my tree. However, numeric keys like that seem to be a Firebase antipattern. The other route is using an actual tree in the json where a post is represented like:
{
    uid: 'A0000',
    content: 'foo',
    children: [
        {uid: ..., content: ..., children: []}
    ]
}
but supposedly deep trees are bad in Firestore. As I understand it the reason deep trees are bad is that you have to fetch the whole thing, but in my case I'm not sure if that's a problem. A client fetching a post would fetch the root content node and the first 20 or so child trees. That could be a pretty big fetch, but not insanely so.
Does anyone know of a good standard way to implement this kind of structure?
Extra: Here is the more verbose expression of what the structure should look like once the client processes it.
{
    uid: 0,
    title: 'Check out this cat!',
    body: 'It\'s pretty cute! This **text** is [markdown](link), so it can have ' +
          'links and *stuff*. Yay!',
    poster: {
        uid: 0,
        name: 'VivaLaPanda',
        aviUrl: 'badlink',
    },
    posted: '2018-03-28',
    children: [{
        uid: 0,
        body: 'This is a comment, it\'s angry!',
        poster: {
            uid: 0,
            name: 'VivaLaPanda',
            aviUrl: 'badlink',
        },
        posted: '2018-03-20',
        children: [{
            uid: 0,
            body: 'This is a comment, it\'s neutral!',
            poster: {
                uid: 0,
                name: 'Steve',
                aviUrl: 'badlink',
            },
            posted: '2018-03-20',
            children: [{
                uid: 0,
                body: 'This is a comment, it\'s neutral!',
                poster: {
                    uid: 0,
                    name: 'Craig',
                    aviUrl: 'badlink',
                },
                posted: '2018-04-10',
                children: []
            }, ]
        }, ]
    },
    {
        uid: 0,
        body: 'This is a comment, it\'s happy!',
        poster: {
            uid: 0,
            name: 'Craig',
            aviUrl: 'badlink',
        },
        posted: '2018-03-28',
        children: []
    },
    ]
};
Edit:
While I've marked this as answered because there is an answer, I'm still really interested in seeing something more elegant/efficient.
Edit2:
For posterity: I ended up deciding that any Firebase solution was hopelessly convoluted and just used DGraph for the data, with Firebase sitting in front for Auth.
This is tough since the structure you have is naturally recursive. The obvious approach is that each comment is a new document in a collection, and each reply is likewise a single document in the same collection.
Each comment as a new document could work something like this. Each comment has a "postId" attribute which dictates which post it belongs to. Some comments, those which are replies to other comments, have a "replyToId". These two attributes in conjunction allow your client app to:
1. Get the top-level comments (look for comments with the correct postId and which don't have a replyToId). Fetching only top-level comments allows you to limit the size of payloads if you need to worry about that in the future.
2. Get all comments (look for comments with the correct postId only). If you don't care about payload sizes you can get everything and figure out the tree structure on the client.
3. Get replies to a particular comment if you want "see replies" YouTube-style comment interaction (look for comments which have a particular replyToId). This works well in conjunction with 1. for limiting payload sizes.
But the logic here is obviously complex.
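As a rough illustration of query 1 above, here is a sketch using the Go Firestore client (cloud.google.com/go/firestore). The collection and field names (comments, postId, replyToId) are assumptions taken from the scheme described here, and it assumes top-level comments store replyToId as an empty string, since Firestore cannot query for a field that is absent.

package main

import (
    "context"
    "fmt"
    "log"

    "cloud.google.com/go/firestore"
    "google.golang.org/api/iterator"
)

// topLevelComments fetches comments for a post that are not replies
// (query 1 in the list above).
func topLevelComments(ctx context.Context, client *firestore.Client, postID string) error {
    iter := client.Collection("comments").
        Where("postId", "==", postID).
        Where("replyToId", "==", ""). // assumption: top-level comments store an empty replyToId
        Documents(ctx)
    defer iter.Stop()
    for {
        doc, err := iter.Next()
        if err == iterator.Done {
            return nil
        }
        if err != nil {
            return err
        }
        fmt.Println(doc.Ref.ID, doc.Data()["body"])
    }
}

func main() {
    ctx := context.Background()
    client, err := firestore.NewClient(ctx, "my-project-id") // hypothetical project id
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    if err := topLevelComments(ctx, client, "post123"); err != nil {
        log.Fatal(err)
    }
}

Query 2 is the same minus the replyToId filter, and query 3 swaps the empty string for the parent comment's id.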
Your children approach could be really messy if there are a lot of people commenting on each other. A nicer approach would be the following structure for every comment:
// single comment
postUid   // <- random id generated by Firebase
{
    postedBy: userUid
    postedTime: timestamp
    postIsChildOfUid: postUid   // <- reference to another comment (omitted if this comment isn't a reply, i.e. a top-level comment)
}
This doesn't even require nesting at all :). You can now easily generate a comment tree with this approach, but that has to happen client-side. It should be easy, though!
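To make that client-side step concrete, here is a minimal sketch (in Go, purely for illustration) of assembling the nested tree from the flat comment documents. The Comment struct and its field names are assumptions standing in for whatever the documents actually contain (replyToId or postIsChildOfUid).

package main

import "fmt"

// Comment is one flat document as returned by a query for a post's comments.
// Field names here are illustrative, not a fixed Firestore schema.
type Comment struct {
    ID        string
    ReplyToID string // empty for top-level comments
    Body      string
    Children  []*Comment
}

// buildTree links every comment under its parent and returns the top-level comments.
func buildTree(flat []*Comment) []*Comment {
    byID := make(map[string]*Comment, len(flat))
    for _, c := range flat {
        byID[c.ID] = c
    }
    var roots []*Comment
    for _, c := range flat {
        if parent, ok := byID[c.ReplyToID]; ok {
            parent.Children = append(parent.Children, c)
        } else {
            roots = append(roots, c) // no known parent, so it is a top-level comment
        }
    }
    return roots
}

func main() {
    flat := []*Comment{
        {ID: "a", Body: "top-level comment"},
        {ID: "b", ReplyToID: "a", Body: "reply to a"},
        {ID: "c", ReplyToID: "b", Body: "reply to b"},
        {ID: "d", Body: "another top-level comment"},
    }
    for _, root := range buildTree(flat) {
        fmt.Printf("%s (%d direct replies)\n", root.Body, len(root.Children))
    }
}

One query per post returns the flat slice, and the nesting is reconstructed in memory; this is the "figure out the tree structure on the client" step mentioned above.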

How to dynamically change autoscaling instance names

I have created a heat stack which autoscales depending on CPU use. Each time a new instance is created, it is given a random name.
Is there a way to set a specific name with a counter added to the end of it so that each time a new instance is created it increases by 1?
E.g. Myinstance1, Myinstance2, Myinstance3 ... MyinstanceX
Thanks in advance!
In OpenStack Heat, stack resource names are derived from the stack_name and suffixed with a short_id. That's why every autoscaled instance gets a name like that. This is how the implementation is done across the Heat project, and it is not possible to define an instance name suffixed with an incremental number.
If I understood you correctly, and if you are doing object-oriented programming:
you are looking for a design pattern called Factory; or, more simply, create a static member that is incremented in the constructor and appended to the name member of the created instance.
You can set the custom names by going to your Auto Scaling Groups and Tags tab, and then adding a tag with the key of "Name" and the value of "MyInstance". Numbering does not make that much sense since your instances are going to be launched and terminated constantly.
Update at 21/09/2020:
It seems that creating an incremental number is impossible so far, but I found a workaround to achieve my goal, so I'm posting it here hoping it could give you some ideas.
Mindset:
I tried to find something numeric that is created dynamically along with each scaled-up instance; to me that is OS::Neutron::Port, so I append one part of the IP address to a string to get a distinctive name for each instance.
Solution:
1. Create a port with OS::Neutron::Port.
2. Get its IP address using get_attr.
3. Split it on dots using str_split.
4. Append one part of the address to the string using str_replace.
Sample Code:
lb_server.yaml

resources:
  corey_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network }
      fixed_ips:
        - subnet: { get_param: subnet }

  number:
    type: OS::Heat::Value
    properties:
      value:
        # 192.168.xxx.yyy => [192,168,xxx,yyy]
        str_split: ['.', { get_attr: [corey_port, fixed_ips, 0, ip_address] }]

  server:
    type: OS::Nova::Server
    properties:
      name:
        str_replace:
          template: Corey-%last%
          params:
            #  0    1    2    3
            # [192, 168, xxx, yyy]
            "%last%": { get_attr: [number, value, 3] }
      flavor: { get_param: flavor }
      ......
The outcome should be Corey-168, Corey-50, Corey-254, etc.
