Issue with inferring attributes from master if child entity does not have them in TypeDB - vaticle-typedb

Hi, I'm having the following issue.
In my database there are some entities (requirement) that have a set of attributes (name, req-type, description), and an entity can be "branched" from another.
The idea is to infer that all the attributes that the child entity did not overwrite should be taken from the master entity.
Here is the existing schema.
> database schema test-req-branch
define

description sub attribute,
    value string;
name sub attribute,
    value string;
req-type sub attribute,
    value string;

req-branching sub relation,
    relates branch-item,
    relates branched-from;

requirement sub entity,
    owns description,
    owns name,
    owns req-type,
    plays req-branching:branch-item,
    plays req-branching:branched-from;
And the dummy data:
test-req-branch::data::read> match $e isa entity, has $a;
get $e, $a; group $e;
iid 0x826e80018000000000000001 isa requirement => {
    {
        $a "master" isa name;
        $e iid 0x826e80018000000000000001 isa requirement;
    }
    {
        $a "demo master" isa description;
        $e iid 0x826e80018000000000000001 isa requirement;
    }
    {
        $a "whatever" isa req-type;
        $e iid 0x826e80018000000000000001 isa requirement;
    }
}
iid 0x826e80018000000000000000 isa requirement => {
    {
        $a "other" isa name;
        $e iid 0x826e80018000000000000000 isa requirement;
    }
}
answers: 2, total (with concept details) duration: 12 ms
test-req-branch::data::read> match $r (branch-item: $bi, branched-from: $bf) isa req-branching;
$bf has name $bfn;
$bi has name $bin;
{
    $r iid 0x847080018000000000000000 (branch-item: iid 0x826e80018000000000000000, branched-from: iid 0x826e80018000000000000001) isa req-branching;
    $bf iid 0x826e80018000000000000001 isa requirement;
    $bin "other" isa name;
    $bi iid 0x826e80018000000000000000 isa requirement;
    $bfn "master" isa name;
}
answers: 1, total (with concept details) duration: 8 ms
Since the master requirement has (name, req-type, description)
and the child entity "other" has only (name),
I want to infer req-type and description from the master.
I devised a query that gets the attributes that the child entity does not have:
test-req-branch::data::read> match
$rr (branched-from:$rm, branch-item:$cr) isa req-branching;
$rm has $ma;
$cr has attribute $ca;
$ma isa! $t;
not {
$ca isa! $t;
};
get $ma;
{ $ma "demo master" isa description; }
{ $ma "whatever" isa req-type; }
answers: 2, total (with concept details) duration: 5 ms
And I converted this into a rule:
rule when-not-defined-attributes-derive-from-parent:
when {
    $rr (branched-from: $rm, branch-item: $cr) isa req-branching;
    $rm has $ma;
    $cr has attribute $ca;
    $ma isa! $t;
    not {
        $ca isa! $t;
    };
} then {
    $cr has $ma;
};
But when I queried the attributes, I got the master's name attached to the child as well.
> transaction test-req-branch data read --infer true
test-req-branch::data::read> match $rr (branched-from: $rm, branch-item: $cr) isa req-branching;
$cr has $a;
get $a;
{ $a "other" isa name; }
{ $a "whatever" isa req-type; }
{ $a "demo master" isa description; }
{ $a "master" isa name; }
answers: 4, total (with concept details) duration: 91 ms
What am I doing wrong?
PS: I'm using the typedb docker image:
sudo docker ps
CONTAINER ID   IMAGE                   COMMAND                  CREATED      STATUS      PORTS                                       NAMES
b8e4d1200059   vaticle/typedb:latest   "/opt/typedb-all-lin…"   4 days ago   Up 4 days   0.0.0.0:1729->1729/tcp, :::1729->1729/tcp   typedb
./typedb console --version
2.11.0

That match query is not correct for what you're trying to achieve, which you would have noticed had the branch-item had another attribute. I've added req-type "blah" to the item, and this is what I got:
test-req-branch::data::read> match
$rr (branched-from:$rm, branch-item:$cr) isa req-branching;
$rm has $ma;
$cr has attribute $ca;
$ma isa! $t;
not {
$ca isa! $t;
};
get $ma;
{ $ma "master" isa name; }
{ $ma "demo master" isa description; }
{ $ma "whatever" isa req-type; }
To understand why, let's take a look at what the result of the query looks like before negation.
test-req-branch::data::read> match
$rr (branched-from:$rm, branch-item:$cr) isa req-branching;
$rm has $ma;
$cr has attribute $ca;
get $ma, $ca;
{ $ma "whatever" isa req-type;
  $ca "blah" isa req-type; }
{ $ma "master" isa name;
  $ca "blah" isa req-type; }
{ $ma "demo master" isa description;
  $ca "blah" isa req-type; }
{ $ma "whatever" isa req-type;
  $ca "other" isa name; }
{ $ma "master" isa name;
  $ca "other" isa name; }
{ $ma "demo master" isa description;
  $ca "other" isa name; }
The negation acts as a sort of filter on all these pairs of attributes $ca and $ma. If they have different types, i.e. $ma isa! $t; not { $ca isa! $t; };, then $ma is a valid output. But as soon as the branch-item owns attributes of two or more different types, for every $ma we're guaranteed to find some $ca whose type doesn't match $t! So we'll get all the attributes of the branched-from entity.
A solution would be to tweak the query like so:
test-req-branch::data::read> match
$rr (branched-from:$rm, branch-item:$cr) isa req-branching;
$rm has $ma;
$ma isa! $t;
not {
$cr has $ca;
$ca isa! $t;
};
get $ma;
{ $ma "demo master" isa description; }
Here, the negation acts as a filter on $ma alone. For each $ma, as long as there is at least one $ca with the same type, we reject it.
Unfortunately, this won't work as a rule. The negation as part of the rule essentially boils it down to "if $cr does not have an attribute with the type of $ma, then it has $ma". But once the rule fires, $cr does have an attribute with the type of $ma, namely, $ma itself! This causes a contradiction ("A cycle containing negation(s) that can cause inference contradictions"), and so the rule is invalid.
P.S. Thanks to Krishnan Govindraj for his help with this answer.

Related

Newtonsoft.json: cut JSON according to json path whitelist

Suppose, I have some complex JSON:
{
  "path1": {
    "path1Inner1": {
      "id": "id1"
    },
    "path1Inner2": {
      "id": "id2"
    }
  },
  "path2": {
    "path2Inner1": {
      "id": "id3"
    },
    "path2Inner2": {
      "id": "id4",
      "key": "key4"
    }
  }
}
And there is also some whitelist of json path expressions, for example:
$.path1.path1Inner1
$.path2.path2Inner2.key
I want to leave in the JSON tree only nodes and properties that match the "whitelist", so the result would be:
{
  "path1": {
    "path1Inner1": {
      "id": "id1"
    }
  },
  "path2": {
    "path2Inner2": {
      "key": "key4"
    }
  }
}
I.e. this is not just a selection by JSON path (which is a trivial task); the nodes and properties have to keep their original place in the source JSON tree.
First of all, many thanks for this and this answer. They became the starting point of my analysis of the problem.
Those answers present two different approaches to achieving the goal of "whitelisting" by paths. The first one rebuilds the whitelisted path structure from scratch (i.e. starting from an empty object it creates the needed routes). The implementation parses the string paths and tries to rebuild the tree based on the parsed path. This approach requires handling all possible kinds of paths by hand and is therefore error-prone. You can find some of the mistakes I have found in my comment to the answer.
The second approach is based on the Json.NET object tree API (Parent, Ancestors, Descendants, etc.). The algorithm traverses the tree and removes paths that are not "whitelisted". I find that approach much easier and much less error-prone, as well as supporting a wide range of cases "in one go".
The algorithm I have implemented is in many ways similar to the second answer but, I think, is much easier to implement and understand. I also don't think its performance is any worse.
public static class JsonExtensions
{
    public static TJToken RemoveAllExcept<TJToken>(this TJToken token, IEnumerable<string> paths) where TJToken : JContainer
    {
        HashSet<JToken> nodesToRemove = new(ReferenceEqualityComparer.Instance);
        HashSet<JToken> nodesToKeep = new(ReferenceEqualityComparer.Instance);
        // Walk up from every token matched by a whitelisted path, recording
        // which nodes to keep and which of their siblings to remove.
        foreach (var whitelistedToken in paths.SelectMany(token.SelectTokens))
            TraverseTokenPath(whitelistedToken, nodesToRemove, nodesToKeep);
        // In that case no path from paths has returned any token.
        if (nodesToKeep.Count == 0)
        {
            token.RemoveAll();
            return token;
        }
        // A node can be marked for removal on one path but kept on another; keeping wins.
        nodesToRemove.ExceptWith(nodesToKeep);
        foreach (var notWhitelistedNode in nodesToRemove)
            notWhitelistedNode.Remove();
        return token;
    }

    private static void TraverseTokenPath(JToken value, ISet<JToken> nodesToRemove, ISet<JToken> nodesToKeep)
    {
        JToken? immediateValue = value;
        do
        {
            nodesToKeep.Add(immediateValue);
            if (immediateValue.Parent is JObject or JArray)
            {
                // Mark every other child at this level for removal; ancestors and nodes
                // on other whitelisted paths end up in nodesToKeep and are rescued later.
                foreach (var child in immediateValue.Parent.Children())
                    if (!ReferenceEqualityComparer.Instance.Equals(child, value))
                        nodesToRemove.Add(child);
            }
            immediateValue = immediateValue.Parent;
        } while (immediateValue != null);
    }
}
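Usage is then a one-liner (a sketch; sample.json and the whitelist are taken from the question):
var root = JObject.Parse(File.ReadAllText("sample.json"));
root.RemoveAllExcept(new[] { "$.path1.path1Inner1", "$.path2.path2Inner2.key" });
Console.WriteLine(root);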
To compare the JToken instances it's necessary to use a reference equality comparer, since some JToken types, like JValue, use "by value" comparison. Otherwise, you could get buggy behaviour in some cases.
For example, having source JSON
{
  "path2": {
    "path2Inner2": [
      "id",
      "id"
    ]
  }
}
and a path $..path2Inner2[0] you will get the result JSON
{
  "path2": {
    "path2Inner2": [
      "id",
      "id"
    ]
  }
}
instead of
{
  "path2": {
    "path2Inner2": [
      "id"
    ]
  }
}
As of .NET 5.0, the standard ReferenceEqualityComparer can be used. If you use an earlier version of .NET, you might need to implement it yourself.
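A minimal sketch of such a comparer for pre-.NET 5 targets (the class name JTokenReferenceEqualityComparer is my own invention, not part of the code above):
sealed class JTokenReferenceEqualityComparer : IEqualityComparer<JToken>
{
    public static readonly JTokenReferenceEqualityComparer Instance = new JTokenReferenceEqualityComparer();

    // Identity comparison only; never falls back to JValue's by-value Equals.
    public bool Equals(JToken x, JToken y) => ReferenceEquals(x, y);

    // Identity-based hash, bypassing any overridden GetHashCode.
    public int GetHashCode(JToken obj) => System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(obj);
}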
Let's suppose that you have valid JSON inside a sample.json file:
{
  "path1": {
    "path1Inner1": {
      "id": "id1"
    },
    "path1Inner2": {
      "id": "id2"
    }
  },
  "path2": {
    "path2Inner1": {
      "id": "id3"
    },
    "path2Inner2": {
      "id": "id4",
      "key": "key4"
    }
  }
}
Then you can achieve the desired output with the following program:
static void Main()
{
    var whitelist = new[] { "$.path1.path1Inner1", "$.path2.path2Inner2.key" };
    var rawJson = File.ReadAllText("sample.json");
    var semiParsed = JObject.Parse(rawJson);
    var root = new JObject();
    foreach (var path in whitelist)
    {
        var value = semiParsed.SelectToken(path);
        if (value == null) continue; // no node exists under the path
        var toplevelNode = CreateNode(path, value);
        root.Merge(toplevelNode);
    }
    Console.WriteLine(root);
}
We read the JSON file and semi-parse it into a JObject
We define a root object into which we will merge the processing results
We iterate through the whitelisted JSON paths to process them
We retrieve the actual value of the node (specified by the path) via the SelectToken call
If the path points to a non-existing node, SelectToken returns null
Then we create a new JObject which contains the full hierarchy and the retrieved value
Finally we merge that object into the root
Now let's see the two helper methods
static JObject CreateNode(string path, JToken value)
{
    var entryLevels = path.Split('.').Skip(1).Reverse().ToArray();
    return CreateHierarchy(new Queue<string>(entryLevels), value);
}
We split the path by dots and remove the first element ($)
We reverse the order to be able to put it into a Queue
We want to build up the hierarchy from inside out
Finally we call a recursive function with the queue and the retrieved value
static JObject CreateHierarchy(Queue<string> pathLevels, JToken currentNode)
{
    if (pathLevels.Count == 0) return currentNode as JObject;
    var newNode = new JObject(new JProperty(pathLevels.Dequeue(), currentNode));
    return CreateHierarchy(pathLevels, newNode);
}
We first define the exit condition to make sure that we will not create an infinite recursion
We create a new JObject where we specify the name and value
The output of the program will be the following:
{
  "path1": {
    "path1Inner1": {
      "id": "id1"
    }
  },
  "path2": {
    "path2Inner2": {
      "key": "key4"
    }
  }
}

StackOverflowException running SerializeObject

I want to replicate the TypeNameHandling = TypeNameHandling.Objects setting but have my own property name, rather than $type, and have it find the objects based on the simple class name rather than the assembly-qualified name.
I have a nested object model that I am trying to serialise using the Newtonsoft tool. When I run it I get a System.StackOverflowException and I really can't figure out why... I have reviewed Custom JsonConverter WriteJson Does Not Alter Serialization of Sub-properties and the solution there does not work natively within Newtonsoft and thus ignores all of the Newtonsoft native attributes.
If I pass a single converter (all the objects inherit from IOptions) I get only the top-level object with the required ObjectType:
{
  "ObjectType": "ProcessorOptionsA",
  "ReplayRevisions": true,
  "PrefixProjectToNodes": false,
  "CollapseRevisions": false,
  "WorkItemCreateRetryLimit": 5,
  "Enabled": true,
  "Endpoints": null,
  "ProcessorEnrichers": [
    {
      "Enabled": true
    },
    {
      "Enabled": true
    }
  ]
}
I have 4 interfaces that all have my custom OptionsJsonConvertor set as the converter.
[JsonConverter(typeof(OptionsJsonConvertor<IProcessorEnricherOptions>))]
public interface IProcessorEnricherOptions : IEnricherOptions
{
}

[JsonConverter(typeof(OptionsJsonConvertor<IProcessorOptions>))]
public interface IProcessorOptions : IProcessorConfig, IOptions
{
    List<IEndpointOptions> Endpoints { get; set; }
    List<IProcessorEnricherOptions> ProcessorEnrichers { get; set; }
    IProcessorOptions GetDefault();
}

[JsonConverter(typeof(OptionsJsonConvertor<IEndpointOptions>))]
public interface IEndpointOptions : IOptions
{
    [JsonConverter(typeof(StringEnumConverter))]
    public EndpointDirection Direction { get; set; }
    public List<IEndpointEnricherOptions> EndpointEnrichers { get; set; }
}

[JsonConverter(typeof(OptionsJsonConvertor<IEndpointEnricherOptions>))]
public interface IEndpointEnricherOptions : IEnricherOptions
{
}
The object model does not nest the same object type at any point, but does have List<IEndpointEnricherOptions> contained within List<IEndpointOptions> contained within List<IProcessorOptions>.
"Processors": [
{
"ObjectType": "ProcessorOptionsA",
"Enabled": true,
"ProcessorEnrichers": [
{
"ObjectType": "ProcessorEnricherOptionsA",
"Enabled": true
},
{
"ObjectType": "ProcessorEnricherOptionsB",
"Enabled": true,
}
],
"Endpoints": [
{
"ObjectType": "EndpointOptionsA",
"EndpointEnrichers": [
{
"ObjectType": "EndpointEnricherOptionsA",
"Enabled": true,
}
]
},
{
"ObjectType": "EndpointOptionsA",
"EndpointEnrichers": [
{
"ObjectType": "EndpointEnricherOptionsA",
"Enabled": true,
},
{
"ObjectType": "EndpointEnricherOptionsB",
"Enabled": true,
}
]
}
]
}
]
I want to replicate the TypeNameHandling = TypeNameHandling.Objects setting, but with my own property name and my own type lookup; everything else should stay the same.
Right now I have public class OptionsJsonConvertor<TOptions> : JsonConverter, which works for a single nested list, but not for sub-lists.
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
{
    JToken jt = JToken.FromObject(value);
    if (jt.Type != JTokenType.Object)
    {
        jt.WriteTo(writer);
    }
    else
    {
        JObject o = (JObject)jt;
        o.AddFirst(new JProperty("ObjectType", value.GetType().Name));
        o.WriteTo(writer);
    }
}
If I remove all of the [JsonConverter] class attributes then it executes and adds ObjectType to the IProcessorOptions, but not to any of the subtypes. However, with those attributes, I get a System.StackOverflowException on JToken jt = JToken.FromObject(value);
I had thought that this was due to it being the same object type; however, even with 4 custom JsonConverter classes that don't share a common codebase I get the same exception.
I'm stumped and really don't want to have the ugly "$type" = "MyAssembly.Namespace.Class, Assembly" node!
UPDATE: Even if I only have OptionsJsonConvertor<IProcessorOptions> enabled on the IProcessorOptions interface I get a System.StackOverflowException.
OK, so the resolution for this was somewhat of a compromise. (The overflow happens because JToken.FromObject runs the value back through the serializer, which picks up the attribute-applied converter again and re-enters WriteJson.) We now have no custom JsonConverter types in the system and instead use ISerializationBinder.
public class OptionsSerializationBinder : ISerializationBinder
{
    public void BindToName(Type serializedType, out string assemblyName, out string typeName)
    {
        assemblyName = null;
        typeName = serializedType.Name;
    }

    public Type BindToType(string assemblyName, string typeName)
    {
        Type type = AppDomain.CurrentDomain.GetAssemblies()
            .Where(a => !a.IsDynamic)
            .SelectMany(a => a.GetTypes())
            .FirstOrDefault(t => t.Name.Equals(typeName) || t.FullName.Equals(typeName));
        if (type is null || type.IsAbstract || type.IsInterface)
        {
            Log.Warning("Unable to load Processor: {typename}", typeName);
            throw new InvalidOperationException();
        }
        return type;
    }
}
Setting the assembly name to null is critical to maintaining the friendly names that we want.
private static JsonSerializerSettings GetSerializerSettings(TypeNameHandling typeHandling = TypeNameHandling.Auto)
{
    return new JsonSerializerSettings()
    {
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
        TypeNameHandling = typeHandling,
        TypeNameAssemblyFormatHandling = TypeNameAssemblyFormatHandling.Simple,
        SerializationBinder = new OptionsSerializationBinder(),
        Formatting = Formatting.Indented
    };
}
We can then set SerializationBinder and default the TypeNameHandling to Auto. We found when setting this to Objects that it was too greedy and tried to write a $type for generic lists and such, creating a nasty look that did not serialise. Auto provided the right level for us.
If you need to use Objects, which will also apply to the root of your object map, then you may need to create a custom wrapper class around any List<> or Dictionary<> object that you want to serialise, to make sure it gets a friendly name.
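For illustration, a hypothetical call site using these settings (the options variable is an invented name; the binder and settings come from the code above):
var settings = GetSerializerSettings();
// Passing the declared interface type makes Auto emit $type on the root too;
// BindToName writes simple class names into $type instead of assembly-qualified ones.
string json = JsonConvert.SerializeObject(options, typeof(IProcessorOptions), settings);
// BindToType scans the loaded assemblies to resolve the simple name back to a concrete type.
var restored = JsonConvert.DeserializeObject<IProcessorOptions>(json, settings);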

How to convert IConfigurationRoot or IConfigurationSection to JObject/JSON

I have the following code in my Program.cs:
var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("clientsettings.json", optional: true, reloadOnChange: true)
    .AddJsonFile($"clientsettings.{host.GetSetting("environment")}.json", optional: true, reloadOnChange: true)
    .AddEnvironmentVariables()
    .Build();
I want to convert the result of building my configuration to a JObject/JSON for sending to the client. How can I do it?
I don't want to create a custom class for my settings.
My answer: merge
public static JObject GetSettingsObject(string environmentName)
{
    object[] fileNames = { "settings.json", $"settings.{environmentName}.json" };
    var jObjects = new List<object>();
    foreach (var fileName in fileNames)
    {
        var fPath = Directory.GetCurrentDirectory() + Path.DirectorySeparatorChar + fileName;
        if (!File.Exists(fPath))
            continue;
        using (var file = new StreamReader(fPath, Encoding.UTF8))
            jObjects.Add(JsonConvert.DeserializeObject(file.ReadToEnd()));
    }
    if (jObjects.Count == 0)
        throw new InvalidOperationException();
    var result = (JObject)jObjects[0];
    for (var i = 1; i < jObjects.Count; i++)
        result.Merge(jObjects[i], new JsonMergeSettings
        {
            MergeArrayHandling = MergeArrayHandling.Merge
        });
    return result;
}
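A hypothetical call site ("Development" stands in for the real environment name):
JObject settings = GetSettingsObject("Development");
Console.WriteLine(settings.ToString(Formatting.Indented));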
Since configuration is actually just a key-value store where the keys have a certain format to represent a path, serializing it back into JSON is not that simple.
What you could do is recursively traverse the configuration children and write their values into a JObject. That would look like this:
public JToken Serialize(IConfiguration config)
{
    JObject obj = new JObject();
    foreach (var child in config.GetChildren())
    {
        obj.Add(child.Key, Serialize(child));
    }
    if (!obj.HasValues && config is IConfigurationSection section)
        return new JValue(section.Value);
    return obj;
}
Note that this is extremely limited in how the output looks. For example, numbers or booleans, which are valid types in JSON, will be represented as strings. And since arrays are represented through numerical key paths (e.g. key:0 and key:1), you will get property names that are strings of indexes.
Let’s take for example the following JSON:
{
  "foo": "bar",
  "bar": {
    "a": "string",
    "b": 123,
    "c": true
  },
  "baz": [
    { "x": 1, "y": 2 },
    { "x": 3, "y": 4 }
  ]
}
This will be represented in configuration through the following key paths:
"foo" -> "bar"
"bar:a" -> "string"
"bar:b" -> "123"
"bar:c" -> "true"
"baz:0:x" -> "1"
"baz:0:y" -> "2"
"baz:1:x" -> "3"
"baz:1:y" -> "4"
As such, the resulting JSON for the above Serialize method would look like this:
{
  "foo": "bar",
  "bar": {
    "a": "string",
    "b": "123",
    "c": "true"
  },
  "baz": {
    "0": { "x": "1", "y": "2" },
    "1": { "x": "3", "y": "4" }
  }
}
So this will not allow you to get back the original representation. That being said, when the resulting JSON is read again with Microsoft.Extensions.Configuration.Json, it will result in the same configuration object. So you can use this to store the configuration as JSON.
If you want anything prettier than that, you will have to add logic to detect arrays and non-string types, since neither of these is a concept of the configuration framework.
I want to merge appsettings.json and appsettings.{host.GetSetting("environment")}.json to one object [and send that to the client]
Keep in mind that environment-specific configuration files often contain secrets that shouldn’t leave the machine. This is also especially true for environment variables. If you want to transmit the configuration values, then make sure not to include the environment variables when building the configuration.
The configuration data is represented by a flattened collection of KeyValuePair<string, string>. You could create a dictionary from it and serialize that to JSON. However, that will probably not give you the desired result:
Configuration.AsEnumerable().ToDictionary(k => k.Key, v => v.Value);
Also, please keep in mind that this configuration object will contain environment variables; you definitely don't want to send these to the client.
A better option might be to first bind the configuration to your POCOs and serialize those to JSON:
var appConfig = new AppConfig();
Configuration.Bind(appConfig);
var json = JsonConvert.SerializeObject(appConfig);

public class AppConfig
{
    // Your settings here
    public string Foo { get; set; }
    public int Bar { get; set; }
}
The resultant IConfiguration object from the Build() method will encompass all of your configuration sources, merging them based on the priority defined by the order in which you added the sources.
In your case this would be:
clientsettings.json
clientsettings.env.json
Environment variables
You won't need to worry about merging sources manually or loading the files, as it's already done for you.
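For example, a sketch that combines this with the Serialize helper from the earlier answer, deliberately leaving out .AddEnvironmentVariables() so no machine secrets end up in the output (envName is an assumed variable holding the environment name):
var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("clientsettings.json", optional: true)
    .AddJsonFile($"clientsettings.{envName}.json", optional: true)
    .Build(); // later sources override earlier ones, key by key

JToken merged = Serialize(configuration); // one merged view of both files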
To improve on poke's answer, I came up with this:
private JToken Serialize(IConfiguration config)
{
    JObject obj = new JObject();
    foreach (var child in config.GetChildren())
    {
        if (child.Path.EndsWith(":0"))
        {
            // Numeric keys starting at 0 indicate an array section.
            var arr = new JArray();
            foreach (var arrayChild in config.GetChildren())
            {
                arr.Add(Serialize(arrayChild));
            }
            return arr;
        }
        else
        {
            obj.Add(child.Key, Serialize(child));
        }
    }
    if (!obj.HasValues && config is IConfigurationSection section)
    {
        if (bool.TryParse(section.Value, out bool boolean))
        {
            return new JValue(boolean);
        }
        // Try integral values before decimal ones, so whole numbers stay integers.
        else if (long.TryParse(section.Value, out long integer))
        {
            return new JValue(integer);
        }
        else if (decimal.TryParse(section.Value, out decimal real))
        {
            return new JValue(real);
        }
        return new JValue(section.Value);
    }
    return obj;
}
The code above accounts for data types such as boolean, long and decimal (integral values are tried before decimal ones so that whole numbers stay integers).
long and decimal are the widest of their kinds, so they also cover smaller values like short or float.
The code will also construct your arrays properly, so you end up with a like-for-like representation of all of your config in one JSON file.
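A quick usage sketch (Configuration is assumed to be your built IConfigurationRoot):
var json = Serialize(Configuration);
// Dump the reconstructed JSON, with real arrays and typed values, to disk.
File.WriteAllText("config.json", json.ToString(Formatting.Indented));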
Here is Tom's solution converted to use System.Text.Json.
static internal JsonNode? Serialize(IConfiguration config)
{
    JsonObject obj = new();
    foreach (var child in config.GetChildren())
    {
        if (child.Path.EndsWith(":0"))
        {
            // Numeric keys starting at 0 indicate an array section.
            var arr = new JsonArray();
            foreach (var arrayChild in config.GetChildren())
            {
                arr.Add(Serialize(arrayChild));
            }
            return arr;
        }
        else
        {
            obj.Add(child.Key, Serialize(child));
        }
    }
    if (obj.Count == 0 && config is IConfigurationSection section)
    {
        if (bool.TryParse(section.Value, out bool boolean))
        {
            return JsonValue.Create(boolean);
        }
        // Try integral values before decimal ones, so whole numbers stay integers.
        else if (long.TryParse(section.Value, out long integer))
        {
            return JsonValue.Create(integer);
        }
        else if (decimal.TryParse(section.Value, out decimal real))
        {
            return JsonValue.Create(real);
        }
        return JsonValue.Create(section.Value);
    }
    return obj;
}
// Use like this...
var json = Serialize(Config);
File.WriteAllText("out.json",
    json.ToJsonString(new JsonSerializerOptions() { WriteIndented = true }));
Do you really want to send all your environment variables (.AddEnvironmentVariables()), connection strings and all the other stuff in appsettings to the client? I recommend you do not do this.
Instead, make one class (say ClientConfigOptions), configure its binding using services.Configure<ClientConfigOptions>(configuration.GetSection("clientConfig")), and send that to the client.
With this approach, you may also tune your ClientConfigOptions with Actions, copy some values from different appsettings paths, etc.
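A minimal sketch of that approach, assuming ASP.NET Core; the properties of ClientConfigOptions and the "clientConfig" section name are purely illustrative:
public class ClientConfigOptions
{
    public string Theme { get; set; }
    public bool EnableBetaFeatures { get; set; }
}

// In ConfigureServices: bind only the section that is safe to expose.
services.Configure<ClientConfigOptions>(configuration.GetSection("clientConfig"));

// In a controller: return the bound options; MVC serializes them to JSON.
[ApiController]
public class ClientConfigController : ControllerBase
{
    private readonly ClientConfigOptions _options;

    public ClientConfigController(IOptionsSnapshot<ClientConfigOptions> options)
        => _options = options.Value;

    [HttpGet("client-config")]
    public ClientConfigOptions Get() => _options;
}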

Recursively Create ReadOnly Object in FlowJS

In redux, the state should be immutable. I would like Flow to prevent anyone from mutating that state. So, given an object of arbitrary depth:
type object = {
  a: {
    b: {
      d: string
    }
  },
  c: number
}
How can I create a new type that is recursively readonly, so that I cannot do:
let TestFunction = (param: $RecursiveReadOnly<object>) => {
  param.a.b.d = 'some string'
}
The builtin $ReadOnly utility of Flow will create a type like this, which isn't what is needed, because b & d are still writable:
{
  +a: {
    b: {
      d: string
    }
  },
  +c: number
}
I've been trying to use the $Call & $ObjMap utilities, but I can't figure out how to recursively traverse an object in Flow. The objective is to have this:
{
  +a: {
    +b: {
      +d: string
    }
  },
  +c: number
}
Thanks to kalley for his solution. From what I understood, kalley tried to make any object received by a function recursively read only. Since I really only needed known objects as parameters, this works perfectly:
// Type definition that works with arbitrary nested objects/arrays etc.
declare type RecursiveReadOnly<O: Object> = $ReadOnly<$ObjMap<O, typeof makeRecursive>>
declare type RecursiveReadOnlyArray<O: Object> = $ReadOnlyArray<$ReadOnly<$ObjMap<O, typeof makeRecursive>>>
type Recursive<O: Object> = $ObjMap<O, typeof makeRecursive>

declare function makeRecursive<F: Function>(F): F
declare function makeRecursive<A: Object[]>(A): $ReadOnlyArray<$ReadOnly<Recursive<$ElementType<A, number>>>>
declare function makeRecursive<O: Object>(O): RecursiveReadOnly<O>
declare function makeRecursive<I: string[] | boolean[] | number[]>(I): $ReadOnlyArray<$ElementType<I, number>>
declare function makeRecursive<I: string | boolean | number | void | null>(I): I

// Usage example.
type obj = {
  a: {
    b: {
      d: string,
    }
  }
}

let TestFunction = (param: RecursiveReadOnly<obj>) => {
  param.a.b.d = 'some string' // Flow throws an error
}

Symfony 2 Doctrine $query->getArrayResult(): how to remove selected key->values from the result

As I don't want id values from a select with createQuery, but the select command doesn't allow omitting the id (primary key) from the actual query (even using "partial"), I need to remove the ids from the result of getArrayResult().
I made this small recursive key remover static class:
class arrayTool
{
    public static function cleanup($array, $deleteKeys)
    {
        foreach ($array as $key => $value) {
            if (is_array($value)) {
                $array[$key] = self::cleanup($array[$key], $deleteKeys);
            } else {
                if (in_array($key, $deleteKeys)) {
                    unset($array[$key]);
                }
            }
        }
        return $array;
    }
}
It is called with an array containing one or more keys to be removed from the result, at any array depth:
$array = arrayTool::cleanup($array, array('id', 'id2'));
