I need to transform some JSON data, and Ramda's indexBy does just about exactly what I want. The code below works for a single object:
const operativeIndex = R.pipe(
  R.head,
  R.keysIn,
  R.intersection(['Weight', 'Height', 'Month', 'Week']),
  R.head
);
const reIndex = R.indexBy(R.prop(operativeIndex(testObject)), testObject);
But to map an array of objects through my re-indexing function, I believe I need to rewrite reIndex so that it needs only a single injection of testObject.
How can I do that?
To help visualize the task: the current code transforms testObject from an array like the following, which will contain one of the four allowed index keys:
[ { Height: '45',
    L: '-0.3521',
    M: '2.441',
    S: '0.09182' },
  { Height: '45.5',
    L: '-0.3521',
    M: '2.5244',
    S: '0.09153' } ]
into an object like this:
{ '45':
    { Height: '45',
      L: '-0.3521',
      M: '2.441',
      S: '0.09182' },
  '45.5':
    { Height: '45.5',
      L: '-0.3521',
      M: '2.5244',
      S: '0.09153' } }
If I understand your question correctly, you want reIndex to be a function that takes a list of objects and produces an indexed object. If so, you could do it this way:
const operativeIndex = R.pipe(
R.keysIn,
R.intersection(['Weight', 'Height', 'Month', 'Week']),
R.head
)
const reIndex = R.indexBy(R.chain(R.prop, operativeIndex))
Then you could call reIndex(list).
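For intuition (my note, not part of the original answer): when R.chain receives two functions, R.chain(f, g)(x) is equivalent to f(g(x), x), so the key function reads each object's own operative key and returns its value:

const list = [
  { Height: '45',   L: '-0.3521', M: '2.441',  S: '0.09182' },
  { Height: '45.5', L: '-0.3521', M: '2.5244', S: '0.09153' }
];

// R.chain(R.prop, operativeIndex)(obj) === R.prop(operativeIndex(obj), obj)
reIndex(list);
//=> { '45':   { Height: '45',   L: '-0.3521', M: '2.441',  S: '0.09182' },
//     '45.5': { Height: '45.5', L: '-0.3521', M: '2.5244', S: '0.09153' } }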
BTW, keep in mind that keysIn goes up the prototype chain and that its key order is NOT guaranteed.
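If prototype-chain keys are unwanted, R.keys (own enumerable properties only) should be a drop-in alternative here; a sketch, my note rather than part of the original answer:

const operativeIndexOwn = R.pipe(
  R.keys,
  R.intersection(['Weight', 'Height', 'Month', 'Week']),
  R.head
);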
Here's a minimal example:
import weaviate

CLASS = "Superhero"
PROP = "superhero_name"

client = weaviate.Client("http://localhost:8080")

# Schema: vectorize neither the class name nor the property name
class_obj = {
    "class": CLASS,
    "properties": [
        {
            "name": PROP,
            "dataType": ["string"],
            "moduleConfig": {
                "text2vec-transformers": {
                    "vectorizePropertyName": False,
                }
            },
        }
    ],
    "moduleConfig": {
        "text2vec-transformers": {
            "vectorizeClassName": False
        }
    }
}

client.schema.delete_all()
client.schema.create_class(class_obj)

batman_id = client.data_object.create({PROP: "Batman"}, CLASS)

# Query 1: semantic search for the exact stored text
by_text = (
    client.query.get(CLASS, [PROP])
    .with_additional(["distance", "id"])
    .with_near_text({"concepts": ["Batman"]})
    .do()
)
print(by_text)

# Query 2: fetch the stored vector and search with it directly
batman_vector = client.data_object.get(
    uuid=batman_id, with_vector=True, class_name=CLASS
)["vector"]
by_vector = (
    client.query.get(CLASS, [PROP])
    .with_additional(["distance", "id"])
    .with_near_vector({"vector": batman_vector})
    .do()
)
print(by_vector)
Please note that I specified both "vectorizePropertyName": False and "vectorizeClassName": False.
The code above returns:
{'data': {'Get': {'Superhero': [{'_additional': {'distance': 0.08034378, 'id': '05fbd0cb-e79c-4ff2-850d-80c861cd1509'}, 'superhero_name': 'Batman'}]}}}
{'data': {'Get': {'Superhero': [{'_additional': {'distance': 1.1920929e-07, 'id': '05fbd0cb-e79c-4ff2-850d-80c861cd1509'}, 'superhero_name': 'Batman'}]}}}
If I look up the exact vector I get 'distance': 1.1920929e-07, which I guess is actually 0 (thanks to some floating-point evil magic), as expected.
But if I use near_text to search for the exact property, I get a distance > 0.
This is leading me to believe that, when using near_text, the embedding is somehow different.
My question is:
Why does this happen?
With two corollaries:
Is 1.1920929e-07 actually 0 or do I need to read something deeper into that?
Is there a way to check the embedding created during the near_text search?
Here is some information that may help:
Is 1.1920929e-07 actually 0 or do I need to read something deeper into that?
Yes, this value 1.1920929e-07 should be interpreted as 0. It is 2^-23, the machine epsilon of a 32-bit float, so I think there are some unfortunate float32/64 conversions going on that need to be rooted out.
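A quick way to confirm that reading (my addition, assuming NumPy is installed):

import numpy as np

# 2**-23 is the gap between 1.0 and the next representable float32,
# i.e. exactly the distance reported by the near_vector query above
print(np.finfo(np.float32).eps)  # 1.1920929e-07
print(2.0 ** -23)                # 1.1920928955078125e-07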
Is there a way to check the embedding created during the near_text search?
The embeddings are either imported or generated during object creation, not at search-time. So performing multiple queries on an unchanged object will utilize the same search vector.
We are looking into both of these issues.
I've got an Image Collection like:
ImageCollection : {
  features : [
    0 : {
      type: Image,
      id: MODIS/006/MOD11A1/2019_01_01,
      properties : {
        LST_Day_1km : 12345,
        LST_Night_1km : 11223,
        system:index : "2019-01-01",
        system:asset_size : 764884189,
        system:footprint : LinearRing,
        system:time_end : 1546387200000,
        system:time_start : 1546300800000
      }
    },
    1 : { ... },
    2 : { ... },
    ...
  ],
  ...
}
From this collection, how can I get an array of objects with specific properties? Like:
[
{
LST_Day_1km : 12345,
LST_Night_1km : 11223,
system:index : "2019-01-01"
},
{
LST_Day_1km : null,
LST_Night_1km : 11223,
system:index : "2019-01-02"
}
...
];
I tried ImageCollection.aggregate_array(property), but it accepts only one property at a time.
The problem is that the length of "LST_Day_1km" differs from the length of "system:index", because "LST_Day_1km" includes empty values, so it's hard to combine the arrays after getting them separately.
Thanks in advance!
Whenever you want to extract data from a collection in Earth Engine, it is often a straightforward and efficient strategy to first arrange for the data to be present as a single property on the features/images of that collection, using map.
var wanted = ['LST_Day_1km', 'LST_Night_1km', 'system:index'];
var augmented = imageCollection.map(function (image) {
return image.set('dict', image.toDictionary(wanted));
});
Then, as you're already familiar with, just use aggregate_array to extract that property's values:
var list = augmented.aggregate_array('dict');
print(list);
Runnable complete example:
var imageCollection = ee.ImageCollection('MODIS/006/MOD11A1')
.filterDate('2019-01-01', '2019-01-07')
.map(function (image) {
// Fake values to match the question
return image.set('LST_Day_1km', 1).set('LST_Night_1km', 2)
});
print(imageCollection);
// Add a single property whose value is a dictionary containing
// all the properties we want.
var wanted = ['LST_Day_1km', 'LST_Night_1km', 'system:index'];
var augmented = imageCollection.map(function (image) {
return image.set('dict', image.toDictionary(wanted));
});
print(augmented.first());
// Extract that property.
var list = augmented.aggregate_array('dict');
print(list);
https://code.earthengine.google.com/ffe444339d484823108e23241db04629
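If the result is needed as a client-side value rather than just printed, evaluate transfers it asynchronously (my addition; evaluate is the Code Editor's standard client-transfer method):

list.evaluate(function (arr) {
  // arr is now a plain JavaScript array of dictionaries
  print(arr.length);
  print(arr[0]);
});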
I think my question is easy, but nonetheless I could not find an answer anywhere.
I want to typecheck a function, but what I cannot seem to do is bind the return type to the input type.
Say I have a deck of cards that is typed, and I want an (imaginary) return type that depends on the input, given an existing mapping.
The deck with the function:
type Suit = "diamonds" | "clubs" | "hearts" | "spades"
const suitMapping = {
"diamonds": ["are", "forever"],
"clubs": ["fabric", "fuse"],
"hearts": ["she", "loves", "me"],
"spades": ["lemmy", "loud"]
}
const suitToList = (suit: Suit) => {
return suitMapping[suit]
}
So, for instance, I know that suitToList("diamonds") will return ["are", "forever"]. The mapping in the object is fixed and computer-generated. But I would love it if there were a way to typespec the mapping with Flow. That way, if somewhere down the road someone wants to add "motorhead" to "spades", the typecheck would fail right away, so the functions depending on the output could be checked.
For now, I have tests for it, but somewhere I feel this could be possible with Flow too.
I found a way to typecheck this, but it requires a cast through any. Not the cleanest, but I think it's a Flow bug. See https://github.com/facebook/flow/issues/2057#issuecomment-395412981
type Suit = "diamonds" | "clubs" | "hearts" | "spades"
type SuitMapping = {
diamonds: string,
clubs: number,
hearts: Array<string>,
spades: Array<string>,
}
const suitMapping: SuitMapping = {
"diamonds": '',
"clubs": 1,
"hearts": ["she", "loves", "me"],
"spades": ["lemmy", "loud"]
}
const suitToList = <K: Suit>(suit: K): $ElementType<SuitMapping, K> => {
return (suitMapping[suit]: any);
}
// suitToList('xxx'); // error
// const x: number = suitToList('diamonds'); // error
const y: string = suitToList('diamonds'); // works
See it on Try Flow.
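As a further sketch (my addition, not part of the answer above): if the goal is to also catch edits to the generated values themselves, Flow tuple types of string literals can pin down the exact contents, so adding "motorhead" becomes an arity error at the annotation site (the type name SuitContents is hypothetical):

type SuitContents = {
  diamonds: ["are", "forever"],
  clubs: ["fabric", "fuse"],
  hearts: ["she", "loves", "me"],
  spades: ["lemmy", "loud"],
}

const suitMapping: SuitContents = {
  diamonds: ["are", "forever"],
  clubs: ["fabric", "fuse"],
  hearts: ["she", "loves", "me"],
  spades: ["lemmy", "loud"], // adding "motorhead" here fails the typecheck
}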
I am using Redux and Immutable.js, and I am trying to create a reducer function.
I have come across some behaviour I did not expect:
const a = new Immutable.Map({
numbers: [{id: 1},{id: 2}]
});
const b = a.merge(
{
numbers: [{id: 4},{id: 5}]
}
);
Here are the values of a and b:
a.get("numbers");
[Object, Object]
b.get("numbers");
List {size: 2, _origin: 0, _capacity: 2, _level: 5, _root: null…}
b.get("numbers").get(0);
Map {size: 1, _root: ArrayMapNode, __ownerID: undefined, __hash: undefined, __altered: false}
I did not expect numbers to be an immutable List of Map objects.
In my application, using redux, I set the initial state to:
const initialState = new Immutable.Map({
error: null,
isBusy: false,
films: []
});
In the reducer, when I fetch films, I try to merge them as follows:
return state.merge({
isBusy: false,
films: action.payload,
error: null
});
This causes issues in the react component, as films are initially an array of objects, and then they become an Immutable List of Maps.
Should I create a different initial state, or should I be using a different type of merge? Or something else?
Thanks
I think what you want is not a merge of the whole Map object, at least not in the case you describe; it should be update plus concat (or merge):
const a = new Immutable.Map({
numbers: [{id: 1},{id: 2}]
});
const b = a.update("numbers", numbers =>
  numbers.concat([{id: 4}, {id: 5}])
  // or, if numbers is already an Immutable List:
  // numbers.merge([{id: 4}, {id: 5}])
);
By doing merge in your code, you are overriding the existing entries: merge replaces values whose keys collide, and here both maps use the same key, "numbers".
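Applied to the reducer from the question (a sketch, my addition): Map.set stores a value as-is, while merge deeply converts plain arrays and objects into Immutable collections; so, if the React component expects films to stay a plain array, set it instead of merging it:

return state
  .merge({ isBusy: false, error: null })
  .set('films', action.payload); // remains a plain JS array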
I'm trying to write a trait (in Scala 2.8) that can be mixed in to a case class, allowing its fields to be inspected at runtime, for a particular debugging purpose. I want to get them back in the order that they were declared in the source file, and I'd like to omit any other fields inside the case class. For example:
trait CaseClassReflector extends Product {
  def getFields: List[(String, Any)] = {
    var fieldValueToName: Map[Any, String] = Map()
    for (field <- getClass.getDeclaredFields) {
      field.setAccessible(true)
      fieldValueToName += (field.get(this) -> field.getName)
    }
    productIterator.toList map { value => fieldValueToName(value) -> value }
  }
}
case class Colour(red: Int, green: Int, blue: Int) extends CaseClassReflector {
  val other: Int = 42
}
scala> val c = Colour(234, 123, 23)
c: Colour = Colour(234,123,23)
scala> val fields = c.getFields
fields: List[(String, Any)] = List((red,234), (green,123), (blue,23))
The above implementation is clearly flawed, because it guesses the relationship between a field's position in the Product and its name by equality of the values of those fields, so that the following, say, will not work:
Colour(0, 0, 0).getFields
Is there any way this can be implemented?
Look in trunk and you'll find this. Heed the comment: this is not supported. But since I also needed those names...
/** private[scala] so nobody gets the idea this is a supported interface.
 */
private[scala] def caseParamNames(path: String): Option[List[String]] = {
  val (outer, inner) = (path indexOf '$') match {
    case -1 => (path, "")
    case x  => (path take x, path drop (x + 1))
  }

  for {
    clazz <- getSystemLoader.tryToLoadClass[AnyRef](outer)
    ssig  <- ScalaSigParser.parse(clazz)
  }
  yield {
    val f: PartialFunction[Symbol, List[String]] =
      if (inner.isEmpty) {
        case x: MethodSymbol if x.isCaseAccessor && (x.name endsWith " ") => List(x.name dropRight 1)
      }
      else {
        case x: ClassSymbol if x.name == inner =>
          val xs = x.children filter (child => child.isCaseAccessor && (child.name endsWith " "))
          xs.toList map (_.name dropRight 1)
      }

    (ssig.symbols partialMap f).flatten toList
  }
}
Here's a short and working version, based on the example above
trait CaseClassReflector extends Product {
  def getFields = getClass.getDeclaredFields.map(field => {
    field setAccessible true
    field.getName -> field.get(this)
  })
}
In every example I've seen, the fields are in reverse order: the last item in the getFields array is the first one listed in the case class. If you use case classes "nicely", then you should just be able to map productElement(n) onto getDeclaredFields()(getDeclaredFields.length - n - 1).
But this is rather dangerous, as I don't know of anything in the spec that insists that it must be that way, and if you override a val in the case class, it won't even appear in getDeclaredFields (it'll appear in the fields of that superclass).
You might change your code to assume things are this way, but check that the getter method with that name and the productIterator return the same value, and throw an exception if they don't (which would mean that you don't actually know what corresponds to what).
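A sketch of that checked approach (my addition; it assumes getDeclaredFields returns the constructor fields in reverse declaration order, which nothing in the spec guarantees):

trait CheckedCaseClassReflector extends Product {
  def getFields: List[(String, Any)] = {
    val names  = getClass.getDeclaredFields.reverse.map(_.getName).toList
    val values = productIterator.toList
    val fields = names zip values
    // Verify every pairing through the public getter; if the order
    // assumption is broken (e.g. by an extra val), fail loudly.
    for ((name, value) <- fields) {
      if (getClass.getMethod(name).invoke(this) != value)
        throw new IllegalStateException("field order assumption broken at: " + name)
    }
    fields
  }
}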
You can also use the ProductCompletion from the interpreter package to get to attribute names and values of case classes:
import tools.nsc.interpreter.ProductCompletion
// get attribute names
new ProductCompletion(Colour(1, 2, 3)).caseNames
// returns: List(red, green, blue)
// get attribute values
new ProductCompletion(Colour(1, 2, 3)).caseFields
Edit: hints by roland and virtualeyes
It is necessary to include the scalap library, which is part of the scala-lang collection.
Thanks for your hints, roland and virtualeyes.