Returning vector with Weaviate

I am considering using Weaviate as a vector store. I see on the docs that you can set the vectorizer to none and provide custom vectors in addition to the other metadata. After I do that I want to get all the metadata back using the Get{} query, as well as the vectors. Does Weaviate support returning the vector as well as the metadata?
I see in the demo that they add vector to the additional properties, and it seems to return the vector associated with the object. Is this in fact simply returning the vector associated with the results, or is it tied to specifying some kind of search query like nearText?

Yes, that's certainly possible. The vector is exposed through the object's _additional properties, so any Get query can return it; a search operator such as nearText is not required. For example:
{
  Get {
    Article(
      nearText: {
        concepts: ["music"]
      }
      limit: 3
    ) {
      title
      _additional {
        vector # <== this is what you're looking for
      }
    }
  }
}
You can also try it out here
Additional property docs
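For completeness, here is a sketch of sending that query from JavaScript over Weaviate's GraphQL endpoint (the host below is a placeholder; /v1/graphql is the standard REST path, and the class/field names are the ones from the example above):

```javascript
// Build the GraphQL query from the answer; `_additional { vector }` asks
// Weaviate to include each object's stored vector in the response.
function buildQuery(className, concepts, limit) {
  return `{
    Get {
      ${className}(
        nearText: { concepts: ${JSON.stringify(concepts)} }
        limit: ${limit}
      ) {
        title
        _additional { vector }
      }
    }
  }`;
}

// Usage sketch (host is a placeholder for your Weaviate instance):
// fetch("http://localhost:8080/v1/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ query: buildQuery("Article", ["music"], 3) }),
// }).then(r => r.json()).then(console.log);
```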

Related

Firestore Update fields in nested objects with dynamic key

I need to update a field in a nested object with a dynamic key.
The path could look like this: level1.level2.DYNAMIC_KEY : updatedValue
The update() method deletes everything else on level1 instead of only updating the field in the nested object; it acts more like set(). What am I doing wrong?
Here is what I have tried already:
I read the documentation at https://firebase.google.com/docs/firestore/manage-data/add-data#update-data,
but that way the key is (a) static and (b) it still deletes the other fields.
Update fields in nested objects
If your document contains nested objects, you can use "dot notation" to reference nested fields within the document when you call update()
This would be static and result in
update({
  'level1.level2.STATIC_KEY': 'updatedValue'
});
Then I found this answer https://stackoverflow.com/a/47296152/5552695
which helped me make the update path dynamic.
The desired solution after this could look like
field[`level1.level2.${DYNAMIC_KEY}`] = updateValue;
update(field);
But still: it'll delete the other fields in this path.
UPDATE:
The structure of my doc is as follows:
Inside this structure I want to update only complexArray > 0 > innerObject > age.
Writing the above path into the update() method deletes everything else on the complexArray level.
A simple update on first-level fields works fine and leaves the other first-level fields untouched.
Is it possible that Firestore functions like update() can only act on the lowest field level of a document, and that as soon as I put complex objects into a document it is no longer possible to select such inner fields?
I know I could extract those "complex" objects into separate collections and documents and reference them from my current lowest document level. I think that would be more in line with Firestore principles, but on the application side it is easier to work with a complex object than to always dig deeper into the Firestore collection and document structure.
So my current solution is to send the whole complex object into the update() method, even though only one field changed on the application side.
Have you tried using the { merge: true } option in your request?
db.collection("myCollection")
  .doc("myDoc")
  .set(
    {
      level1: { level2: { myField: "myValue" } }
    },
    { merge: true }
  )
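A minimal sketch of the dot-notation alternative (docRef and the key name below are placeholders, not from the question): with update(), a computed property name keeps the path dynamic while only the addressed leaf field is replaced, leaving its sibling fields intact.

```javascript
// Build an update payload whose single key is a dynamic dot-notation path.
// Firestore's update() treats a key like "level1.level2.myKey" as a field
// path and replaces only that leaf, leaving sibling fields untouched.
function buildNestedUpdate(dynamicKey, value) {
  return { [`level1.level2.${dynamicKey}`]: value };
}

// Usage sketch (requires an initialized Firestore DocumentReference):
// const docRef = db.collection("myCollection").doc("myDoc");
// docRef.update(buildNestedUpdate("myKey", "updatedValue"));
```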

Project all fields except X

I'm trying to return partial documents with DynamoDB. Instead of listing all the items I want returned using ProjectionExpression, it would be far easier in this case to just filter the single item I don't want returned.
i.e. below, I'd like to return everything except privateItem.
{
  "item1": ...,
  "item2": ...,
  "privateItem": {
    ...
  }
}
Is this possible? I've scoured the docs to no avail.
Thanks.
Based on the docs it seems that you can't: you can only get either every field or a whitelist of fields (maybe that will change in the future, though).
In your case I would imagine the best thing to do is to delete/filter the field you don't want after you've retrieved the document from DynamoDB.
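A minimal sketch of that post-retrieval filtering, assuming the item is a plain JavaScript object (the field names come from the question):

```javascript
// Drop a single unwanted field from an item after fetching it.
// Object rest destructuring copies every field except the named one,
// leaving the original item unmodified.
function omitField(item, fieldName) {
  const { [fieldName]: _omitted, ...rest } = item;
  return rest;
}

// Example shape from the question:
const item = { item1: 1, item2: 2, privateItem: { secret: true } };
const publicItem = omitField(item, "privateItem");
```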

Size of firestore rules path

I'm trying to use the size of the path in the firestore rules, but can't get anything to work, and can't find any reference in the firestore docs on how to do this.
I want to use the last collection name as a parameter in the rule, so tried this:
match /test/{document=**} {
  allow read, write: if document[document.size() - 2] == 'subpath';
}
But .size() does not seem to work, and neither does .length.
This can be done but you first have to coerce the Path to a String.
To get the Path of the current resource, you can use the __name__ property.
https://firebase.google.com/docs/reference/rules/rules.firestore.Resource#name
For reference, resource is a general property that is available on every request that represents the Firestore Document being read or written.
https://firebase.google.com/docs/reference/rules/rules.firestore.Resource
resource['__name__']
The value returned by __name__ is a Path, which is lacking in useful methods, so before you can use size you will need to coerce the Path to a String.
https://firebase.google.com/docs/reference/rules/rules.String.html
string(resource['__name__'])
Once converted to a string, you can then split the string on the / operator and convert it into a List of String path parts.
https://firebase.google.com/docs/reference/rules/rules.String.html#split
string(resource['__name__']).split('/')
Now you can retrieve the size of the List using the List's size method.
https://firebase.google.com/docs/reference/rules/rules.List#size
string(resource['__name__']).split('/').size()
One of the challenging things about Firestore rules is that there's no support for variables so you often will repeat code when you need to use a result more than once. For instance, in this case, we need to use the result of the split twice but cannot store it into a variable.
string(resource['__name__']).split('/')[string(resource['__name__']).split('/').size() - 2]
You can DRY this up a bit by making use of functions and using the parameter as your variable.
function getSecondToLastPathPart(pathParts) {
  return pathParts[pathParts.size() - 2];
}
getSecondToLastPathPart(string(resource['__name__']).split('/'))
Tying it all together for your solution, it would look like this...
function getSecondToLastPathPart(pathParts) {
  return pathParts[pathParts.size() - 2];
}

match /test/{document=**} {
  allow read, write: if getSecondToLastPathPart(string(resource['__name__']).split('/')) == 'subpath';
}
Hope this helps!
You can learn rules here
// Allow reads of documents that begin with 'abcdef'
match /{document} {
  allow read: if document[0:6] == 'abcdef';
}

Collection of Unique Functions in Go

I am trying to implement a set of functions in go. The context is an event server; I would like to prevent (or at least warn) adding the same handler more than once for an event.
I have read that maps are idiomatic to use as sets because of the ease of checking for membership:
if _, ok := set[item]; ok {
    // don't add item
} else {
    // do add item
}
I'm having some trouble with using this paradigm for functions though. Here is my first attempt:
// this is not the actual signature
type EventResponse func(args interface{})

type EventResponseSet map[*EventResponse]struct{}

func (ers EventResponseSet) Add(r EventResponse) {
    if _, ok := ers[&r]; ok {
        // warn here
        return
    }
    ers[&r] = struct{}{}
}

func (ers EventResponseSet) Remove(r EventResponse) {
    // if key is not there, doesn't matter
    delete(ers, &r)
}
It is clear why this doesn't work: Go passes arguments by value, so r is a fresh local copy on every call and &r is the address of that copy, never of the original function. Adding and then removing the same handler therefore never produces the same map key.
Attempt 2:
func (ers EventResponseSet) Add(r *EventResponse) {
    // ...
}
This has a couple of problems:
Any EventResponse has to be declared like fn := func(args interface{}){} because you can't address functions declared in the usual manner.
You can't pass a closure at all.
Using a wrapper is not an option because any function passed to the wrapper will get a new address from the wrapper - no function will be uniquely identifiable by address, and all this careful planning is for nought.
Is it silly of me to not accept defining functions as variables as a solution? Is there another (good) solution?
To be clear, I accept that there are cases that I can't catch (closures), and that's fine. The use case that I envision is defining a bunch of handlers and being relatively safe that I won't accidentally add one to the same event twice, if that makes sense.
You could use reflect.Value as presented by Uvelichitel, or the function address as a string acquired by fmt.Sprint(), or the address as a uintptr acquired by reflect.Value.Pointer() (more in the answer How to compare 2 functions in Go?), but I recommend against all of these.
Since the language spec allows neither comparing function values nor taking their addresses, you have no guarantee that something that works at one point in your program will keep working, within a single run or across different (future) Go compilers. I would not use it.
Since the spec is strict about this, this means compilers are allowed to generate code that would for example change the address of a function at runtime (e.g. unload an unused function, then load it again later if needed again). I don't know about such behavior currently, but this doesn't mean that a future Go compiler will not take advantage of such thing.
If you store a function address (in whatever format), that value no longer counts as keeping the function value alive. If no one else owns the function value, the generated code (and the Go runtime) is free to modify or relocate the function, and thus change its address, without violating the spec or Go's type safety. So you could not rightfully blame the compiler, only yourself.
If you want to check against reusing, you could work with interface values.
Let's say you need functions with signature:
func(p ParamType) RetType
Create an interface:
type EventResponse interface {
    Do(p ParamType) RetType
}
For example, you could have an unexported struct type, and a pointer to it could implement your EventResponse interface. Make an exported function to return the single value, so no new values may be created.
E.g.:
type myEvtResp struct{}

func (m *myEvtResp) Do(p ParamType) RetType {
    // Your logic comes here
}

var single = &myEvtResp{}

func Get() EventResponse { return single }
Is it really necessary to hide the implementation in a package and only publish a single instance? Unfortunately yes, because otherwise another value like &myEvtResp{} could be created: a distinct pointer with the same Do() method, whose interface wrapper value would not be equal to the first:
Interface values are comparable. Two interface values are equal if they have identical dynamic types and equal dynamic values or if both have value nil.
[...and...]
Pointer values are comparable. Two pointer values are equal if they point to the same variable or if both have value nil. Pointers to distinct zero-size variables may or may not be equal.
The type *myEvtResp implements EventResponse and so you can register a value of it (the only value, accessible via Get()). You can have a map of type map[EventResponse]bool in which you may store your registered handlers, the interface values as keys, and true as values. Indexing a map with a key that is not in the map yields the zero value of the value type of the map. So if the value type of the map is bool, indexing it with a non-existing key will result in false – telling it's not in the map. Indexing with an already registered EventResponse (an existing key) will result in the stored value – true – telling it's in the map, it's already registered.
You can simply check whether one has already been registered:
type EventResponseSet map[EventResponse]bool

func (ers EventResponseSet) Add(r EventResponse) {
    if ers[r] {
        // warn here
        return
    }
    ers[r] = true
}
Closing: This may seem a little too much hassle just to avoid duplicated use. I agree, and I wouldn't go for it. But if you want to...
Which functions do you mean to be equal? Comparability is not defined for function types in the language specification. reflect.Value gives you the desired behaviour, more or less:
type EventResponseSet map[reflect.Value]struct{}

set := make(EventResponseSet)
if _, ok := set[reflect.ValueOf(item)]; ok {
    // don't add item
} else {
    // do add item
    set[reflect.ValueOf(item)] = struct{}{}
}
This approach will treat as equal only items produced by assignment:
// for example
item1 := fmt.Println
item2 := fmt.Println
item3 := item1
// all three have the same reflect.Value
But I don't think this behaviour is guaranteed by any documentation.

What's the right way to check if collection.find success in Meteor?

The Meteor documentation says that find returns a cursor, and that fetch returns all matching documents, but I couldn't find a complete reference for this cursor object.
I want to use the cursor object to check whether find successfully got some results or nothing.
Following is what I am doing now:
if (Tags.find({name: tag["tag"]}).fetch().length === 0) {
    // Not found, add default documents
}
I'm not sure whether this is the right way (best practice) to do this.
The idiom is to use .findOne:
if (!Tags.findOne(...)) {
    // nothing found, add default documents
}
This is more efficient than fetch() because it only queries the database for one document.
You can also use <cursor>.count(), but beware that count operations are expensive in Mongo.
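A minimal sketch of the insert-default-if-missing pattern, using a plain in-memory stub in place of a real Meteor collection (the stub's findOne/insert/count are simplified stand-ins for the Mongo-backed originals):

```javascript
// Stub collection exposing the methods the pattern needs; a real Meteor
// collection provides the same interface backed by MongoDB.
function makeStubCollection() {
  const docs = [];
  return {
    findOne(selector) {
      // returns undefined when no document matches, just like Meteor
      return docs.find(d => d.name === selector.name);
    },
    insert(doc) {
      docs.push(doc);
    },
    count() {
      return docs.length;
    },
  };
}

// The idiom from the answer: insert a default document only when
// findOne comes back empty (undefined is falsy).
function ensureTag(collection, name) {
  if (!collection.findOne({ name })) {
    collection.insert({ name, isDefault: true });
  }
}

const Tags = makeStubCollection();
ensureTag(Tags, "music"); // inserts the default
ensureTag(Tags, "music"); // second call is a no-op
```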
