Google Cloud Vision API response structure - google-cloud-vision

Getting this response when invoking vision.documentTextDetection()
[{
faceAnnotations: [],
landmarkAnnotations: [],
logoAnnotations: [],
labelAnnotations: [],
textAnnotations: [],
localizedObjectAnnotations: [],
safeSearchAnnotation: null,
imagePropertiesAnnotation: null,
error: null,
cropHintsAnnotation: null,
fullTextAnnotation: null,
webDetection: null,
context: null
}]
What's the point of some properties being empty arrays and others null?
I'm trying to do conditional rendering and got stuck on values that look truthy when in reality they aren't.

I can't speak for the creators of the API, but it makes sense in terms of you writing less code to get more work done.
If the array fields in question are always arrays and nothing else, then you simply make the assumption in your code that they are arrays. This means that if you want to know if something was returned in that array safely, then all you have to do is check the length of the array, or just iterate it.
On the other hand, if the array fields in question are sometimes arrays, maybe null, then you have to write code to first check if the array field is null, then write code to check the array.
Which would you rather do: 1) assume the field is an array and deal with it, or 2) check whether it's actually an array first and then deal with it as such? It seems more convenient to just assume it's an array and write less code.

Related

Firebase Function Data Type of "data" in onCall function

I thought I had a simple question, but it seems to be somewhat harder, and the documentation does not help a lot.
What exactly is the type of data in functions.https.onCall((data, context) => {})?
I thought it varied between a simple value, a map, or a list.
But even if I call the function with a map object and try to delete a key from it, it fails because it isn't a map.
It also can't be immutable, and casting it to a map doesn't work either.
So whatever it is, I just want to remove a key from it. Does anyone know the data type so I can find the correct function?
As #Delwinn pointed out, the data object seems to be a plain JSON object (not a map).
to delete a value from this object, a plain line like
delete json[key]
will do the job.
And yes, 'delete' is written like an operator and not a function.

How do I handle reads and writes of objects containing a server timestamp in Firebase?

I'll explain.
I've been stuck with figuring out how to handle timestamps in Firebase using FieldValue.serverTimestamp().
So let's assume I have an object called question, and I want the object to contain a server-stamped timestamp. Is this how the class should look (the timestamp part is the only important part)?
class Question(
    val id: String,
    val title: String,
    val details: String,
    val author_ID: String,
    val timestamp: FieldValue,
) {
    constructor() : this(
        "",
        "",
        "",
        "",
        FieldValue.serverTimestamp()
    )
}
And then I'll set it like this?
val mQuestion = Question("id", "title", "details", "author", FieldValue.serverTimestamp())
db.collection("questions").document().set(mQuestion)
Is this the correct way to go?
If so, how do I handle the read? When the data is read back, the timestamp field will hold a Date value, which crashes deserialization because a Date can't be converted to a FieldValue.
Do I need two classes for each type of object, one for reading and one for writing? It doesn't feel right.
I was also thinking maybe the timestamp in the class could be of type Date, and I'd upload it empty, and a Cloud Function would write the date immediately. I feel like this might work, but it doesn't feel efficient.
The automatic serialization and deserialization is mostly (from my perspective) a convenience for common read and write operations. I don't see it as a one-size-fits-all mechanism for all reads and writes that could be performed. BOTTOM LINE: If the convenience layer doesn't work for you, then don't use it.
What you're trying to do with FieldValue.serverTimestamp() seems like one of the outlying cases where convenience is not met, since that value has to be determined on the server and not on the client. As it's implemented, the client and server can't agree on a specific type that applies to both reads and writes. If the client wants the server to write a current timestamp, it has to send a token to indicate that, not an actual typed timestamp.
You could certainly implement different types for reading and writing, and that's OK. Or, you can take control of the serialization by passing and parsing Map values, which would probably be more common (and more efficient, as it doesn't involve reflection). In short, I don't think there is an easy way out with the currently strongly typed system.
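One escape hatch worth mentioning (my addition, not part of the answer above) is Firestore's @ServerTimestamp annotation: declare the field as a nullable Date, and the SDK substitutes the server time when the stored value is null on write, while reads deserialize back into a Date. A minimal sketch, assuming the com.google.firebase.firestore.ServerTimestamp annotation from the Android SDK:

```kotlin
import com.google.firebase.firestore.ServerTimestamp
import java.util.Date

// Sketch: one class usable for both reads and writes.
// Writing with timestamp = null lets the server stamp the time;
// reading deserializes the stored timestamp back into a Date.
data class Question(
    val id: String = "",
    val title: String = "",
    val details: String = "",
    val author_ID: String = "",
    @ServerTimestamp val timestamp: Date? = null
)
```

This avoids the two-class split for the common case, at the cost of the field being nullable on the client.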

Apollo / GraphQl - Type must be Input type

Reaching out to you all as I am in the process of learning and integrating Apollo and GraphQL into one of my projects. So far it has gone OK, but now I am trying to write some mutations, and I am struggling with input types versus query types. I feel like it's more complicated than it should be, so I am looking for advice on how to handle my situation. Examples I found online always use very basic schemas, but reality is more complex: my schema is quite big and looks as follows (I'll copy just a part):
type Calculation {
  _id: String!
  userId: String!
  data: CalculationData
  lastUpdated: Int
  name: String
}
type CalculationData {
  Loads: [Load]
  validated: Boolean
  x: Float
  y: Float
  z: Float
  Inputs: [Input]
  metric: Boolean
}
Then Inputs and Loads are defined, and so on...
For this I want a mutation to save the "Calculation", so in the same file I have this:
type Mutation {
  saveCalculation(data: CalculationData!, name: String!): Calculation
}
My resolver is as follow:
export default resolvers = {
  Mutation: {
    saveCalculation(obj, args, context) {
      if (context.user && context.user._id) {
        const calculationId = Calculations.insert({
          userId: context.user._id,
          data: args.data,
          name: args.name
        })
        return Calculations.findOne({ _id: calculationId })
      }
      throw new Error('Need an account to save a calculation')
    }
  }
}
Then my mutation is the following :
import gql from 'graphql-tag';
export const SAVE_CALCULATION = gql`
  mutation saveCalculation($data: CalculationData!, $name: String!) {
    saveCalculation(data: $data, name: $name) {
      _id
    }
  }
`
Finally I am using the Mutation component to try to save the data:
<Mutation mutation={SAVE_CALCULATION}>
  {(saveCalculation, { data }) => (
    <div onClick={() => saveCalculation({ variables: { data: this.state, name: 'name calcul' } })}>SAVE</div>
  )}
</Mutation>
Now I get the following error :
[GraphQL error]: Message: The type of Mutation.saveCalculation(data:)
must be Input Type but got: CalculationData!., Location: undefined,
Path: undefined
From my research and some other SO posts, I get that I should define an input type in addition to the query type, but input types can only have scalar fields, while my schema depends on other schemas (which are not scalar). Can I create input types that depend on other input types, and so on, until the last one has only scalar fields? I am kind of lost, because it seems like a lot of redundancy. I would very much appreciate some guidance on best practice. I am convinced Apollo/GraphQL could be a great help to my project over time, but I have to admit it is more complicated than I thought to implement when the schemas are a bit complex. Online examples generally stick to a String and a Boolean.
From the spec:
Fields may accept arguments to configure their behavior. These inputs are often scalars or enums, but they sometimes need to represent more complex values.
A GraphQL Input Object defines a set of input fields; the input fields are either scalars, enums, or other input objects. This allows arguments to accept arbitrarily complex structs.
In other words, you can't use regular GraphQLObjectTypes as the type for an GraphQLInputObjectType field -- you must use another GraphQLInputObjectType.
When you write out your schema using SDL, it may seem redundant to have to create a Load type and a LoadInput input, especially if they have the same fields. However, under the hood, the types and inputs you define are turned into very different classes of object, each with different properties and methods. There is functionality that is specific to a GraphQLObjectType (like accepting arguments) that doesn't exist on an GraphQLInputObjectType -- and vice versa.
Trying to use one in place of the other is kind of like trying to put a square peg in a round hole. "I don't know why I need a circle. I have a square. They both have a diameter. Why do I need both?"
Outside of that, there's a good practical reason to keep types and inputs separate. That's because in plenty of scenarios, you will expose plenty of fields on the type that you won't expose on the input.
For example, your type might include derived fields that are actually a combination of the underlying data. Or it might include fields that represent relationships with other data (like a friends field on a User). In both these cases, it wouldn't make sense to make those fields part of the data that's submitted as an argument for some field. Likewise, you might have some input field that you wouldn't want to expose on its type counterpart (a password field comes to mind).
Yes, you can:
The fields on an input object type can themselves refer to input object types, but you can't mix input and output types in your schema. Input object types also can't have arguments on their fields.
Input types are meant to be defined in addition to normal types. Usually they'll have some differences, eg input won't have an id or createdAt field.
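Concretely, the input counterpart for the schema above could look like this (a sketch: CalculationData's fields are copied from the question, but LoadInput and InputInput are made-up names for input definitions you'd write the same way):

```graphql
input CalculationDataInput {
  Loads: [LoadInput]
  validated: Boolean
  x: Float
  y: Float
  z: Float
  Inputs: [InputInput]
  metric: Boolean
}

type Mutation {
  saveCalculation(data: CalculationDataInput!, name: String!): Calculation
}
```

The client-side mutation document has to use the new name too: declare the variable as $data: CalculationDataInput! instead of $data: CalculationData!.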

Meteor publish null or un-named record set

Being new to JavaScript, and surely a baby in Meteor, I could not make sense of this part of the docs about Meteor.publish(name, func):
Arguments
name String
Name of the record set. If null, the set has no name, and the record set is automatically sent to all connected clients.
I take it that record set means a Meteor collection. If that is correct, then how can a publish action take place on a collection whose name is null, or a collection with no name, as stated? I mean, where is the collection to publish if the first parameter, that collection name, is null or does not exist? Thanks
The name parameter in Meteor.publish has absolutely nothing to do with the collection. While the convention is that you should have similar naming to what collection(s) you're using, you could literally call a publish function "asdjfsaidfj" and it would be valid. As yudap said, what data you're sending to the client is entirely determined in the function. You can also return data from multiple collections using an array:
return [
ExampleCollection.find(),
AnotherCollection.find()
];
If you publish data with a null name:
Meteor.publish(null, func)
it is essentially autopublish without the autopublish package. That means you don't need to subscribe, and you don't need to install the autopublish package. The data is available on the client, reactive, and usable in whatever template without subscribing.
where is the collection to publish? Whatever collection you want to autopublish. Just define it in func:
Meteor.publish(null, function () {
  return CollectionName.find({}, {
    /*
    sort: Sort specifier,
    skip: Number,
    limit: Number,
    fields: Field specifier,
    reactive: Boolean,
    transform: Function
    */
  });
});

Should I create pointers on struct field or on struct? Go

I'm wondering what the best practice on pointers is: should I define them on the struct or on its fields? I thought it made sense to define a pointer to the struct itself, but here is an example I find intriguing. If all the fields are pointers, why shouldn't I use a pointer to the entire struct instead, to get an address for each field?
type Tag struct {
	Tag     *string       `json:"tag,omitempty"`
	SHA     *string       `json:"sha,omitempty"`
	URL     *string       `json:"url,omitempty"`
	Message *string       `json:"message,omitempty"`
	Tagger  *CommitAuthor `json:"tagger,omitempty"`
	Object  *GitObject    `json:"object,omitempty"`
}
A sample of the struct content below
{
  "tag": "v0.0.1",
  "sha": "940bd336248efae0f9ee5bc7b2d5c985887b16ac",
  "url": "https://api.github.com/repos/octocat/Hello-World/git/tags/940bd336248efae0f9ee5bc7b2d5c985887b16ac",
  "message": "initial version\n",
  "tagger": {
    "name": "Scott Chacon",
    "email": "schacon@gmail.com",
    "date": "2011-06-17T14:53:35-07:00"
  },
  "object": {
    "type": "commit",
    "sha": "c3d0be41ecbe669545ee3e94d31ed9a4bc91ee3c",
    "url": "https://api.github.com/repos/octocat/Hello-World/git/commits/c3d0be41ecbe669545ee3e94d31ed9a4bc91ee3c"
  }
}
It is more efficient to have non-pointer fields, but in this case they have an odd reason to use pointers, discussed at the blog post "Go, REST APIs, and Pointers".
It looks like the struct you're talking about is defined here, in the go-github library. It makes every field a pointer so that it's trivial to pass nil for any subset of fields (just don't specify them). That way when you're constructing, say, a PATCH call to update something via the GitHub API, you can specify whether Description is just not relevant to your request (you're not updating the description) or whether you intend to set Description to "". The key thing is that "" and nil have different meanings in PATCH calls to their API.
If you have a similar desire to distinguish a zero string/struct/etc. from "not applicable to this object", you can also use pointers. If you don't need that, though, it's better not to make every field a pointer, because that will tend to make your memory usage patterns worse--little more RAM taken up, more cache misses, more stuff the GC needs to trace through, etc. An approach that doesn't add that layer of pointer indirection (but looks a tiny bit more verbose when writing code) is sql.NullString, which is just a struct with a bool and a string.
In GitHub's case, any performance impact of it isn't a huge deal--the time GitHub takes to respond to the Web request is going to dwarf any CPU-bound work their library does anyway.
