How to get away from non-capitalized key values for record construction?

I am using GraphQL, nexus-plugin-prisma, and Prisma to build a backend application with ReScript. The problem I face is that some columns start with a capital letter, and I want to type such schemas using records instead of objects (to make use of pattern matching).
But ReScript does not allow a capital letter as the first character of a record field name. Is there any way that I can get around this issue? Any help would be appreciated.

Usually when dealing with GraphQL, the best way to circumvent this issue is to make use of GraphQL's aliasing feature:
fragment Foo on foo {
  uncapitalizedAlias: CapitalizedName
}
Edit: I don't know if the records you're trying to use are defined beforehand or generated by your GraphQL client, but you have other, more general solutions when you want to bind to JS objects with capitalized names:
You can use @bs.as to change the name of the field (it works in both the OCaml/Reason and ReScript syntaxes):
type myGQLrecord = {
  @bs.as("CapitalizedName")
  uncapitalizedName: string,
}
Or you can directly use any identifier name you want for your field thanks to this feature of the ReScript syntax (it also works for value identifiers):
type myGQLrecord = {
  \"CapitalizedName": string,
}
In my opinion, it makes it a bit more cumbersome to use though.
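Either approach gives you back record pattern matching, which was the original motivation. A minimal sketch, assuming the myGQLrecord type with @bs.as defined above (the match arms are purely illustrative):
let describe = (r: myGQLrecord) =>
  switch r {
  | {uncapitalizedName: ""} => "empty name"
  | {uncapitalizedName: name} => "name: " ++ name
  }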

Related

GraphQL schema doc automatically displays input types with an "Input" suffix

I have added leangen/graphql-spqr as described in the readme.
Before, we had a custom implementation of GraphQL types, as in customtype.types.gql.
After the implementation everything works fine, except that types named e.g. OperatorInput show up in the autogenerated GraphQL doc as "OperatorInputInput".
I tried to change it like this in the declaration:
@GraphQLArgument(name = "OperatorInput", description = "Required fields for Operator") OperatorInput operator
But it wasn't applied.
Do you know any workaround?
This is intended. Keep in mind that in GraphQL's type system, input and output types are strictly different. Let's say you have this in Java:
public Book saveBook(Book in) {...}
The Book that is the return type and the Book that is the argument type are 2 different types in GraphQL, and must have unique names. So Input is added automatically to make the names unique.
If you're 100% sure you want to override this, register a custom TypeInfoGenerator via GraphQLSchemaGenerator#withTypeInfoGenerator. You can inherit from DefaultTypeInfoGenerator and override generateInputTypeName. BUT! Do pay attention that if you end up producing the same name for 2 different types, all hell breaks loose. So maybe only drop the suffix Input if the classname already ends with Input and never ever use such classes for output.
Btw, @GraphQLArgument sets the name of the argument, not the name of the argument's type.
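A hedged sketch of the override described above. The class and method names come from the answer itself, but the package paths and the exact method signature are assumptions based on recent graphql-spqr versions, so check them against the version you use:
import io.leangen.graphql.GraphQLSchemaGenerator;
import io.leangen.graphql.metadata.messages.MessageBundle;
import io.leangen.graphql.metadata.strategy.type.DefaultTypeInfoGenerator;

import java.lang.reflect.AnnotatedType;

public class NoDoubleSuffixTypeInfoGenerator extends DefaultTypeInfoGenerator {

    @Override
    public String generateInputTypeName(AnnotatedType type, MessageBundle messageBundle) {
        String name = super.generateInputTypeName(type, messageBundle);
        // Only collapse the duplicated suffix (e.g. "OperatorInputInput" -> "OperatorInput"),
        // so distinct input and output types still get unique names.
        return name.endsWith("InputInput")
                ? name.substring(0, name.length() - "Input".length())
                : name;
    }
}

// Registration (other generator configuration omitted):
// new GraphQLSchemaGenerator()
//         .withTypeInfoGenerator(new NoDoubleSuffixTypeInfoGenerator())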

Can a Cerberus schema have an arbitrary name for the base dict?

I need to validate Python dicts that will have arbitrary names. When I attempt to validate them using Cerberus, I get unknown field errors. Is there a way of allowing arbitrary dict names?
I was thinking that keysrules might work, but it appears to only work on items within the base dict.
{'account_created': {'category': 'Accounts',
                     'conversion_event': True,
                     'description': 'A new account is created'}}
I would like to be able to use an arbitrary name where account_created is in this dict.
Assuming you don't need to validate that base key: I recently attempted to answer a similar question on the Cerberus GitHub. My suggestion was to use a dynamically formed schema, as sketched below. You could follow the GitHub issue thread and see if anyone there comes up with a better answer.
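A minimal sketch of that idea. The inner rules are assumptions inferred from the example document, and the helper name is illustrative:
from cerberus import Validator

# Rules for the inner dict, based on the fields in the example document.
event_rules = {
    'type': 'dict',
    'schema': {
        'category': {'type': 'string'},
        'conversion_event': {'type': 'boolean'},
        'description': {'type': 'string'},
    },
}

def validate_event(document):
    # Form the schema dynamically, keyed by whatever arbitrary
    # base name the incoming dict happens to use.
    schema = {key: event_rules for key in document}
    v = Validator(schema)
    return v.validate(document), v.errors

ok, errors = validate_event({'account_created': {'category': 'Accounts',
                                                 'conversion_event': True,
                                                 'description': 'A new account is created'}})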

How do you manage adding new attributes on existing objects when using firebase?

I have an app using React + Redux and coupled with Firebase for the backend.
Often, I will want to add new attributes to existing objects.
When I do, existing objects won't get the attribute until they're modified with the new version of the app that handles those new attributes.
For example, let's say I have a /categories/ node containing objects such as this:
{
  name: "Medical"
}
Now let's say I want to add an icon field with a default value.
Is it possible to update all categories at once so that field always exists with the default value?
Or do you handle this in the client code?
Right now I'm always testing the values to see whether they're there or not, but it doesn't seem like a very good way to go about it. I'd like to have a single place to define defaults.
It seems like having classes for each object type would be interesting but I'm not sure how to go about this in Redux.
Do you just use the reducer to turn all categories into class instances when you fetch them for example? I'm worried this would be heavy performance wise.
Any write operation to the Firebase Database requires that you know the exact path to the node that you're writing.
There is no built-in operation to bulk update nodes with a path that is only partially known.
You can either keep your client-side code robust enough to handle the missing properties, or you can indeed run a migration script to add the new property to each relevant node. But since that script will have to know the exact path of each node to write, it will likely first have to read/query the database to determine those paths. Depending on the number of items to update, it could possibly use multi-location updates after that to update multiple nodes in one call. E.g.
firebase.database().ref("categories").update({
  "idOfMedicalCategory/icon": "newIconForMedical",
  "idOfCommercialCategory/icon": "newIconForCommercial",
  "idOfTechCategory/icon": "newIconForTech"
})
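If you do go the migration route, here's a minimal sketch of such a script in the same legacy web SDK style as the snippet above (the "default-icon" value is a placeholder, not a real default):
// Read all categories once, then write the new property in a single
// multi-location update for every category that is missing it.
firebase.database().ref("categories").once("value").then(function (snapshot) {
  var updates = {};
  snapshot.forEach(function (category) {
    // Only add the property where it's missing.
    if (!category.hasChild("icon")) {
      updates[category.key + "/icon"] = "default-icon"; // placeholder default
    }
  });
  return firebase.database().ref("categories").update(updates);
});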

How to strip/ignore unused attributes when saving a model object?

I am sending Angular model objects to Bookshelf to save, but they may carry extraneous attributes that aren't in the database. When I save, Bookshelf tries to save all attributes and complains that it can't find the extra ones.
What is the recommended way to handle this? I'm sure I could set up an array of whitelisted attributes and strip the object manually, but is there another way?
I.e., is there a built-in way to ignore unused attributes? Or is there a way to query the DB for the array of columns, then use that to strip my object?
You may use parse() in addition to an array of permitted attributes, as the Ghost team did.
var _ = require('lodash');

var Model = bookshelf.Model.extend({
  permittedAttributes: ['field1', 'field2', 'field3'],
  parse: function (attrs) {
    return _.pick(attrs, this.permittedAttributes);
  }
});
If you define parse() in a base model, all models that extend it will behave the same way.
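A small sketch of that base-model pattern, assuming a configured bookshelf instance and lodash as above (the User model and its fields are hypothetical):
var BaseModel = bookshelf.Model.extend({
  parse: function (attrs) {
    // Each subclass declares its own whitelist via permittedAttributes.
    return _.pick(attrs, this.permittedAttributes);
  }
});

var User = BaseModel.extend({
  tableName: 'users',
  permittedAttributes: ['id', 'name', 'email']
});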
Tooting my own horn here, but I encountered this problem so many times that I created a plugin for bookshelf. Didn't want to have to manually define permitted attributes every single time.
https://www.npmjs.com/package/bookshelf-strip-save

SQLAlchemy, reflection, different backends and table/column case insensitivity

Intro: I'm writing a web interface using SQLAlchemy reflection that supports multiple databases. It turns out that the authors of the application defined the PostgreSQL schema with lowercase tables/columns, e.g. job.jobstatus, while the SQLite schema has mixed case, e.g. Job.JobStatus. I'm using DeclarativeReflectedBase from the examples to combine reflection and the declarative style.
The issue: configure SQLAlchemy to work with tables/columns case-insensitively under reflection.
What I have done so far:
I have changed the DeclarativeReflectedBase.prepare() method to pass quote=False into Table.__init__.
What is left to be solved:
relationship definitions still have to obey the case when configuring joins, e.g. primaryjoin="Job.JobStatus==Status.JobStatus"
configure __tablename__ based on engine type
The question: are my assumptions correct, or is there a more straightforward way? Maybe I could tell reflection to reflect everything as lowercase and all the problems would be gone.
You'd probably want to look into defining a .key in lowercase on each Column; that way you can refer to columns in lowercase within application code. You can use the column_reflect event (see http://docs.sqlalchemy.org/en/latest/core/events.html#schema-events) to define this key as a lowercase version of the .name, as sketched below.
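A minimal sketch of that listener, following the schema-events docs linked above:
from sqlalchemy import MetaData, Table, event

metadata = MetaData()

@event.listens_for(Table, "column_reflect")
def lowercase_column_keys(inspector, table, column_info):
    # Expose every reflected column to application code under a
    # lowercase key, regardless of its case in the database.
    column_info['key'] = column_info['name'].lower()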
Then, when you reflect the table, I'd just do something like this:
def reflect_table(name, engine):
    if engine.dialect.name == 'postgresql':
        name = name.lower()
    # Table() needs a MetaData collection; autoload triggers reflection.
    return Table(name, metadata, autoload=True, autoload_with=engine)

my_table = reflect_table("MyTable", engine)
I think that might cover it.
