Creating a custom GraphQL schema for Drupal 9 empty fields which are paragraph references

I have a decoupled Drupal 9 site with Gatsby and GraphQL. On Drupal's side there is a node type called school with a field (field_components), an entity reference field that can hold an unlimited number of references to paragraphs. There are over 50 paragraph types on the site, but that specific field only accepts four (4) of them, and I am trying to define their types in GraphQL.
exports.createSchemaCustomization = ({ actions }) => {
  const { createTypes } = actions;
  const typeDefs = `
    type node__school implements Node {
      field_location: String
      field_type_r_a: Boolean
      relationships: node__schoolRelationships
    }

    type node__schoolRelationships {
      field_components: # this is where I need to define the 4 types
    }
  `;
  createTypes(typeDefs);
};
As you can see from the example above, I've written out 90% of the necessary schema, but I have no idea how to define the four paragraph types. I'm guessing that each individual paragraph type would be called paragraph__machine_name (e.g. paragraph__carousel), but since there are four (4) of them I don't know how to chain them (define them all).
Any ideas?

From the GraphQL docs:
Unions and interfaces are abstract GraphQL types that enable a schema field to return one of multiple object types.
In action (using paragraph entities):
const typeDefs = `
  type node__school implements Node {
    field_location: String
    field_type_r_a: Boolean
    relationships: node__schoolRelationships
  }

  union fieldContentParagraphUnion =
      paragraph__foo
    | paragraph__bar
    | paragraph__bat

  type paragraph__foo implements Node {
    field_1: String
  }

  type paragraph__bar implements Node {
    field_2: String
  }

  type paragraph__bat implements Node {
    field_3: String
  }

  type node__schoolRelationships {
    field_components: [fieldContentParagraphUnion] @link(from: "field_components___NODE")
  }
`
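To consume the union from a page query, each member type gets an inline fragment. A minimal sketch, assuming Gatsby exposes the node under the usual camelCased root field (nodeSchool here) and reusing the hypothetical paragraph__foo/bar/bat types from above:

import { graphql } from "gatsby";

// Hypothetical page query: each union member is handled with an inline fragment,
// and __typename tells you which paragraph type you got back.
export const query = graphql`
  {
    nodeSchool {
      relationships {
        field_components {
          __typename
          ... on paragraph__foo { field_1 }
          ... on paragraph__bar { field_2 }
          ... on paragraph__bat { field_3 }
        }
      }
    }
  }
`;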
This official Gatsby plugin helped me tame my schema-related issues: gatsby-plugin-schema-snapshot
Saves a minimal schema to file, adds the @dontInfer directive to all top-level types, and re-creates the schema from the saved type definitions during bootstrap. Use this plugin if you intend to lock down a project's GraphQL schema.
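For reference, here is roughly how it is enabled — a minimal sketch; the path and update options reflect the plugin's README as I remember it, so double-check them:

// gatsby-config.js — hedged sketch of enabling the schema snapshot plugin
module.exports = {
  plugins: [
    {
      resolve: "gatsby-plugin-schema-snapshot",
      options: {
        path: "schema.gql", // where the snapshot is written (assumed default)
        update: false,      // flip to true temporarily to regenerate the snapshot
      },
    },
  ],
};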

Related

References a sub model from parent in typescript using prisma client

I am learning Prisma and I can't figure out how to use the Prisma types correctly when the returned data includes a sub model.
For example, I have the following two tables
model Services {
  id             Int           @id @default(autoincrement())
  service_name   String        @db.VarChar(255)
  description    String        @db.MediumText
  overall_status ServiceStatus @default(OPERATIONAL)
  deleted        Boolean       @default(false)
  sub_services   SubServices[]
}

model SubServices {
  id             Int           @id @default(autoincrement())
  name           String        @db.VarChar(255)
  description    String        @db.MediumText
  current_status ServiceStatus @default(OPERATIONAL)
  service        Services?     @relation(fields: [service_id], references: [id], onDelete: Cascade)
  service_id     Int?
}
I am then pulling data from the Services model using the following:
const services = await prisma.services.findMany({
  where: {
    deleted: false
  },
  include: {
    sub_services: true
  }
});
On the client side I am then referencing the Services model, but the IDE isn't detecting that Services can include sub_services. I can use it and it works, but the IDE always shows a squiggly line as if the code is wrong; an example is below:
import { Services } from "@prisma/client";

const MyComponent: React.FC<{ service: Services }> = ({ service }) => {
  return (
    <>
      {service.sub_services.map((subService) => {
        // render each sub service here
      })}
    </>
  );
};
but in the above example sub_services is underlined with the error TS2339: Property 'sub_services' does not exist on type 'Services'.
So how would I type it so that the IDE can see that I can access sub_services from within the services model?
UPDATE
I found a way to do it, but I'm not sure if this is the correct way or not as I am creating a new type as below:
type ServiceWithSubServices<Services> = Partial<Services> & {
  sub_services: SubServices[]
}
and then change the const definition to the below
const ServiceParent: React.FC<{ service: ServiceWithSubServices<Services> }> = ({ service }) => {
Although this does seem to work, is this the right way to do it, or is there some more prisma specific that can do it without creating a new type.
In Prisma, by default only the scalar fields are included in the generated type. So, in your case for the Services type, all the scalar fields except sub_services would be included in the type. sub_services is not included because it's a relation field.
To include the relation fields, you would need to use Prisma.validator; the Prisma docs have a guide on generating types that include relations.
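For this schema that pattern would look roughly like the sketch below. Prisma.validator and Prisma.ServicesGetPayload are generated by the Prisma client, but the exact args type name (Prisma.ServicesArgs here) varies slightly between client versions, so treat it as an assumption:

import React from "react";
import { Prisma } from "@prisma/client";

// Validate the query shape once...
const serviceWithSubServices = Prisma.validator<Prisma.ServicesArgs>()({
  include: { sub_services: true },
});

// ...then derive the exact payload type that
// findMany({ include: { sub_services: true } }) actually returns.
type ServiceWithSubServices = Prisma.ServicesGetPayload<typeof serviceWithSubServices>;

const ServiceParent: React.FC<{ service: ServiceWithSubServices }> = ({ service }) => {
  // sub_services is now part of the type, so no TS2339 error here
  return <>{service.sub_services.map((s) => s.name)}</>;
};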

What is a good practice for adding flow type annotations on project specific named exports

Please correct me if I am wrong. As far as I understand it up till now, type annotations can be added in a file itself or in libdefs (for shareable code).
For example, in a project-specific file helpers.js:
// @flow
export function square(value: number): number {
  return value * value
}

export function someOtherFunction(arg: string): string {
  return arg.trim()
}
etc...
And in a libdef helpers.js:
declare module 'helpers' {
  declare export function square(value: number): number;
  declare export function someOtherFunction(arg: string): string;
}
What would be a good practice for writing Flow annotations on project-specific code, and especially lots of code? For example, a helpers file exposing 20+ named exports; this is the point where I start to think having a libdef would be clearer to reason about.
And is it at all possible to use that libdef file as the single entry? I've fooled around a bit, and I always had to annotate in the file itself, even though I had added the libdef and told Flow through the config to include these libdefs.
In our project, we use the following approach:
// @flow
type SquareType = (value: number) => number;

export const square: SquareType = (value) => {
  return value * value;
};
So you can declare SquareType in the helpers.js file just above the function, or you can move it to a separate file and then import it into helpers.js.
Many third-party modules don't have types, or have only TypeScript types. Libdefs exist for exactly this reason: to declare types for untyped modules!
More info: https://flow.org/en/docs/libdefs/

How to implement redux-search

I am trying to implement a search filter in my application, which uses react/redux, using redux-search. The first gotcha I hit is when I try to add the store enhancer as in the example.
// Compose :reduxSearch with other store enhancers
const enhancer = compose(
  applyMiddleware(...yourMiddleware),
  reduxSearch({
    // Configure redux-search by telling it which resources to index for searching
    resourceIndexes: {
      // In this example Books will be searchable by :title and :author
      books: ['author', 'title']
    },
    // This selector is responsible for returning each collection of searchable resources
    resourceSelector: (resourceName, state) => {
      // In our example, all resources are stored in the state under a :resources Map
      // For example "books" are stored under state.resources.books
      return state.resources.get(resourceName)
    }
  })
)
I understand everything up to the resourceSelector. I tried to take a deep dive into the example to see how it works, but I can barely see how the resources are generated, and the last line returns an error: Cannot read property 'get' of undefined.
My state object looks like this:
state: {
  // books is an array of objects... each object represents a book
  books: [
    // a book has these properties
    { name, id, author, datePublished }
  ]
}
Any help from anyone who understands redux-search would be appreciated.
If this line:
return state.resources.get(resourceName)
is causing this error:
Cannot read property 'get' of undefined
That indicates that state.resources is not defined. And sure enough, your state doesn't define a resources attribute.
The examples were written with the idea in mind of using redux-search to index many types of resources, e.g.:
state: {
  resources: {
    books: [...],
    authors: [...],
    // etc
  }
}
The solution to the issue you've reported would be to either:
A: Add an intermediary resources object (if you think you might want to index other things in the future and you like that organization).
B: Replace state.resources.get(resourceName) with state[resourceName] or similar (see the sketch below).
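A minimal sketch of option B, adapted to the state shape from the question (note the question's book objects have a name field rather than title, so the indexed field list here is an assumption):

const enhancer = compose(
  applyMiddleware(...yourMiddleware),
  reduxSearch({
    resourceIndexes: {
      // index the fields the question's book objects actually carry
      books: ['author', 'name']
    },
    // plain-object state: look the collection up by key instead of calling .get()
    resourceSelector: (resourceName, state) => state[resourceName]
  })
)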

Objects with multiple key columns in realm.io

I am writing an app using the Realm.io database that will pull data from another, server-side database. The server database has some tables whose primary keys are composed of more than one field. Right now I can't find a way to specify a multiple-column key in Realm, since the primaryKey() function only returns a String optional.
This one works:
// index
override static func primaryKey() -> String? {
    return "login"
}
But what I would need looks like this:
// index
override static func primaryKey() -> [String]? {
    return ["key_column1", "key_column2"]
}
I can't find anything in the docs on how to do this.
Supplying multiple properties as the primary key isn't possible in Realm. At the moment, you can only specify one.
Could you potentially use the information in those two columns to create a single unique value that you could use instead?
It's not natively supported but there is a decent workaround. You can add another property that holds the compound key and make that property the primary key.
Check out this conversation on GitHub for more details: https://github.com/realm/realm-cocoa/issues/1192
You can do this, conceptually, by using a hash derived from two or more fields.
Let's assume the two fields name and lastname are used as the compound primary key. Here is a sample pseudo code:
StudentSchema = {
  name: 'student',
  primaryKey: 'pk',
  properties: {
    pk: 'string',
    name: 'string',
    lastname: 'string',
    schoolno: 'int'
  }
};
...

// Create a hash string derived from the related fields.
// Combine the fields in a fixed order before hashing.
myname = "Uranus";
mylastname = "SUN";
myschoolno = 345;
hash_pk = Hash(Concat(myname, mylastname)); /* Hash(myname + mylastname) */

// Create a student object
realm.create('student', { pk: hash_pk, name: myname, lastname: mylastname, schoolno: myschoolno });
If an ObjectId is necessary, see "Convert string to ObjectID in MongoDB".

Different RavenDB collections with documents of same type

In RavenDB I can store objects of type Products and Categories and they will automatically be located in different collections. This is fine.
But what if I have 2 logically completely different types of products that use the same class? Or, instead of 2, I could have any number of different types of products. Would it then be possible to tell Raven to split the product documents up into separate collections, say based on a string property available on the Product class?
Thank you in advance.
EDIT:
I have created and registered the following StoreListener that changes the collection for documents at store time. This results in the documents correctly being stored in different collections, giving a nice, logical grouping of the documents.
public class DynamicCollectionDefinerStoreListener : IDocumentStoreListener
{
    public bool BeforeStore(string key, object entityInstance, RavenJObject metadata)
    {
        var entity = entityInstance as EntityData;
        if (entity == null)
            throw new Exception("Cannot handle object of type " + entityInstance.GetType());

        metadata["Raven-Entity-Name"] = RavenJToken.FromObject(entity.TypeId);
        return true;
    }

    public void AfterStore(string key, object entityInstance, RavenJObject metadata)
    {
    }
}
However, it seems I have to adjust my queries too in order to get the objects back. A typical query of mine used to look like this:
session => session.Query<EntityData>().Where(e => e.TypeId == typeId)
with 'typeId' being the name of the new Raven collection (the name of the entity type is also saved as a separate field on the EntityData object).
How would I go about querying my objects back? I can't find the spot where I can define my collection at runtime prior to executing my query.
Do I have to execute some raw Lucene queries? Or can I maybe implement a query listener?
EDIT:
I found a way of storing, querying and deleting objects using dynamically defined collections, but I'm not sure this is the right way to do it:
Document store listener: (I use the class defined above)
Method resolving index names:
private string GetIndexName(string typeId)
{
    return "dynamic/" + typeId;
}
Store/Query/Delete:
// Storing
session.Store(entity);

// Querying
var someResults = session.Query<EntityData>(GetIndexName(entity.TypeId))
    .Where(e => e.EntityId == entity.EntityId);

var someMoreResults = session.Advanced.LuceneQuery<EntityData>(GetIndexName(entityTypeId))
    .Where("TypeId:Colors AND Range.Basic.ColorCode:Yellow");

// Deleting
var loadedEntity = session.Query<EntityData>(GetIndexName(entity.TypeId))
    .Where(e => e.EntityId == entity.EntityId)
    .SingleOrDefault();

if (loadedEntity != null)
{
    session.Delete<EntityData>(loadedEntity);
}
I have the feeling it's getting a little dirty, but is this the way to store/query/delete when specifying collection names at runtime? Or am I trapping myself this way?
Stephan,
You can provide the logic for deciding on the collection name using:
store.Conventions.FindTypeTagName
This is handled statically, using the generic type.
If you want to make that decision at runtime, you can provide it using a DocumentStoreListener, as you have done.
