MikroORM: How to ignore duplicate entries on flush() - mariadb

Is there a way to ignore duplicate entries (typically a 1062 SQL error on MySQL/MariaDB), when calling flush()?
If the entry exists, is there a way to get the EntityManager to use the existing row to override the Entity with the new reference and continue with the Unit Of Work?
If not, would the best solution be to write my own function to findOne() and then add to the entity graph?
Thanks for your help!
Edit 1:
For example, is there any downside to simply not persisting if the entity is found? Maybe a function something like this...
/**
 *
 * @param em an EntityManager
 * @param ent an Entity that is already populated and ready to be persisted.
 * @param collision field and value for potential duplicate.
 * @returns the referenced entity to be used in an entity graph if necessary
 */
async persistOrIgnore<T extends AnyEntity<T>>(em: EntityManager, ent: Loaded<T>, collision: ObjectQuery<T>): Promise<Loaded<T>> {
  const entName = ent.__meta?.className;
  const found = await em.findOne(entName, collision);
  if (!found) em.persist(ent);
  return found || ent;
}
Then use like this:
const order = new Order(); //parent to persist
let contents = new OrderContents(); // new entity
contents.checksum = 'xxx'; // unique field that will produce a 1062 SQL error
// [...add other properties you would like to persist.]
contents = await persistOrIgnore(em, contents, { checksum: contents.checksum });
order.contentsId = contents;
em.persist(order);
// [...do more work, e.g. add more to the parent order entity]
await em.flush();
Edit 2:
There is now upsert functionality in v5.5.0:
https://mikro-orm.io/docs/entity-manager#upsert
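For reference, a minimal sketch of what that can look like with the OrderContents entity from Edit 1 (assuming checksum is the unique property to match on):
// inserts the row, or returns the managed entity mapped to the existing one on a unique-key collision
const contents = await em.upsert(OrderContents, { checksum: 'xxx' });
order.contentsId = contents;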

You can't; it is your responsibility to keep a valid state in the context (identity map). You should use the QueryBuilder for upserts, as there is support for on conflict queries.
const qb = em.createQueryBuilder(User)
  .insert({
    email: 'ignore@example.com',
    name: 'John Doe',
  })
  .onConflict('email').ignore();
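Note that building the query builder does not run anything by itself; assuming the qb above, you would still execute it, e.g. with:
await qb.execute();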
See more examples in the QB tests:
https://github.com/mikro-orm/mikro-orm/blob/master/tests/QueryBuilder.test.ts#L1327
Or check the knex documentation on this, as the API is designed based on it (and basically just wraps it):
https://knexjs.org/guide/query-builder.html#onconflict

Related

References a sub model from parent in typescript using prisma client

I am learning Prisma and I can't figure out how to use the Prisma types correctly when the returned data includes a sub-model.
For example, I have the following two tables:
model Services {
  id             Int           @id @default(autoincrement())
  service_name   String        @db.VarChar(255)
  description    String        @db.MediumText
  overall_status ServiceStatus @default(OPERATIONAL)
  deleted        Boolean       @default(false)
  sub_services   SubServices[]
}

model SubServices {
  id             Int           @id @default(autoincrement())
  name           String        @db.VarChar(255)
  description    String        @db.MediumText
  current_status ServiceStatus @default(OPERATIONAL)
  service        Services?     @relation(fields: [service_id], references: [id], onDelete: Cascade)
  service_id     Int?
}
I am then pulling data from the Services model using the following:
const services = await prisma.services.findMany({
  where: {
    deleted: false
  },
  include: {
    sub_services: true
  }
});
On the client side I am then referencing the Services model, but the IDE isn't detecting that Services can include sub_services. I can use it and it works, but the IDE always shows a squiggly line as if the code were wrong; an example is below:
import { Services } from "@prisma/client";

const MyComponent: React.FC<{ service: Services }> = ({ service }) => {
  return (
    <>
      {service.sub_services.map(service => {
      })}
    </>
  );
};
But in the above example sub_services is underlined with the error TS2339: Property 'sub_services' does not exist on type 'Services'.
So how would I type it in a way that the IDE can see that I can access sub_services from within the Services model?
UPDATE
I found a way to do it, but I'm not sure whether this is the correct way, as I am creating a new type as below:
type ServiceWithSubServices<Services> = Partial<Services> & {
  sub_services: SubServices[]
}
and then change the const definition to the below
const ServiceParent : React.FC<{service: ServiceWithSubServices<Services>}> = ({service}) => {
Although this does seem to work, is this the right way to do it, or is there something more Prisma-specific that can do it without creating a new type?
In Prisma, by default only the scalar fields are included in the generated type. So, in your case for the Services type, all the scalar fields except sub_services would be included in the type. sub_services is not included because it's a relation field.
To include the relation fields, you would need to use Prisma.validator; here's a guide on generating types that include the relation field.
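If it helps, a sketch of that approach using the models above (illustrative only; the exact *Args helper name can vary between Prisma versions):
import { Prisma } from "@prisma/client";

// Describe the query shape once...
const serviceWithSubServices = Prisma.validator<Prisma.ServicesArgs>()({
  include: { sub_services: true },
});

// ...then derive a type that also carries the sub_services relation.
type ServiceWithSubServices = Prisma.ServicesGetPayload<typeof serviceWithSubServices>;
ServiceWithSubServices can then be used for the React.FC props instead of the hand-rolled Partial<Services> type from the question.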

Repository in PRE_WRITE event returns query data, not saved data

During a PUT call to an item I need to get the current saved values in order to compare them to request params.
Say the PUT call contains a name parameter that is different from the currently saved one.
I thought getting the entity with $repository->findOneBy would return the saved value, but it doesn't; I'm getting the PUT param value instead.
The setup is taken from https://api-platform.com/docs/core/events :
const ALLOWED_METHOD = Request::METHOD_PUT;

public static function getSubscribedEvents()
{
    return [
        KernelEvents::VIEW => [
            ['preWriteWorkflow', EventPriorities::PRE_WRITE],
        ],
    ];
}

public function preWriteWorkflow(GetResponseForControllerResultEvent $event)
{
    $entity = $event->getControllerResult();
    if (!($entity instanceof MyEntity)) {
        return;
    }

    $route = "/{$entity->getId()}";
    $result = $this->checkRequestFromControllerResult($event, $route);
    if (!$result) {
        return;
    }

    // Getting entity from repository in order to get the currently saved value
    $savedEntity = $this->MyEntityRepository->findOneBy(['id' => $entity->getId()]);

    // Both will return the Name value of the PUT call.
    // Shouldn't $savedEntity return the currently saved name?
    $entity->getName();
    $savedEntity->getName();
}
What is the reason behind this behavior? Is there a way to get eventArgs injected in this method so that I can use getEntityChangeSet or hasChangedField?
What is the reason behind this behavior?
This is Doctrine behaviour. Once you've fetched an entity, the instance is stored in the identity map and always returned. Given that, you have one and only one instance of your entity during the request's lifecycle.
$event->getControllerResult() === $repository->findBy($id); //true !
Roughly, API Platform calls Doctrine and fetches your entity while executing the ReadListener. Because this is an object, Doctrine's find*() methods always return a reference to that same entity, even if it has been updated.
Yes, during a PUT request, the updated instance is the fetched one, in order to trigger Doctrine's update actions at the end of the request.
An easy way to keep an instance of the so-called previous object is to clone it before the deserialization event.
Note that this strategy is used by API Platform for the security_post_denormalize and previous_object security attributes.
EDIT
Working on a similar use case, I've found that the ReadListener stores the current object within the Request under the "data" key, whereas the previous object is stored within the "previous_data" key.
$entity = $request->get('data');
$previousEntity = $request->get('previous_data'); // This is a clone.

How to get Google UserId from active user session in App Maker?

Is there a way to get the "User Google Id" from the session in App Maker? The documentation only mentions how to retrieve the email of the logged-in user, Session.getActiveUser().getEmail(), but nowhere does it say how to get the id. I need this because the user email might sometimes change, so I need the user id to keep track of users and related permission tasks. Or is there something I'm missing here about how this should be implemented?
An even easier way to find the Google Id is to simply use the Directory model. Although it's mentioned in the documentation that there is a way to get the current signed-in user's id (which is the Google Id), it's not clearly stated how; maybe the documentation could be improved here. Another problem is that on many occasions the email of the current active user is referred to as the id, for example in the deprecated method Session.getActiveUser().getUserLoginId(). Anyway, this is a proper way to get the id:
var query = app.models.Directory.newQuery();
query.filters.PrimaryEmail._equals = Session.getActiveUser().getEmail();
var result = query.run();
var GoogleId = result[0]._key;
So with this GoogleId you can safely relate different models to each other and not worry that database integrity might break if an already-referenced user email is changed.
Relating the different models can be done simply by creating a model that acts as a wrapper around the Directory model and storing the GoogleId in it, then linking that model to other models where you want to track user-related data, because unfortunately we cannot directly link the Directory model to other models.
A team member has figured it out. This should be done using Apps Script, which works within the App Maker environment using a server-side script.
var GoogleUser = (function () {
  /**
   *
   * @param {string} email
   */
  function getUserObjByEmail(email) {
    // Same as using AdminDirectory class.
    var apiUrl = "https://www.googleapis.com/admin/directory/v1/users/"+email+"?fields=id";
    var token = ScriptApp.getOAuthToken();
    var header = {"Authorization": "Bearer " + token};
    var options = {
      "method": "GET",
      "headers": header
    };
    var response = JSON.parse(UrlFetchApp.fetch(apiUrl, options));
    return response;
  }

  /**
   *
   * @param {string} email - User email.
   */
  function getIdByEmail(email) {
    return getUserObjByEmail(email)['id'];
  }

  var publicApi = {
    getIdByEmail: getIdByEmail
  };

  return publicApi;
})();
Note that the request built with var apiUrl = "https://www.googleapis.com/admin/directory/v1/users/"+email+"?fields=id"; is not going to be called asynchronously, because it is already happening on the server.
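For completeness, a usage sketch from an App Maker server script (variable name is illustrative):
var googleId = GoogleUser.getIdByEmail(Session.getActiveUser().getEmail());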
Is this a dup of this question?
I think this will solve your problem, even though it's a bit of a hack.

When utilizing the .push method can I write a copy of the id to the object?

I'm using the .push() method on Firebase to write new records. I'd like to save the key where the new record is saved to the record itself, at the id key. Currently I do this in two operations: first push the record, then update it using the returned ref. Can I do this in one write? Or does it not matter?
If you invoke the Firebase push() method without arguments it is a pure client-side operation.
var newRef = ref.push(); // this does *not* call the server
You can then add the key() of the new ref to your item:
var newItem = {
  name: 'anauleau',
  id: newRef.key()
};
And write the item to the new location:
newRef.set(newItem);
There's no method to do this in one operation. However, it typically does not matter, because you can always get the push id from the .key() method on the DataSnapshot.
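For example, with the legacy SDK used in this answer, something along these lines (listener body is illustrative):
ref.on('child_added', function (snapshot) {
  var pushId = snapshot.key(); // the same id that push() generated for this child
});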
But there's nothing wrong with storing the push id either. So you could create a function on the Firebase prototype:
Firebase.prototype.pushWithId = function pushWithId(data) {
  var childRef = this.push();
  data.key = childRef.key();
  childRef.update(data); // or .set() depending on your case
  return childRef;
};
var ref = new Firebase('<my-firebase-app>');
ref.pushWithId({ name: 'Alice' });
Take caution when modifying the prototype of constructors you do not own. In this case you'll likely be fine: the method does little, and there's not much chance that the Firebase SDK will gain a .pushWithId() method of its own.

Datanucleus Query: accessing transient collection after close

I have a specific query method in all my managers that must return a transient collection, but I want to CLOSE the query immediately after executing it.
tx.begin();
Query query=pm.newQuery(...);
Collection col=(Collection)query.execute();
pm.makeTransientAll(col,true);
query.close();
tx.commit();
The problem: the collection CANNOT be accessed after the query is closed (DN knows the identity?); otherwise it throws a "Query has been closed" error!
The solution: Create a COPY of the original collection!
Collection col=new ArrayList((Collection)query.execute());
But I want to avoid that... Even if it's a local copy and not a deep clone, it still allocates the space needed for the entire array of elements (so, at some point there is going to be 2x the allocated memory), and I would like to avoid that.
Am I missing something? Is there a way to avoid the creation of a copy?
Well, I found the reason for this behavior:
The query object returned (the collection) is an instance of org.datanucleus.store.rdbms.query.ForwardQueryResult,
which extends AbstractRDBMSQueryResult,
which extends AbstractQueryResult,
which extends AbstractList.
So I get an object that is a List implementation, and the query result is bound to that implementation.
/** The Result Objects. */
protected List resultObjs = new ArrayList();

/**
 * Method to return the results as an array.
 * @return The array.
 */
public synchronized Object[] toArray()
{
    assertIsOpen();
    advanceToEndOfResultSet();
    return resultObjs.toArray();
}
So, I cannot avoid the creation of a NEW array...
