Unable to save aggregate root with UUID id in SQLite

In my project I have the following entity which can be successfully saved in Postgres:
public class Aggregate {
    @Id
    private UUID id;
    private JsonNode data;
    // getters and setters omitted
}
When I try to save the same entity in SQLite I get the following exception:
org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.Integer] to type [java.util.UUID]
at org.springframework.core.convert.support.GenericConversionService.handleConverterNotFound(GenericConversionService.java:322) ~[spring-core-5.3.5.jar:5.3.5]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:195) ~[spring-core-5.3.5.jar:5.3.5]
at org.springframework.core.convert.support.GenericConversionService.convert(GenericConversionService.java:175) ~[spring-core-5.3.5.jar:5.3.5]
at org.springframework.data.mapping.model.ConvertingPropertyAccessor.convertIfNecessary(ConvertingPropertyAccessor.java:120) ~[spring-data-commons-2.4.6.jar:2.4.6]
at org.springframework.data.mapping.model.ConvertingPropertyAccessor.setProperty(ConvertingPropertyAccessor.java:63) ~[spring-data-commons-2.4.6.jar:2.4.6]
at org.springframework.data.jdbc.core.JdbcAggregateChangeExecutionContext.setIdAndCascadingProperties(JdbcAggregateChangeExecutionContext.java:337) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
at org.springframework.data.jdbc.core.JdbcAggregateChangeExecutionContext.populateIdsIfNecessary(JdbcAggregateChangeExecutionContext.java:305) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
at org.springframework.data.jdbc.core.AggregateChangeExecutor.execute(AggregateChangeExecutor.java:52) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
at org.springframework.data.jdbc.core.JdbcAggregateTemplate.store(JdbcAggregateTemplate.java:339) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
at org.springframework.data.jdbc.core.JdbcAggregateTemplate.save(JdbcAggregateTemplate.java:149) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
at org.springframework.data.jdbc.repository.support.SimpleJdbcRepository.save(SimpleJdbcRepository.java:60) ~[spring-data-jdbc-2.1.6.jar:2.1.6]
Table definition for SQLite:
create table aggregate
(
    id   text primary key,
    data text not null
);
For some reason Spring Data JDBC tries to set an id of type integer in the aggregate, although it's a) of another type and b) already there. I experimented with a different IdGenerator and with using without rowid in the table definition, but nothing changed.
I'm using Spring Data JDBC 2.1.6 and xerial/sqlite-jdbc 3.34.0.
How can I fix this?
Update to answer Jens Schauder's question: I forgot to mention the callback that sets the UUID appropriately. I also have two converters to translate between JsonNode and PGobject (not shown here).
@Component
public class AggregateCallback implements BeforeConvertCallback<Aggregate> {

    @Override
    public Aggregate onBeforeConvert(final Aggregate aggregate) {
        if (aggregate.getId() == null) {
            aggregate.setId(UUID.randomUUID());
        }
        return aggregate;
    }
}
The AggregateCallback is called regardless of the database, but with SQLite the exception is thrown after the callback has executed.

As @tammOr answered, using a release after Spring Data 2021.0.0-M5 fixes the problem.
Here is the explanation why and how.
When Spring Data JDBC saves the aggregate, the old version does the following:
1. It determines whether the aggregate is new. There are various ways it might do that, but the case that applies here is checking the id: it counts as empty when it is null for object types or 0 for numeric primitives.
2. Since the id is null, it decides the aggregate is new and an INSERT is necessary. Before performing it, it triggers some events, which @tammOr uses to set the id.
3. It then performs the INSERT.
4. Since the default strategy is to generate the id in the database, it tries to obtain the generated id from the JDBC driver after the insert. For some reason the SQLite driver actually returns a value here, of type Integer.
5. Spring Data JDBC then tries to convert that value to a UUID and fails.
With the later version, Spring Data JDBC realises in step 4 that it already has an id, does not ask the driver for a generated one, and skips step 5, so everything works.
This is the pull request that solved the problem: https://github.com/spring-projects/spring-data-jdbc/pull/939
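The decision sequence described above can be sketched in plain Java. This is a hypothetical simplification for illustration, not the real Spring Data JDBC API; the method names `isNew` and `askDriverForGeneratedKey` are invented here.

```java
import java.util.UUID;

// Hypothetical sketch of the save logic described above; the real internals
// are more involved, but the decision points are the same.
class SaveDecisionSketch {

    // An aggregate counts as new when its id is empty
    // (null for object types, 0 for numeric primitives).
    static boolean isNew(Object id) {
        return id == null || Integer.valueOf(0).equals(id);
    }

    // The fixed behaviour: only ask the JDBC driver for a generated key
    // when the id is still empty after the BeforeConvert callbacks ran.
    static boolean askDriverForGeneratedKey(Object idAfterCallbacks) {
        return idAfterCallbacks == null;
    }

    public static void main(String[] args) {
        Object id = null;
        boolean insert = isNew(id);                 // null id -> treat as INSERT
        id = UUID.randomUUID();                     // the callback assigns the UUID
        boolean ask = askDriverForGeneratedKey(id); // fixed behaviour: no driver lookup
        System.out.println(insert + " " + ask);     // prints "true false"
    }
}
```

The broken behaviour corresponds to asking the driver for a generated key unconditionally in the INSERT path, regardless of whether the id was already populated.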

Switching to Spring Data 2021.0.0-M5 as suggested by Jens Schauder in the comments solved the problem!

Related

D365FO x++ syscomputedcolumn table name

How can you use the SysComputedColumn class to retrieve a table or field name for an entity? This is fairly easy using a virtual field and the entity's postLoad method, something like:
public class SysDatabaseLogHeaderEntity extends common
{
    public void postLoad()
    {
        super();
        this.TableName = tableId2Name(this.table);
    }
}
but there's a rumour that virtual fields won't be supported in the upcoming Synapse Link for D365 FnO, so I want to know how to do this with computed columns...
https://learn.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/data-entities/data-entity-computed-columns-virtual-fields
SysComputedColumn is used to help create computed columns in views.
Supposing for some reason you want a column in which every row contains the string value "CustTable", you'd create a method (AX 2012 syntax):
public static server str TableNameColumn()
{
    return SysComputedColumn::returnLiteral(tableStr(CustTable));
}
and then you'd add a computed column to the view as outlined here: https://learn.microsoft.com/en-us/dynamicsax-2012/developer/walkthrough-add-a-computed-column-to-a-view
Note: hopefully this is a toy example; there is no reason to ever actually create this particular column, or really any fully static column.
View computed columns are essentially based on static server methods which return the SQL definition for the computed column; the SysComputedColumn class then has a bunch of helper methods that let you build those SQL strings without implementation-specific knowledge of the backend database, such as column names.
A complete description is beyond the scope of this comment, but the big one you'll use is SysComputedColumn::returnField(view,datasource,field) which gets the specified field from the specified datasource in the specified view. You want to use intrinsic functions for these parameters to keep your cross references valid (https://learn.microsoft.com/en-us/dynamicsax-2012/developer/intrinsic-functions).
There will be a lot you can't do, though. These methods are SQL only, so they cannot send individual rows into X++ business logic; you need to reconstruct that business logic in SQL, which can't always be done easily.
I like to keep these methods public so I can info() them out and test them in SQL directly. I'm guessing you can't access the SQL in D365, but looking at the string returned from your method can still help in troubleshooting.

Does an entry to DynamoDB depend on the model class of your application?

I have a model class with four member variables, like this:
A.java
1. Attribute-1
2. Attribute-2
3. Attribute-3
4. Attribute-4
Now I make entries to DynamoDB; they go through successfully and I can see them in the table.
Then I make a few changes to the model and add two more attributes:
5. Attribute-5
6. Attribute-6
In the application I also set the values of these two attributes.
Now when I try to insert an entry, I get this error:
com.amazonaws.services.dynamodbv2.model.AmazonDynamoDBException: The provided key element does not match the schema (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ValidationException;
What is causing the error?
I believe you are using the DynamoDBMapper class from the AWS SDK for Java with the save() method.
1) Firstly, yes, you need to set the key values in the model object when you update an existing item; save() either creates or updates the item.
2) Secondly, save() takes a save-behavior config. You need to set the behavior according to your use case:
public void save(T object,
                 DynamoDBSaveExpression saveExpression,
                 DynamoDBMapperConfig config)
UPDATE (default): UPDATE will not affect unmodeled attributes on a save operation, and a null value for a modeled attribute will remove it from that item in DynamoDB. Because of the limitation of the updateItem request, the implementation of UPDATE will send a putItem request when a key-only object is being saved, and it will send another updateItem request if the given key(s) already exists in the table.
UPDATE_SKIP_NULL_ATTRIBUTES: Similar to UPDATE, except that it ignores any null value attribute(s) and will NOT remove them from that item in DynamoDB. It also guarantees to send only one single updateItem request, no matter whether the object is key-only or not.
CLOBBER: CLOBBER will clear and replace all attributes, including unmodeled ones, (delete and recreate) on save. Versioned field constraints will also be disregarded. Any options specified in the saveExpression parameter will be overlaid on any constraints due to versioned attributes.
Reference link
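The three behaviors above can be illustrated with a plain-Java simulation over attribute maps. This is a hypothetical model for illustration only, not the AWS SDK implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical simulation of the three DynamoDBMapper save behaviors,
// treating an item as a map of attribute name -> value.
class SaveBehaviorSketch {

    // UPDATE: a null modeled attribute removes it; other attributes are merged.
    static Map<String, String> update(Map<String, String> item, Map<String, String> changes) {
        Map<String, String> result = new HashMap<>(item);
        changes.forEach((k, v) -> {
            if (v == null) result.remove(k); else result.put(k, v);
        });
        return result;
    }

    // UPDATE_SKIP_NULL_ATTRIBUTES: null attributes are simply ignored.
    static Map<String, String> updateSkipNulls(Map<String, String> item, Map<String, String> changes) {
        Map<String, String> result = new HashMap<>(item);
        changes.forEach((k, v) -> { if (v != null) result.put(k, v); });
        return result;
    }

    // CLOBBER: the stored item is replaced wholesale by the new object.
    static Map<String, String> clobber(Map<String, String> item, Map<String, String> changes) {
        Map<String, String> result = new HashMap<>();
        changes.forEach((k, v) -> { if (v != null) result.put(k, v); });
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> stored = Map.of("year", "1999", "title", "MyMovie");
        Map<String, String> changes = new HashMap<>();
        changes.put("products", "prod1"); // newly added attribute
        changes.put("title", null);       // attribute left unset in the model
        System.out.println(update(stored, changes));        // title removed
        System.out.println(updateSkipNulls(stored, changes)); // title kept
    }
}
```

The key point for the question: none of these behaviors changes the table's key schema, which is why the ValidationException above points at missing key values rather than at the new attributes.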
Sample Code:-
Movies movies = new Movies();
// setting the partition and sort key
movies.setYearKey(1999);
movies.setTitle("MyMovie BOOL");
// new attribute added
movies.setProducts(Stream.of("prod1").collect(Collectors.toSet()));

DynamoDBMapper dynamoDBMapper = new DynamoDBMapper(dynamoDBClient);
// Set the save behavior so that it won't impact the existing attributes.
// Refer to the full definition above.
dynamoDBMapper.save(movies, SaveBehavior.UPDATE_SKIP_NULL_ATTRIBUTES.config());

Entity framework 6 no results

I've just set up Entity Framework 6 (for the first time) using a model with the same fields as the db table, but I'm getting 0 results on debug (no errors):
public class footballContext : DbContext
{
    public DbSet<football> football { get; set; }
}
and:
var context = new footballContext();
var matches = context.football.Take(20).ToList();
If I view the query it's using on "context", I can run it on my database and results are returned fine. I do have Entity Framework Power Tools, but it only seems to validate the model. Is there a way I can test whether it can get data, or is there something obvious I've missed?
Just found this:
"If you don't specify a connection string or the name of one explicitly, Entity Framework assumes that the connection string name is the same as the class name. The default connection string name in this example would then be SchoolContext, the same as what you're specifying explicitly."
Think I need to start reading up on the naming conventions for this...
For your code to work as it stands, you will need to have a connection string in your web.config called footballContext
If you don't want the connection string to be called that you can create a constructor for your context which calls the base constructor with a specified name.
If you prefer to pass in the connection string explicitly during creation of the context you can again create a constructor for footballContext which accepts a connection string and calls the appropriate base constructor.
See this SO answer for an example.

How do you use optimistic concurrency with WebAPI OData controller

I've got a WebAPI OData controller which is using the Delta to do partial updates of my entity.
In my entity framework model I've got a Version field. This is a rowversion in the SQL Server database and is mapped to a byte array in Entity Framework with its concurrency mode set to Fixed (it's using database first).
I'm using Fiddler to send back a partial update using a stale value for the Version field. I load the current record from my context and then patch my changed fields over the top, which changes the values in the Version column without throwing an error; when I save changes on my context, everything is saved without error. Obviously this is expected, since the entity being saved has not been detached from the context, so how can I implement optimistic concurrency with a Delta?
I'm using the very latest versions of everything (or was just before Christmas): Entity Framework 6.0.1 and OData 5.6.0.
public IHttpActionResult Put([FromODataUri] int key, [FromBody] Delta<Job> delta)
{
    using (var tran = new TransactionScope())
    {
        Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);
        delta.Patch(j);
        this._context.SaveChanges();
        tran.Complete();
        return Ok(j);
    }
}
Thanks
I've just come across this too using Entity Framework 6 and Web API 2 OData controllers.
For the concurrency check on the subsequent update, the EF DbContext seems to use the original value of the timestamp obtained when the entity was loaded at the start of the PUT/PATCH method.
Updating the current value of the timestamp to a value different to that in the database before saving changes does not result in a concurrency error.
I've found you can "fix" this behaviour by forcing the original value of the timestamp to be that of the current in the context.
For example, you can do this by overriding SaveChanges on the context, e.g.:
public partial class DataContext
{
    public override int SaveChanges()
    {
        foreach (DbEntityEntry<Job> entry in ChangeTracker.Entries<Job>()
                     .Where(u => u.State == EntityState.Modified))
        {
            entry.Property("Timestamp").OriginalValue = entry.Property("Timestamp").CurrentValue;
        }
        return base.SaveChanges();
    }
}
(Assuming the concurrency column is named "Timestamp" and the concurrency mode for this column is set to "Fixed" in the EDMX)
A further improvement would be to write a custom interface, apply it to all your models requiring this fix, and replace "Job" with that interface in the code above.
Feedback from Rowan in the Entity Framework Team (4th August 2015):
This is by design. In some cases it is perfectly valid to update a concurrency token, in which case we need the current value to hold the value it should be set to and the original value to contain the value we should check against. For example, you could configure Person.LastName as a concurrency token. This is one of the downsides of the "query and update" pattern being used in this action.
The logic you added to set the correct original value is the right approach to use in this scenario.
When you're posting the data to the server, you need to send the RowVersion field as well. If you're testing with Fiddler, get the latest RowVersion value from your database and add it to your request body.
Should be something like;
RowVersion: "AAAAAAAAB9E="
If it's a web page, while you're loading the data from the server, again get RowVersion field from server, keep it in a hidden field and send it back to server along with the other changes.
Basically, when you call the PATCH method, the RowVersion field needs to be in your patch object.
Then update your code like this;
Job j = this._context.Jobs.SingleOrDefault(x => x.JobId == key);

// Concurrency check
if (!j.RowVersion.SequenceEqual(delta.GetEntity().RowVersion))
{
    return Conflict();
}

this._context.Entry(j).State = EntityState.Modified; // Probably you need this line as well
this._context.SaveChanges();
Simple, the way you always do it with Entity Framework: you add a Timestamp field and set that field's Concurrency Mode to Fixed. That makes sure EF knows this timestamp field is not part of any queries but is used to determine versioning.
See also http://blogs.msdn.com/b/alexj/archive/2009/05/20/tip-19-how-to-use-optimistic-concurrency-in-the-entity-framework.aspx

jdbcTemplate and Oracle 10

Trying to do an insert, I have:
jdbcTemplate.update("insert into....", new Object[]{foo.getId(), foo.getName()});
foo.getId() returns a long, and getName() a String.
I have "NUMBER" as the id type in Oracle, and varchar2 for the name field.
I'm getting an "SQL type unknown" problem.
The update method has a version where I do not have to supply the SQL types, but do I have to, and if so, how?
I'm assuming you mean the Spring Framework JdbcTemplate class. The JdbcTemplate methods will attempt to guess the java.sql.Type for value references, but apparently isn't guessing correctly in this case.
There are a couple of ways to include the type:
The JdbcTemplate.update(String, Object[]) javadoc (http://static.springframework.org/spring/docs/2.5.x/api/org/springframework/jdbc/core/JdbcTemplate.html#update(java.lang.String, java.lang.Object[])) indicates that you can pass SqlParameterValue instances, consisting of the java.sql.Type and a value.
Alternatively, you can use JdbcTemplate.update(String, Object[], int[]) passing an array of java.sql.Type
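A minimal sketch of that second overload follows. The SQL and column names here are assumptions, since the question elides the actual statement; with a real JdbcTemplate the commented-out call is what you would run.

```java
import java.sql.Types;

// Sketch of JdbcTemplate.update(String, Object[], int[]): the int[] entries
// are java.sql.Types constants matching the placeholder order, so the driver
// no longer has to guess the SQL type of each argument.
class ExplicitSqlTypes {

    // Assumed SQL; the question only shows "insert into....".
    static final String SQL = "insert into foo (id, name) values (?, ?)";

    // One java.sql.Types constant per placeholder:
    // Oracle NUMBER -> NUMERIC, varchar2 -> VARCHAR.
    static int[] argTypes() {
        return new int[] { Types.NUMERIC, Types.VARCHAR };
    }

    public static void main(String[] args) {
        // With a real JdbcTemplate and a Foo instance this would be:
        // jdbcTemplate.update(SQL, new Object[]{ foo.getId(), foo.getName() }, argTypes());
        System.out.println(argTypes().length); // prints 2
    }
}
```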
