"could not determine data type of parameter $1" with Postgres - spring-jdbc

I am facing what seems to be a well-known issue when binding a timestamp parameter against PostgreSQL.
@Test
public void testGetCurrentDate() {
    NamedParameterJdbcTemplate template = new NamedParameterJdbcTemplate(datasources.get(dbType));
    PlatformTransactionManager manager = new DataSourceTransactionManager(datasources.get(dbType));
    final Timestamp ts = new Timestamp(System.currentTimeMillis());
    new TransactionTemplate(manager).execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            template.update("create table test_timestamp (col timestamp)", Collections.emptyMap());
            template.update("insert into test_timestamp (col) values (:TS) ", Collections.singletonMap("TS", ts));
        }
    });
    Assert.assertEquals((Integer) 1, new TransactionTemplate(manager).execute(new TransactionCallback<Integer>() {
        @Override
        public Integer doInTransaction(TransactionStatus status) {
            MapSqlParameterSource paramsSource = new MapSqlParameterSource();
            paramsSource.addValue("TS", ts, Types.TIMESTAMP, "timestamp");
            return template.queryForObject("select 1 from test_timestamp where :TS is not null and col=:TS ", paramsSource,
                    Integer.class);
        }
    }));
}
This test fails with ERROR: could not determine data type of parameter $1. The bare :TS is not null comparison gives the server no context from which to infer the parameter's type.
See the discussion on the PostgreSQL mailing list about this:
http://www.postgresql-archive.org/quot-could-not-determine-data-type-of-parameter-quot-with-timestamp-td5995489.html
I already have a workaround (setting a PGTimestamp instance instead of a Timestamp). I was wondering whether this is something that could benefit Spring JDBC, to increase portability across multiple databases.
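For reference, the client-side workaround in the test above looks roughly like this (a minimal sketch; PGTimestamp ships with recent versions of the PostgreSQL JDBC driver):
// PGTimestamp is the driver's own Timestamp subclass; with it the driver
// can send a concrete type for parameter $1 instead of "unspecified".
final Timestamp ts = new org.postgresql.util.PGTimestamp(System.currentTimeMillis());
paramsSource.addValue("TS", ts, Types.TIMESTAMP);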
In Spring JDBC itself, the workaround would be something like this in org.springframework.jdbc.core.StatementCreatorUtils (around line 380):
else if (sqlType == Types.TIMESTAMP &&
        "PostgreSQL".equals(ps.getConnection().getMetaData().getDatabaseProductName())) {
    // Load PGTimestamp reflectively so Spring JDBC keeps no compile-time
    // dependency on the PostgreSQL driver.
    Class<?> pgTimestampClass = Class.forName("org.postgresql.util.PGTimestamp");
    Timestamp ts = (Timestamp) pgTimestampClass.getConstructor(long.class).newInstance(javaValue.getTime());
    ps.setTimestamp(paramIndex, ts);
}
If you think this makes sense, I could propose a pull request.
Arnaud

Related

ServiceStack OrmLite - Elegant way to handle SQL Server Connection Drops

We are currently using ORMLite and it is working really well.
One of the places that we are using it is for running large batch processes.
These processes run a single large batch, all within one transaction; if there are any errors, the transaction is rolled back and the batch has to be run again.
Is there a way that something like a connection drop (which could be very brief) could be handled more gracefully, so that the process could just re-establish the connection and continue from where it left off?
The only thing that resembles what you're after is a custom OrmLite Exec Filter, which you can use to inject your own execution strategy.
OrmLite's home page shows an example of using an Exec Filter to execute each query 3 times:
public class ReplayOrmLiteExecFilter : OrmLiteExecFilter
{
    public int ReplayTimes { get; set; }

    public override T Exec<T>(IDbConnection dbConn, Func<IDbCommand, T> filter)
    {
        var holdProvider = OrmLiteConfig.DialectProvider;
        var dbCmd = CreateCommand(dbConn);
        try
        {
            var ret = default(T);
            for (var i = 0; i < ReplayTimes; i++)
            {
                ret = filter(dbCmd);
            }
            return ret;
        }
        finally
        {
            DisposeCommand(dbCmd);
            OrmLiteConfig.DialectProvider = holdProvider;
        }
    }
}
OrmLiteConfig.ExecFilter = new ReplayOrmLiteExecFilter { ReplayTimes = 3 };

using (var db = OpenDbConnection())
{
    db.DropAndCreateTable<PocoTable>();
    db.Insert(new PocoTable { Name = "Multiplicity" });
    var rowsInserted = db.Count<PocoTable>(x => x.Name == "Multiplicity"); //3
}
But it uses the same IDbConnection, i.e. it doesn't create a new DB Connection.

OrientDB execute script asynchronously and fetch records in a lazy fashion

Currently we are using the Document API in OrientDB version 2.2. Suppose we have a class Company and a class Employee, and we are interested in all Companies with at least one employee whose name is in an arbitrary list. Employees are referenced via a LINKLIST in our Company schema.
Our query looks something like this:
select from Company where employees in (select from Employee where name in ["John", "Paul"])
Currently we have defined the following two indexes:
Company.employees (an index on the employee links, i.e. their #rid) -> dictionary hash index
Employee.name -> notunique index
When executing the above query with explain, we see that only the second index, Employee.name, is used, since we did not define the two as a compound index. As far as I can tell, compound indexes spanning different classes, as in our case, are not supported in OrientDB 2.x.
Queries like this:
select from Company let $e = select from employees where name in ["John", "Paul"] where employees in $e
do not solve our problem either.
Searching across different blogs revealed two suggestions so far:
trying to define a compound index via inheritance by introducing a parent class on employee and company and defining the above two indexes on that
https://github.com/orientechnologies/orientdb/issues/5069
bundle the two queries in a batch script like this:
https://github.com/orientechnologies/orientdb/issues/6684
String cmd = "begin\n";
cmd += "let a = select from Employees where name " + query + "\n";
cmd += "let b = select from Company where employees in $a\n";
cmd += "COMMIT\n";
cmd += "return $b";
Suggestion 1 did not work for us.
Suggestion 2 worked: both indexes are used in the two separate queries. But then we ran into the next OrientDB limitation: batch scripts seem to be executed only synchronously, meaning we get the results as one list, all at once, rather than one by one in a lazy fashion, which in our case is a no-go due to the memory overhead.
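For a single non-batch SELECT, lazy record-by-record streaming does work via OSQLAsynchQuery. A minimal sketch against the 2.2 Document API, with process() standing in for our own consumer:
OSQLAsynchQuery<ODocument> query = new OSQLAsynchQuery<ODocument>(
        "select from Company where employees in (select from Employee where name in ['John', 'Paul'])",
        new OCommandResultListener() {
            @Override
            public boolean result(Object iRecord) {
                process((ODocument) iRecord); // consume one record at a time
                return true;                  // returning false aborts the fetch
            }
            @Override
            public void end() { }
            @Override
            public Object getResult() { return null; }
        });
documentTx.command(query).execute();
The batch script, however, offers no such streaming hook, which is what pushed us to the workaround below.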
One naive workaround we tried is as follows:
public class OCommandAsyncScript extends OCommandScript implements OCommandRequestAsynch {

    public OCommandAsyncScript(String sql, String cmd) {
        super(sql, cmd);
    }

    @Override
    public boolean isAsynchronous() {
        return true;
    }

    private void containsAtLeastOne(final @Nonnull ODatabaseDocumentTx documentTx,
                                    final @Nonnull Consumer<Company> matchConsumer,
                                    final @Nonnull String queryText) throws TimeoutException {
        String cmd = "begin\n";
        cmd += "let a = select from Employee where name " + queryText + "\n";
        cmd += "let b = select from Company where employees in $a\n";
        cmd += "COMMIT\n";
        cmd += "return $b";
        final OCommandHandler resultListener = new OCommandHandler(documentTx, (document -> {
            final Company company = document2model(document);
            matchConsumer.accept(company);
        }));
        OCommandAsyncScript request = new OCommandAsyncScript("sql", cmd);
        request.setResultListener(resultListener);
        documentTx.command(request).execute();
        ...
    }
}
public class OCommandHandler implements OCommandResultListener {
    private final ODatabaseDocumentTx database;
    private final Consumer<ODocument> matchConsumer;

    public OCommandHandler(
            final @Nonnull ODatabaseDocumentTx database,
            final @Nonnull Consumer<ODocument> matchConsumer
    ) {
        this.database = database;
        this.matchConsumer = matchConsumer;
    }

    @Override
    public boolean result(Object iRecord) {
        if (iRecord != null) {
            final ODocument document = (ODocument) iRecord;
            /*
             * The result handler might run asynchronously; if the document is loaded
             * in lazy mode, the database will be queried to fetch various fields,
             * so it needs to be activated on the current thread.
             */
            database.activateOnCurrentThread();
            matchConsumer.accept(document);
        }
        return true;
    }
    ...
}
Unfortunately, the approach of defining a custom OCommandAsyncScript did not work. When debugging OrientDB's OStorageRemote class, it appears that no partial results can be read. Here is the relevant extract from the source code:
public Object command(final OCommandRequestText iCommand) {
    ....
    try {
        OStorageRemote.this.beginResponse(network, session);
        List<ORecord> temporaryResults = new ArrayList();
        boolean addNextRecord = true;
        byte status;
        if (asynch) {
            while ((status = network.readByte()) > 0) {
                ORecord record = (ORecord) OChannelBinaryProtocol.readIdentifiable(network);
                if (record != null) {
                    switch (status) {
                        case 1:
                            if (addNextRecord) {
                                addNextRecord = iCommand.getResultListener().result(record);
                                database.getLocalCache().updateRecord(record);
                            }
                            break;
                        case 2:
                            if (record.getIdentity().getClusterId() == -2) {
                                temporaryResults.add(record);
                            }
                            database.getLocalCache().updateRecord(record);
                    }
                }
            }
        }
    }
}
network.readByte() never returns a positive status byte, hence no records can be fetched at all.
Is there any other workaround that would let us execute an SQL script in asynchronous mode and retrieve the results lazily, preventing the generation of large lists on our application side?

Application Cache and Slow Process

I want to create an application-wide feed on my ASP.NET 3.5 web site using the application cache. The data I am using to populate the cache is slow to obtain, maybe up to 10 seconds (it comes from a remote server's data feed). My question/confusion is: what is the best way to structure the cache management?
private const string CacheKey = "MyCachedString";
private static string lockString = "";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    if (data == null)
    {
        // A - Should this method call go here?
        newData = SlowResourceMethod();
        lock (lockString)
        {
            data = (string)Cache[CacheKey];
            if (data != null)
            {
                return data;
            }
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
The actual method would be exposed by an HttpHandler (.ashx).
If I collect the data at point 'A', I keep the lock time short but might end up calling the external resource many times (from web pages all trying to reference the feed). If I put it at point 'B', the lock time will be long, which I assume is a bad thing.
What is the best approach, or is there a better pattern that I could use?
Any advice would be appreciated.
I've added comments to the code.
private const string CacheKey = "MyCachedString";
private static readonly object syncLock = new object();

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // first check whether you already have it in the cache
    if (data == null)
    {
        // A - Should this method call go here?
        // absolutely not here
        // newData = SlowResourceMethod();
        // we wait here in case someone else is already making it
        lock (syncLock)
        {
            // now let's see if someone else has made it...
            data = (string)Cache[CacheKey];
            // we have it, return it
            if (data != null)
            {
                return data;
            }
            // we don't have it, so now is the time to fetch it
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            // and store it in the cache
            Cache[CacheKey] = data = newData;
        }
    }
    return data;
}
Better, in my opinion, is to use a Mutex whose name depends on the CacheKey, so you lock only that one resource and not unrelated ones. With a Mutex, a basic example would be:
private const string CacheKey = "MyCachedString";

public string GetCachedString()
{
    string data = (string)Cache[CacheKey];
    string newData = "";
    // first check whether you already have it in the cache
    if (data == null)
    {
        // lock based on the resource key
        // (note that not all characters are valid in a mutex name;
        //  "false" avoids taking initial ownership, which combined with
        //  WaitOne would leave the mutex held after a single ReleaseMutex)
        var mut = new Mutex(false, CacheKey);
        try
        {
            // wait until it is safe to enter,
            // but give up after 30 seconds
            mut.WaitOne(30000);
            // now let's see if someone else has made it...
            data = (string)Cache[CacheKey];
            // we have it, return it
            if (data != null)
            {
                return data;
            }
            // we don't have it, so now is the time to fetch it
            // B - Or here, within the lock?
            newData = SlowResourceMethod();
            // and store it in the cache
            Cache[CacheKey] = data = newData;
        }
        finally
        {
            // release the Mutex
            mut.ReleaseMutex();
        }
    }
    return data;
}
You can also read
Image caching issue by using files in ASP.NET

Why is java.lang.Long not persistable?

I am trying to query for a list of ids of type Long in GAE/JDO, and I'm getting the following exception when I call detachCopyAll() on the result set.
org.datanucleus.jdo.exceptions.ClassNotPersistenceCapableException: The class "The class "java.lang.Long" is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data/annotations for the class are not found." is not persistable. This means that it either hasnt been enhanced, or that the enhanced version of the file is not in the CLASSPATH (or is hidden by an unenhanced version), or the Meta-Data for the class is not found.
at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:241)
at org.datanucleus.jdo.JDOPersistenceManager.jdoDetachCopy(JDOPersistenceManager.java:1110)
at org.datanucleus.jdo.JDOPersistenceManager.detachCopyAll(JDOPersistenceManager.java:1183)
...
I can query for a list of User objects and detach them just fine. I expected all primitive wrapper classes like Long to be persistable. What am I doing wrong? Below is the code I'm working with.
@PersistenceCapable(identityType=IdentityType.APPLICATION, detachable="true")
public class User
{
    @PrimaryKey
    @Persistent(valueStrategy=IdGeneratorStrategy.IDENTITY)
    private Long id;

    private String email;
}
@SuppressWarnings("unchecked")
public static List<Long> getUserKeys(String email)
{
    assert email != null;
    List<Long> keyList = null;
    PersistenceManager pm = null;
    Query query = null;
    try {
        pm = PMF.get().getPersistenceManager();
        query = pm.newQuery("select id from " + User.class.getName());
        query.declareParameters("String emailParam");
        query.setFilter("email == emailParam");
        List<Long> resultList = (List<Long>) query.execute(email);
        // next line causes the ClassNotPersistenceCapableException
        keyList = (List<Long>) pm.detachCopyAll(resultList);
    }
    finally {
        if (query != null) query.closeAll();
        if (pm != null) pm.close();
    }
    return keyList;
}
List<Long> resultList = (List<Long>) query.execute(email);
// next line causes the ClassNotPersistenceCapableException
keyList = (List<Long>) pm.detachCopyAll(resultList);
I don't understand what you are doing here. A List<Long> does not have to be detached. You'd want to detach instances of your User entity class, but a Long is a Long, and you can just do whatever you need to do with resultList.
The error message is confusing, but it is simply caused by Long not being an entity class.
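A minimal sketch of the fix, assuming the rest of getUserKeys() stays as above: since the query result is a lazy list backed by the PersistenceManager, copy it into a plain ArrayList before closing the PM rather than detaching it.
List<Long> resultList = (List<Long>) query.execute(email);
// materialize the lazy JDO result list; plain Longs need no detaching
keyList = new ArrayList<Long>(resultList);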

Picking out Just JSON Data Returned from ASP.NET MVC3 controller Update

I've got data returned from my JavaScript client that includes only the data that has changed. That is, I may have downloaded an array where each row contains 10 columns of JSON, but on the update, only the data that actually changed is returned to me. On my update, I only want to update the columns that changed (not all of them).
In other words, I have code like the below, but because I'm passing in an instance of the President class, I have no way of knowing what actually came in on the original JSON.
How can I update just what comes into my MVC3 update method and not all columns? That is, 8 of the columns may not come in and will be null in the data parameter passed in. I don't want to wipe out all my data because of that.
[HttpPost]
public JsonResult Update(President data)
{
    bool success = false;
    string message = "no record found";
    if (data != null && data.Id > 0)
    {
        using (var db = new USPresidentsDb())
        {
            var rec = db.Presidents.FirstOrDefault(a => a.Id == data.Id);
            rec.FirstName = data.FirstName;
            db.SaveChanges();
            success = true;
            message = "Update method called successfully";
        }
    }
    return Json(new
    {
        data,
        success,
        message
    });
}
rec.FirstName = data.FirstName ?? rec.FirstName;
I would use reflection in this case, because otherwise the code gets too messy:
if (data.FirstName != null)
    rec.FirstName = data.FirstName;
// ... and so on for all the fields
Using reflection, it is easier to do this. See this method:
public static void CopyOnlyModifiedData<T>(T source, ref T destination)
{
    foreach (var propertyInfo in source.GetType().GetProperties())
    {
        object value = propertyInfo.GetValue(source, null);
        // copy only the properties that came in non-null (value types are skipped)
        if (value != null && !value.GetType().IsValueType)
        {
            destination.GetType().GetProperty(propertyInfo.Name, value.GetType()).SetValue(destination, value, null);
        }
    }
}
USAGE
CopyOnlyModifiedData<President>(data, ref rec);
Please mind that this won't work for value-type properties.
