I have an existing project in Symfony 1.4 and Propel 1.4.
For a new requirement, I tried the following code:
static public function extendMomentoLock($momentoId, $memberId)
{
    $wherec = new Criteria();
    $updatec = new Criteria();
    $updatec->add(MomentoPeer::LOCKED, 1);
    $updatec->addAnd(MomentoPeer::LOCKEDBY, $memberId);
    $wherec->add(MomentoPeer::ID, $momentoId, Criteria::EQUAL);
    $con = Propel::getConnection(MemberPeer::DATABASE_NAME, Propel::CONNECTION_WRITE);
    BasePeer::doUpdate($wherec, $updatec, $con);
    return "extended";
}
As expected, it generated the correct query, as taken from the logs:
UPDATE momento SET `LOCKED`=1, `LOCKEDBY`=6 WHERE momento.ID='198'
No issues up to this point.
The problem starts because I need to run this query every 3 minutes. The rule is that a row is unlocked automatically if it was last updated more than 5 minutes ago. To keep a record locked, its updated_at column must be less than 5 minutes old, so the browser sends a request every 3 minutes to keep the record locked.
I was expecting the query to update the updated_at column. However, since no column value actually changes, updated_at is not being updated.
Is there any way to force Propel to execute the update even when no values have changed?
I guess you're using the Timestampable behavior, which updates the date columns only when actual changes are made to the row.
I think you can force the update by setting the column explicitly in the update Criteria:
$updatec->addAnd(MomentoPeer::UPDATED_AT, 'NOW()');
Other than re-writing an entity to the datastore, is there a method to get existing entity properties indexed?
A scenario:
I created new queries using a property of previously created entities.
The queries failed with "Cannot query for un-indexed property", which is correct behaviour, as the initial class definition had (indexed=False) for the relevant property.
I then set (indexed=True) for the relevant property. The queries using this property then ran without generating errors; the output, however, was INCORRECT!
I assume this is because existing entities do not automatically get added to the index (although Google's documentation alludes to indexes being generated automatically).
I could get the index updated and the queries working correctly only by updating each entity. That is perhaps OK if one is aware of the problem and is working with small datasets, but I am concerned about live data. I need to do more testing, but the behaviour on live data is likely to be the same.
Changing index.yaml and restarting the GAE instance appeared to have no effect. It also appears that gcloud datastore create-indexes index.yaml does not affect this behaviour.
What appears to be a solution
It seems the only solution is to write back the existing entities so that an entry is created in the new index, probably to the effect of:
all_entities_in_kind = theModel.query().fetch()  # fetch the entities themselves, not just the Query object
list_of_keys = ndb.put_multi(all_entities_in_kind)
If there's a better way please post.
Simulation of the effect
To illustrate, I ran the code snippet below from the Dev SDK Interactive console. The first run of the code creates the test data; the queries will then fail (correct behaviour).
Changing bool_p to indexed=True and re-running the modified code means the queries will run, but the results are incorrect.
Deleting all the data and re-running the code (with bool_p indexed) makes the queries return the correct results.
Observation
It appears the index on the property is not automatically generated, at least not from the Interactive console. Restarting the instance has no effect, and changing the index.yaml file also seems to make no difference. The index on the property apparently needs to be built, but so far I have not discovered how, other than re-writing the entities. The only other option would be to export all the data and re-import it with the modified class. This is not much of a problem in development, but it is for the live datastore.
Code example
from google.appengine.ext import ndb
from datetime import datetime, timedelta

class TModel(ndb.Model):
    test_date = ndb.DateProperty(indexed=True)
    text = ndb.StringProperty(indexed=True)
    bool_p = ndb.BooleanProperty(indexed=False)  # First run
    #bool_p = ndb.BooleanProperty(indexed=True)  # Second run

query0 = TModel.query()
if query0.count() == 0:
    print 'Create test data'
    TModel(id='a', test_date=datetime.strptime('2017-01-01', '%Y-%m-%d').date(), text='One', bool_p=True).put()
    TModel(id='b', test_date=datetime.strptime('2017-01-02', '%Y-%m-%d').date(), text='Two', bool_p=False).put()
    TModel(id='c', test_date=datetime.strptime('2017-01-03', '%Y-%m-%d').date(), text='Three', bool_p=True).put()
    TModel(id='d', test_date=datetime.strptime('2017-01-01', '%Y-%m-%d').date(), text='One').put()  # To check behaviour with bool_p undefined

query1 = TModel.query(TModel.bool_p == True)
print query1.count()
query2 = TModel.query(TModel.bool_p == False)
print query2.count()
query3 = TModel.query(TModel.test_date <= datetime.strptime('2017-01-02', '%Y-%m-%d').date())
print query3.count()
query4 = TModel.query(TModel.test_date <= datetime.strptime('2017-01-02', '%Y-%m-%d').date(), TModel.bool_p == True)
print query4.count()

# Equivalent queries using GQL
queryG1 = ndb.gql('SELECT * FROM TModel WHERE bool_p = True')
print queryG1.count()
queryG2 = ndb.gql('SELECT * FROM TModel WHERE bool_p = True')
print queryG2.count()
queryG3 = ndb.gql("SELECT * FROM TModel WHERE test_date <= DATE('2017-01-02')")
print queryG3.count()
queryG4 = ndb.gql("SELECT * FROM TModel WHERE test_date <= DATE('2017-01-02') AND bool_p = True")
print queryG4.count()
Correct results / incorrect results after changing (indexed=False) to (indexed=True):

Query      Correct   After change (incorrect)
query1     2         0
query2     1         0
query3     3         3
query4     1         0
queryG1    2         0
queryG2    2         0
queryG3    3         3
queryG4    1         0
It is necessary to rewrite the existing entities when changing a property from unindexed to indexed in order to re-index the properties. From https://cloud.google.com/datastore/docs/concepts/indexes#unindexed_properties:
Note, however, that changing a property from excluded to indexed does not affect any existing entities that may have been created before the change. Queries filtering on the property will not return such existing entities, because the entities weren't written to the query's index when they were created. To make the entities accessible by future queries, you must rewrite them to Cloud Datastore so that they will be entered in the appropriate indexes.
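A minimal sketch of such a rewrite, assuming the TModel kind from the question; the function name, batch size, and cursor handling are illustrative and not part of the quoted documentation:

from google.appengine.ext import ndb

def reindex_kind(model_class, batch_size=200):
    """Re-put every entity of a kind so it gets written to the newly indexed property's indexes."""
    cursor = None
    more = True
    while more:
        # Fetch a page of entities at a time to keep memory use bounded.
        entities, cursor, more = model_class.query().fetch_page(batch_size, start_cursor=cursor)
        if entities:
            # Re-writing the unchanged entities is enough to add them to the new index.
            ndb.put_multi(entities)

# Usage, e.g. from the Interactive console or a one-off handler:
# reindex_kind(TModel)

For a large kind this would normally be run from a task queue or a mapper rather than a single request, since a single request may time out.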
We are using Solr 5.0.0. The delta-import configuration is very simple, just like the one on the Apache wiki.
We have set up a cron job to run delta-imports every 30 minutes; it is a simple setup as well:
0,30 * * * * /usr/bin/wget http://<solr_host>:8983/solr/<core_name>/dataimport?command=delta-import
Now, what happens if a running delta-import sometimes takes longer than the interval until the next scheduled cron job?
Does Solr launch the next delta-import in a parallel thread, or does it ignore the request until the previous import is done?
Extending the interval in the cron scheduler isn't an option, as the same problem could occur again as the number of users and documents grows over time...
I had a similar problem on my end. Here is how I worked around it.
Note: I have implemented Solr with multiple cores.
I have a table that keeps information about Solr for each core: the core name, the last re-index date, a re-indexing-required flag, and the current status.
I have written a scheduler that checks in that table which cores need re-indexing (delta-import) and starts the re-index.
Re-indexing requests are sent every 20 minutes (in your case, every 30 minutes).
When I start the re-indexing I also update the table and mark the status for that core as "inprogress".
After ten minutes I fire a request to check whether the re-indexing has completed.
To check the re-indexing status I use a request like:
final URL url = new URL(SOLR_INDEX_SERVER_PROTOCOL, SOLR_INDEX_SERVER_IP, Integer.valueOf(SOLR_INDEX_SERVER_PORT),
"/solr/"+ core_name +"/select?qt=/dataimport&command=status");
If the status is "Committed" or "idle", the re-indexing is considered complete and the status for that core is set back to "idle" in the table.
The re-indexing scheduler won't pick up cores that are in the "inprogress" status.
It also only considers cores that have pending updates (identified by the "re-indexing-required" flag).
Re-indexing is invoked only if re-indexing-required is true and the current status is idle.
If there are updates (re-indexing-required is true) but the current status is inprogress, the scheduler won't pick that core up for re-indexing.
I hope this helps.
Note: I used the DataImportHandler (DIH) for indexing and re-indexing.
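If you only need the cron side of this, a minimal sketch in Python that triggers the delta-import only when the DataImportHandler reports itself idle might look like the following; the host and core name are the same placeholders as in the question's cron entry, and the exact layout of the JSON status response should be verified against your Solr version:

import json
import urllib2

# Placeholder URL, as in the question's cron entry.
DATAIMPORT_URL = "http://<solr_host>:8983/solr/<core_name>/dataimport"

def delta_import_if_idle():
    # Ask the DataImportHandler for its current status.
    response = urllib2.urlopen(DATAIMPORT_URL + "?command=status&wt=json")
    status = json.load(response)
    if status.get("status") == "idle":
        # No import is running, so it is safe to start the next delta-import.
        urllib2.urlopen(DATAIMPORT_URL + "?command=delta-import")
    # Otherwise an import is still in progress; skip this cycle and let cron retry later.

if __name__ == "__main__":
    delta_import_if_idle()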
Solr will simply ignore the next import request until the first one has finished, and it will not queue the second request. I have observed this behaviour and have read it somewhere, but I couldn't find the reference now.
In fact, I'm dealing with the same problem. I tried to optimize the queries:
deltaImportQuery="select * from Assests where ID='${dih.delta.ID}'"
deltaQuery="select [ID] from Assests where date_created > '${dih.last_index_time}' "
I only retrieve the ID field at first, and then retrieve the intended documents.
You may also specify your fields explicitly instead of the '*' sign; since I use a view, that doesn't apply in my case.
I will update this if I find another solution.
Edit after solution
Beyond the suggestion above, I changed one more thing that sped up my indexing process about 10 times. I had two big entities nested; I used one entity inside another, like:
<entity name="TableA" query="select * from TableA">
<entity name="TableB" query="select * from TableB where TableB.TableA_ID='${TableA.ID}'" >
This yields multi-valued TableB fields, but for every TableA row a separate request is made to the database for TableB.
I changed my view to use a WITH clause combined with a comma-separated field value, parse that value via the Solr field mapping, and index it into a multi-valued field.
My whole indexing process sped up from hours to minutes. Below are my view and my Solr mapping config.
WITH tableb_with AS (SELECT * FROM TableB)
SELECT *,
    STUFF((SELECT ',' + REPLACE(fieldb1, ',', ';') FROM tableb_with WHERE tableb_with.TableA_ID = TableA.ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS field1WithComma,
    -- further comma-separated columns follow the same STUFF(... FOR XML PATH('')) pattern
All the fancy joins and unions go into the WITH clause for TableB, and there are also a lot of joins on TableA. This view actually holds around 200 fields in total.
The Solr mapping goes like this:
<field column="field1WithComma" name="field1" splitBy=","/>
Hope it helps someone.
I have a SQL query which runs against a view and uses a lot of wildcard operators on top, so it takes a long time to complete.
The data is consumed by an ASP.NET application. Is there any way I could pre-run the query once a day so that the data is already there when the ASP.NET application needs it, and then only pass a parameter to fetch the specific records?
A much simplified example would be:
select * from table
run every day with the result stored somewhere, so that when ASP.NET passes a parameter only the specific records are fetched, like:
select * from table where field3 = 'something'
Either use SQL Server Agent (MSSQL) or an equivalent to run a scheduled process that stores the result into a table, like this:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[MyTemporaryTable]') AND type in (N'U'))
BEGIN
    TRUNCATE TABLE [dbo].[MyTemporaryTable];
    INSERT INTO [dbo].[MyTemporaryTable]
    SELECT * FROM [vwMyTemporaryTableDataSource];
END
ELSE
BEGIN
    SELECT *
    INTO [MyTemporaryTable]
    FROM [vwMyTemporaryTableDataSource];
END
Or you could store the result in ASP.NET as an Application/Session variable, or even as a property of a class that is stored in Application/Session. The property approach will load the data the first time it is requested and serve it from memory thereafter:
private MyObjectType _objMyStoredData;
public MyObjectType MyStoredData
{
    get
    {
        if (_objMyStoredData == null)
        {
            _objMyStoredData = GetMyData();
        }
        return _objMyStoredData;
    }
}
However, if the source data for this report is only 2,000 rows, I wonder if all this is really necessary. Perhaps increasing the efficiency of the query could solve the problem without delving into pre-caching and the downsides that come with it, such as re-using data that may be out of date.
You can use Redis. You can run the view once when the user logs in and then fill Redis with the view data. Set that object in the user's session context so that it is accessible on all pages, and when the user logs out, clean up Redis. This way the user won't hit the database every time for the result but will instead get the data from the Redis cache, which is very fast. You can contact me if more help is needed.
I've got a Flex/AIR app I've been working on; it uses a local SQLite database that is created on the initial application start.
I've added some features to the application, and in the process I had to add a new field to one of the database tables. My question is: how do I get the application to add one new field to a table that already exists?
This is the line that creates the table:
stmt.text = "CREATE TABLE IF NOT EXISTS tbl_status ("+"status_id INTEGER PRIMARY KEY AUTOINCREMENT,"+" status_status TEXT)";
And now I'd like to add a status_default field.
Thanks!
Thanks, MPelletier.
I've added the code you provided and it does add the field, but now the next time I restart my app I get an error: 'status_default' already exists.
So how can I go about adding some sort of IF NOT EXISTS check to the line you provided?
ALTER TABLE tbl_status ADD COLUMN status_default TEXT;
http://www.sqlite.org/lang_altertable.html
That being said, adding columns in SQLite is limited: you can only add a column after the last existing column in your table.
As for checking whether the column already exists, PRAGMA table_info(tbl_status); will return a table listing the columns of your table.
ADD ON:
I've been using a strategy in database design that lets me determine which modifications are required. For this, you will need a new table (call it DBInfo) with one field (an integer, call it SchemaVersion). Alternatively, there is an internal value in SQLite called user_version, which can be set with a PRAGMA command. Your code can, on program startup, check the schema version number and apply changes accordingly, one version at a time.
Suppose a function named UpdateDBSchema(). This function will check your database schema version, handle DBInfo not being there, and in that case determine that the database is at version 0. The rest of the function could be just a large switch over the different versions, nested in a loop (or whatever other structure is available on your platform of choice).
So for this first version, have an UpgradeDBVersion0To1() function which creates the new DBInfo table, adds your status_default field, and sets SchemaVersion to 1. In your code, add a constant that indicates the latest schema version, say LATEST_DB_VERSION, and set it to 1. That way your code and your database each have a schema version, and you know you need to sync them whenever they are not equal.
When you need to make another change to your schema, set the LATEST_DB_VERSION constant to 2 and write a new UpgradeDBVersion1To2() function that performs the required changes.
That way, your program can be ported easily, can connect to and upgrade an old database, and so on.
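A minimal sketch of this versioning pattern, written with Python's sqlite3 module purely for illustration (the original context is Adobe AIR); it uses SQLite's user_version pragma rather than a DBInfo table, and the function and constant names simply mirror the ones suggested above:

import sqlite3

LATEST_DB_VERSION = 1

def upgrade_db_version_0_to_1(conn):
    # Version 0 -> 1: add the new column introduced by this release.
    conn.execute("ALTER TABLE tbl_status ADD COLUMN status_default TEXT")

def update_db_schema(conn):
    # Read the schema version stored in the database (0 on a fresh database).
    current = conn.execute("PRAGMA user_version").fetchone()[0]
    while current < LATEST_DB_VERSION:
        if current == 0:
            upgrade_db_version_0_to_1(conn)
        # elif current == 1: upgrade_db_version_1_to_2(conn)  # future migrations go here
        current += 1
        conn.execute("PRAGMA user_version = %d" % current)  # PRAGMA does not accept bound parameters
        conn.commit()

conn = sqlite3.connect("app.db")
conn.execute("CREATE TABLE IF NOT EXISTS tbl_status ("
             "status_id INTEGER PRIMARY KEY AUTOINCREMENT, status_status TEXT)")
update_db_schema(conn)

In the AIR app the equivalent check would run right after the database connection is opened, before any other statements touch the schema.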
I know this is an old question... however:
I've hit this precise problem in the SQLite implementation in Adobe AIR. I thought it would be possible to use the PRAGMA command to resolve it, but since Adobe AIR's implementation does not support the PRAGMA command, we need an alternative.
What I did, which I thought would be worth sharing here, is this:
var sql:SQLStatement = new SQLStatement();
sql.sqlConnection = pp_db.dbConn;
// Probe for the column: this SELECT only fails if NewField does not exist yet.
sql.text = "SELECT NewField FROM TheTable";
sql.addEventListener(SQLEvent.RESULT, function(evt:SQLEvent):void {
    // The column already exists; nothing to do.
});
sql.addEventListener(SQLErrorEvent.ERROR, function(err:SQLErrorEvent):void {
    // The column is missing, so add it now.
    var alter:SQLStatement = new SQLStatement();
    alter.sqlConnection = pp_db.dbConn;
    alter.text = "ALTER TABLE TheTable ADD COLUMN NewField NUMERIC;";
    alter.addEventListener(SQLEvent.RESULT, function(evt:SQLEvent):void {
    });
    alter.execute();
});
sql.execute();
Hope it helps someone.
I solved a similar problem using the answer from this question:
ALTER TABLE ADD COLUMN IF NOT EXISTS in SQLite
Use the built-in user_version parameter to keep track of your updates. You set it using:
PRAGMA user_version = 1
and you retrieve it using:
PRAGMA user_version
So basically: retrieve user_version (it's 0 by default) and check whether it's 0. If it is, perform your updates and set it to 1. If you have more updates in the future, check whether it's 1, perform the updates and set it to 2. And so on...
In some cases I just execute the command and catch the "duplicate column" exception. It's a quick solution, not a perfect one.
I have an application (ASP.NET 3.5) that allows users to re-run a particular process if required. The process inserts records into an MS SQL table. I wrap the insert in a Try/Catch and ignore the catch if a record already exists (in which case the error in the title would be valid). This worked perfectly using ADO, but after I converted to LINQ I noticed an interesting thing: if, on a re-run of the process, there were already records in the table, any new records would be rejected with the same error even though no matching record existed.
The code is as follows:
Dim ins = New tblOutstandingCompletion
With ins
    .ControlID = rec.ControlID
    .PersonID = rec.peopleID
    .RequiredDate = rec.NextDue
    .RiskNumber = 0
    .recordType = "PC"
    .TreatmentID = 0
End With
Try
    ldb.tblOutstandingCompletions.InsertOnSubmit(ins)
    ldb.SubmitChanges()
Catch ex As Exception
    ' An attempt to load a duplicate record will fail
End Try
The DataContext for the database was set during Page_Load.
I resolved the problem by redefining the DataContext before each insert:
ldb = New CaRMSDataContext(sessionHandler.connection.ToString)
Dim ins = New tblOutstandingCompletion
While I have solved the problem, I would like to know if anyone can explain it. Without redefining the DataContext, the application works perfectly as long as there are no duplicate records.
Regards
James
It sounds like the DataContext thinks the record was inserted the first time, so if you don't redefine the context, it rejects the second insert because it "knows" the record is already there. Redefining the context forces it to actually check the database to see if it's there, which it isn't. That's LINQ trying to save a round trip to the database. Creating a new context as you've done forces it to reset what it "knows" about the database.
I had seen a very similar issue in my code where the identity column wasn't an auto-incrementing int column but a GUID with a default value of newid(). Basically, LINQ wasn't allowing the database to create the GUID but was inserting Guid.Empty instead, and the second (or later) attempt would (correctly) throw this error.
I ended up ensuring that I generated a new GUID myself during the insert. More details can be seen here: http://www.doodle.co.uk/Blogs/2007/09/18/playing-with-linq-in-winforms.aspx
This allowed me to insert multiple records with the same DataContext.
Also, have you tried calling InsertOnSubmit multiple times (once for each new record) but only calling SubmitChanges once?
gfrizzle seems to be right here...
My code fails with the duplicate key error even though I've just run a stored proc to truncate the table in the database. As far as the data context knows, the previous insertion of a record with the same key makes this one a duplicate, and an exception is thrown.
The only way I've found around this is:
db = null;
db = new NNetDataContext();
right after the SubmitChanges() call that executed the previous InsertOnSubmit requests. It seems kind of dumb, but it's the only way that works for me, other than redesigning the code.