I have created a BPEL process and added a DB adapter to poll a table for newly added rows.
My polling interval is 60 seconds,
but my process creates a new instance every 60 seconds, when ideally it should only create a work item in the application when the table has actually changed.
Please guide me if I am doing anything wrong.
I presume that if you look at the instances being created, you will notice you are getting the same data back.
This happens when the DB adapter cannot tell which records it has already read.
The simplest fix is to have the DB adapter mark each record as read. One way to do this is to add an indicator column to your schema that is set to 'read' or 'unread'.
So, without further information, your issue is most likely that you are re-reading the same record on every iteration, and you need to use one of the adapter's options for marking a record as read, as sketched below.
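For illustration, here is a minimal sketch of that mark-as-read approach in Python, with SQLite standing in for the polled table (the names `orders` and `status` are hypothetical; in the DB adapter itself this corresponds to the logical-delete style after-read option):

```python
import sqlite3

# SQLite stands in for the real polled table; the table and column
# names (orders, status) are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, payload TEXT, "
    "status TEXT DEFAULT 'UNREAD')"
)
conn.execute("INSERT INTO orders (payload) VALUES ('new row')")

def poll_once(conn):
    # Only pick up rows that earlier polls have not consumed.
    rows = conn.execute(
        "SELECT id, payload FROM orders WHERE status = 'UNREAD'"
    ).fetchall()
    for row_id, payload in rows:
        print("creating work item for:", payload)
        # Mark the row as read so the next poll skips it.
        conn.execute("UPDATE orders SET status = 'READ' WHERE id = ?", (row_id,))
    conn.commit()

poll_once(conn)  # creates one work item
poll_once(conn)  # finds nothing, so no spurious instance
```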
We want to keep certain documents in our DB for a short duration. When a document is created, it doesn't matter how often it's modified; it should be deleted after, say, X time units.
We looked at time-to-live (TTL) in Cosmos DB, but it seems to count the TTL from the last edit, not from creation.
One approach we are considering is to reduce the TTL on every update, based on the current time versus the document's last-update time. It is hacky and prone to errors due to clock skew.
Is there a better/more accurate approach to achieving expiry from creation time? Our next idea is to set up a Service Bus event that triggers document deletion, but even that is best-effort rather than an accurate TTL.
Every time you update a record you can derive a new TTL from the current TTL and the _ts field (the system last-write timestamp that the TTL countdown is based on). So first get the item, derive the new TTL, and update the item together with the new (smaller) TTL, so the absolute expiry time stays anchored to creation.
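A minimal sketch of that derivation in Python (_ts and ttl are the standard Cosmos DB fields; the helper itself is hypothetical):

```python
import time

def shrink_ttl(item):
    # Cosmos DB counts TTL down from the last write, whose epoch-seconds
    # timestamp is in the system _ts field, so the item currently
    # expires at _ts + ttl.
    expiry = item["_ts"] + item["ttl"]
    # The update we are about to make resets _ts to "now", so shrink
    # the TTL to keep the absolute expiry anchored to creation time.
    item["ttl"] = max(int(expiry - time.time()), 1)  # TTL must stay positive
    return item

# e.g. an item created an hour ago with a 24h TTL keeps its original expiry:
item = {"id": "doc1", "_ts": int(time.time()) - 3600, "ttl": 86400}
print(shrink_ttl(item)["ttl"])  # roughly 82800 seconds left
```

Then replace/upsert the item together with the shrunken TTL in the same write.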
I need to know how to clear a DocumentDB collection before inserting new documents. I am using a Data Factory pipeline activity to fetch data from an on-prem SQL Server and insert it into a DocumentDB collection. The frequency is set to every 2 hrs, so when the next cycle runs I first want to clear the existing data in the DocumentDB collection. How do I do that?
The easiest way is to programmatically delete the collection and recreate it with the same name. Our test scripts do this automatically. There is the potential for this to fail due to a subtle race condition, but we've found that adding a half-second delay between the delete and the recreate avoids it.
Alternatively, you could fetch every document id and then delete the documents one at a time. This is most efficiently done from a stored procedure (sproc) so you don't have to send everything over the wire, but it would still consume RUs and take time.
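A sketch of the drop-and-recreate approach with the current azure-cosmos Python SDK (the account URL, key, and database/container names below are placeholders):

```python
import time
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint, key, and names; substitute your own.
client = CosmosClient("https://myaccount.documents.azure.com:443/", credential="<key>")
db = client.get_database_client("mydb")

db.delete_container("staging")  # drop the old documents wholesale
time.sleep(0.5)                 # short pause avoids the delete/recreate race
db.create_container(id="staging", partition_key=PartitionKey(path="/id"))
```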
I am collecting data every second and storing it in a ":memory:" database. Inserting data into this database happens inside a transaction.
Every time a request is sent to the server, the server reads data from the first in-memory database, does some calculation, stores the result in a second database, and sends it back to the client. For this I am creating another ":memory:" database to store the aggregated information from the first. I cannot use the same db because I need to run a large calculation to get the aggregated result, and this cannot be done inside the transaction (if one calculation takes 5 seconds, I would lose 4 seconds' worth of data). I also cannot create the table in the same database, because I would not be able to write the aggregated data while the original data is being collected and inserted (that happens inside a transaction, every second).
-- Sometimes I want to retrieve data from both databases. How can I link the two in-memory databases? Using the ATTACH DATABASE statement I can attach the second db to the first, but the problem is: when the next request comes, how do I check whether the second db exists or not?
-- Suppose I attach the second in-memory db to the first one. Will writing data to the first db lock the second database?
-- Is there any other way to store this aggregated data?
As far as I understand your idea, you don't need two databases at all. I suspect you are misinterpreting how transactions work in SQL.
If you begin a transaction, other processes are still allowed to read data, and reading data probably doesn't need a database lock at all.
A possible workflow could look like the following.
Insert some data into the database (use a transaction just for the insertion process).
Perform heavy calculations on the database, but do not use a transaction, otherwise it will prevent other processes from inserting any data. Even if this step includes really heavy computation, another process can still insert and read data, as SELECT statements will not lock your database.
Write results to the database (again, by using a transaction)
Just make sure that heavy calculations are not performed within a transaction.
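A single-connection sketch of that transaction scoping with Python's sqlite3 module (the table names are illustrative):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (ts REAL, value REAL)")      # illustrative schema
conn.execute("CREATE TABLE aggregates (ts REAL, avg_value REAL)")

# 1. Insert incoming data inside a short transaction.
with conn:  # the 'with' block wraps the statements in BEGIN ... COMMIT
    conn.execute("INSERT INTO samples VALUES (?, ?)", (time.time(), 42.0))

# 2. Heavy calculation outside any transaction: plain SELECTs take no
#    write lock, so inserts from elsewhere are not blocked meanwhile.
(avg_value,) = conn.execute("SELECT avg(value) FROM samples").fetchone()

# 3. Write the result back inside another short transaction.
with conn:
    conn.execute("INSERT INTO aggregates VALUES (?, ?)", (time.time(), avg_value))
```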
If you want a more detailed description of this solution, look at the documentation about the file locking behaviour of sqlite3: http://www.sqlite.org/lockingv3.html
I have a scenario like this:
My environment is .NET 2.0, VS 2008, Web Application.
I need to lock a record when two members try to access it at the same time.
We can do it in two ways,
By front end (putting the session ID and the record's unique number in a dictionary kept as a static or application variable); we release the lock when the response leaves that page, when the client disconnects, after the post button is clicked, or when the session expires.
By back end (record locking in the DB itself; this still needs study and a team member is looking into it).
Are there any other ways to do this, and do I need to consider other options at each step?
Am I missing any conditions?
You do not lock records for clients, because locking a record for anything more than a few milliseconds is just about the most damaging thing you can do in a database. Instead you should use optimistic concurrency: detect whether the record was changed since it was last read and, if so, re-attempt the transaction (e.g. re-display the screen to the user). How that is actually implemented depends on which data-access technology you use (ADO.NET, DataSets, LINQ, EF, etc.).
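Framework aside, the heart of optimistic concurrency is a conditional update against the version you originally read; a hypothetical sketch (SQLite and a version counter column for brevity):

```python
import sqlite3

def save(conn, record_id, new_value, version_read):
    # The update succeeds only if nobody changed the row since we read
    # it; 'version' is a hypothetical counter column bumped on each write.
    cur = conn.execute(
        "UPDATE records SET value = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_value, record_id, version_read),
    )
    if cur.rowcount == 0:
        # Lost the race: re-read the row, re-display it, let the user retry.
        raise RuntimeError("record was changed by another user")
    conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, value TEXT, version INTEGER)")
conn.execute("INSERT INTO records VALUES (1, 'initial', 0)")
save(conn, 1, "updated", 0)   # succeeds
# save(conn, 1, "again", 0)   # would raise: the version is now 1
```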
If the business domain requires lock-like behaviour, that is always implemented as reservation logic in the database: when a record is displayed, it is 'reserved' so that no other user can attempt the same transaction, and the reservation then completes, times out, or is cancelled. But a 'reservation' is never done using locks; it is always an explicit update of state from 'available' to 'reserved', or something similar.
This pattern is also described in P of EAA: Optimistic Offline Lock.
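A reservation in that sense is an ordinary state transition plus a timeout, not a database lock; a hypothetical sketch:

```python
import sqlite3, time

# The table and state names are hypothetical illustrations.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seats (id INTEGER PRIMARY KEY, state TEXT, reserved_until REAL)")
conn.execute("INSERT INTO seats VALUES (1, 'available', NULL)")

def reserve(conn, seat_id, hold_seconds=120):
    # Explicit state transition: only an 'available' (or expired) seat
    # can move to 'reserved'. No database lock is held in the meantime.
    cur = conn.execute(
        "UPDATE seats SET state = 'reserved', reserved_until = ? "
        "WHERE id = ? AND (state = 'available' OR reserved_until < ?)",
        (time.time() + hold_seconds, seat_id, time.time()),
    )
    conn.commit()
    return cur.rowcount == 1  # False means someone else holds the reservation

print(reserve(conn, 1))  # True: we hold the reservation
print(reserve(conn, 1))  # False: already reserved and not yet expired
```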
If you're talking about only reading data from a record in a SQL Server database, you don't need to do anything: SQL Server manages concurrent access to records for you. But if you want to manipulate data, you have to use transactions.
I agree with Ramus. But if you still need it: create a bit column named something like IsInUse and set it to true while someone is accessing the record. Since other users will need the same data at the same time, you have to protect your app from conflicts, so everywhere the data is retrieved you must check whether IsInUse is false.
I'm trying to figure out how to develop a time-based trigger on my database. E.g. there are some records in the database:
(image of example records: http://img109.imageshack.us/img109/2962/datax.png)
So, if the actual DateTime on the server is 2010-01-09 12:12:12 the record no.1 must be deleted.
OK, but what if there are, say, 1,000,000 records in the database? Does the server have to search the database every second to check which rows must be deleted? That's not efficient at all.
I'm totally new to Microsoft SQL Server, so I'd be grateful for any kind of help.
There isn't a time-based trigger in SQL Server, so you are going to have to implement this as a job or through some other scheduled mechanism.
Most likely you will want an index on the StartDate (end date?) column so that your deletion query doesn't have to perform a full table scan to find the data it needs to delete.
Usually you don't actually perform deletes every second. Instead, the app should be smart enough to query the table in a way that eliminates those records from its result set. You can then perform lazy deletes at some longer interval, such as once an hour or once a day, to do the cleanup.
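A sketch of that split (SQLite stands in for SQL Server, where the cleanup would typically run as an Agent job; the table and column names are hypothetical):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, end_date REAL)")
conn.execute("CREATE INDEX ix_events_end ON events (end_date)")  # avoids a full table scan

# The application always filters expired rows out of its results,
# so they are invisible long before they are physically deleted.
now = time.time()
live = conn.execute("SELECT id, payload FROM events WHERE end_date > ?", (now,)).fetchall()

# Cleanup runs on a schedule (hourly, daily, ...) and deletes in bulk,
# using the end_date index to find the expired rows cheaply.
conn.execute("DELETE FROM events WHERE end_date <= ?", (now,))
conn.commit()
```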