This looks like a limitation of the Microsoft Azure Mobile Client offline sync service on Android.
In my Xamarin.Forms application I have 40 Azure tables to sync with the remote backend. Whenever a particular request (_abcTable.PullAsync) involves a large number of records, such as 5K, PullAsync throws an exception: Error executing SQLite command: 'too many SQL variables'.
The PullAsync URL goes like this: https://abc-xyz.hds.host.com/AppHostMobile/tables/XXXXXXResponse?$filter=(updatedAt ge datetimeoffset'2017-06-20T13:26:17.8200000%2B00:00')&$orderby=updatedAt&$skip=0&$top=5000&ProjectId=2&__includeDeleted=true.
But in Postman I can see the same URL returning the 5K records, and it works fine on an iPhone device as well; it fails only on Android.
From the above PullAsync request, if I change the $top parameter value from 5000 to 500 it works fine on Android, but takes more time. Do I have any other alternatives that don't limit performance? (A sketch of the current pull call follows.)
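For concreteness, here is a sketch of how such a pull is typically issued with this SDK, assuming the PullOptions overload of PullAsync is available in this client version (the query id is hypothetical; PullOptions.MaxPageSize is what surfaces as $top on the wire):

using System.Collections.Generic;
using System.Threading;
using Microsoft.WindowsAzure.MobileServices.Sync;

// Inside an async method; _abcTable is the IMobileServiceSyncTable for the response table.
var query = _abcTable.CreateQuery()
    .WithParameters(new Dictionary<string, string> { { "ProjectId", "2" } });

await _abcTable.PullAsync(
    "xxxxxxResponseQuery",                    // incremental-sync query id (hypothetical)
    query,
    false,                                    // pushOtherTables
    CancellationToken.None,
    new PullOptions { MaxPageSize = 5000 });  // 5000 fails on Android, 500 works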
Package versions:
Microsoft.Azure.Mobile.Client version="3.1.0"
Microsoft.Azure.Mobile.Client.SQLiteStore version="3.1.0"
Microsoft.Bcl version="1.1.10"
Microsoft.Bcl.Build version="1.0.21"
SQLite.Net.Core-PCL version="3.1.1"
SQLite.Net-PCL version="3.1.1"
SQLitePCLRaw.bundle_green version="1.1.2"
SQLitePCLRaw.core version="1.1.2"
SQLitePCLRaw.lib.e_sqlite3.android version="1.1.2"
SQLitePCLRaw.provider.e_sqlite3.android version="1.1.2"
Please let me know if I need to provide more information. Thanks.
Error executing SQLite command: 'too many SQL variables'
Per my understanding, your SQLite may be hitting the Maximum Number Of Host Parameters In A Single SQL Statement limit, which the SQLite documentation describes as follows:
A host parameter is a place-holder in an SQL statement that is filled in using one of the sqlite3_bind_XXXX() interfaces. Many SQL programmers are familiar with using a question mark ("?") as a host parameter. SQLite also supports named host parameters prefaced by ":", "$", or "#" and numbered host parameters of the form "?123".
Each host parameter in an SQLite statement is assigned a number. The numbers normally begin with 1 and increase by one with each new parameter. However, when the "?123" form is used, the host parameter number is the number that follows the question mark.
SQLite allocates space to hold all host parameters between 1 and the largest host parameter number used. Hence, an SQL statement that contains a host parameter like ?1000000000 would require gigabytes of storage. This could easily overwhelm the resources of the host machine. To prevent excessive memory allocations, the maximum value of a host parameter number is SQLITE_MAX_VARIABLE_NUMBER, which defaults to 999.
The maximum host parameter number can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_VARIABLE_NUMBER,size) interface.
I referred to Debugging the Offline Cache and initialized my MobileServiceSQLiteStore as follows:
var store = new MobileServiceSQLiteStoreWithLogging("localstore.db");
I logged all the SQL commands that are executed against the SQLite store when invoking PullAsync. I found that after successfully retrieving the response from the mobile backend via the following request:
https://{your-app-name}.azurewebsites.net/tables/TodoItem?$filter=((UserId%20eq%20null)%20and%20(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00'))&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Microsoft.Azure.Mobile.Client.SQLiteStore.dll executes the following SQL statements to update the related local table:
BEGIN TRANSACTION
INSERT OR IGNORE INTO [TodoItem] ([id]) VALUES (#p0),(#p1),(#p2),(#p3),(#p4),(#p5),(#p6),(#p7),(#p8),(#p9),(#p10),(#p11),(#p12),(#p13),(#p14),(#p15),(#p16),(#p17),(#p18),(#p19),(#p20),(#p21),(#p22),(#p23),(#p24),(#p25),(#p26),(#p27),(#p28),(#p29),(#p30),(#p31),(#p32),(#p33),(#p34),(#p35),(#p36),(#p37),(#p38),(#p39),(#p40),(#p41),(#p42),(#p43),(#p44),(#p45),(#p46),(#p47),(#p48),(#p49)
UPDATE [TodoItem] SET [Text] = #p0,[UserId] = #p1 WHERE [id] = #p2
UPDATE [TodoItem] SET [Text] = #p0,[UserId] = #p1 WHERE [id] = #p2
.
.
COMMIT TRANSACTION
Since that batched INSERT OR IGNORE statement binds one host parameter per pulled record, a page of 5000 records needs 5000 SQL variables, well over SQLite's default limit of 999. Per my understanding, you could therefore try setting MaxPageSize to at most 999. Also, this limitation comes from SQLite itself, and the update processing is handled automatically by Microsoft.Azure.Mobile.Client.SQLiteStore; for now, I haven't found any way to override that processing.
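A minimal sketch of that suggestion, again assuming the PullOptions overload of PullAsync (names are placeholders): with MaxPageSize at or below 999, each page's batched INSERT OR IGNORE stays within SQLite's default host-parameter limit.

using System.Threading;
using Microsoft.WindowsAzure.MobileServices.Sync;

// One host parameter is bound per pulled record id, so cap pages at 999.
var pullOptions = new PullOptions { MaxPageSize = 999 };

await _abcTable.PullAsync(
    "xxxxxxResponseQuery",   // incremental-sync query id (hypothetical)
    _abcTable.CreateQuery(), // same filter/custom parameters as before
    false,                   // pushOtherTables
    CancellationToken.None,
    pullOptions);

This needs roughly half the round trips of the $top=500 workaround while staying under the limit.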
I'm trying to set up a very simple BAM scenario within BizTalk Server 2013 R2 to build upon, involving tracking just the receive time of all incoming messages processed by a port.
To this end I have:
- Within Excel, created an Activity Definition (called SimpleReceiveTest) containing a single Item called ReceiveTime of type Milestone (date/time), and a View Definition (also called SimpleReceiveTest) containing just this Activity Definition and Item.
- Imported this BAM definition spreadsheet using bm.exe.
- Added view rights to SimpleReceiveTest, again using bm.exe (example commands after this list).
- Launched the Tracking Profile Editor, imported the BAM Activity Definition, and mapped ActivityID = MessageID and ReceiveTime = PortStartTime by drag and drop from the Messaging Property Schema.
- Set the Port Mappings for MessageID and PortStartTime to relate to a test Receive Port ReceivePort1 that I am using for testing. This is using a pass-through pipeline.
- Saved and applied the above Tracking Profile.
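For reference, the bm.exe steps were along these lines, run from the BizTalk Tracking folder (the file name and account are placeholders):

bm.exe deploy-all -DefinitionFile:SimpleReceiveTest.xlsx
bm.exe add-account -AccountName:MYDOMAIN\MyUser -View:SimpleReceiveTest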
It is my understanding that for any messages received on port ReceivePort1 I should now get a tracking activity created. However this is not happening – there are no records in any of the BAM tables/views and no data is available within the BAM Portal.
I have tried restarting the hosts, and have verified that the TDDS_FailedTrackingData table is empty, there is nothing relevant in the event log, a Tracking host is running and the SQL Agent Jobs are running. I have also tried running these jobs manually.
Have I missed something, and am I correct in my expectation that this simple scenario should create tracked activities for any messages passing through the Receive Port? If so what can I try to further diagnose this?
Now fixed: it's actually a bug in vanilla BizTalk 2013 R2 when using a standard pipeline, which has been fixed in CU2.
FIX: BAM tracking doesn’t work when you use the XMLReceive or a custom pipeline in BizTalk Server
I would like to know the process integration steps, through outbound ports.
Whenever certain events occur in Dynamics AX, we want to be notified of those events in the form of XML (process integration).
Example: Sales Order Creation, Customer Creation, Purchase Order Creation.
Outbound ports are only useful for asynchronous communication.
See AX 2012 Export Data with Outbound ports for an example (using the file system).
The steps to initiate sending data are in the AIF_SendCustomer job.
As this is no lightweight operation, you may consider logging the records which need integration in a custom integration table, then doing the processing in batch.
The logging is done in the table's insert and/or update (and maybe delete) methods.
Deletes require you to store the RecId field value in the external system so it can be used for delete requests. The following does not cover this.
On the logged table, add the following method (XXXRecordLog is the custom integration log table):
void syncRecord()
{
    XXXRecordLog log;

    log.RefTableId = this.TableId; // table of the changed record
    log.RefRecId   = this.RecId;   // the changed record itself
    log.insert();
}
Then call this.syncRecord() in the insert and update methods.
In the query used by the outbound service, be sure to use an exists join between your table and the log table; this way only changed records are exported.
Make a batch job to do the transfer using the AIF_SendCustomer as a template.
After a synchronous (AifSendMode::Sync) transfer of the records, delete the log records (or mark them transferred).
Finally call AIFoutboundProcessingService to flush the file:
new AIFoutboundProcessingService().run();
Try to keep things simple. It might be simpler to do a comma-separated file export of the changed records!
In BizTalk 2010, I am using the SQL Adapter to poll a table, creating a message that initiates an orchestration.
I modified the stored procedure without changing the schema, but after the change I started getting errors and SQL polling stopped. After I restarted the host instance, it started working again.
So my question is: is restarting the host instance mandatory after changing a stored procedure?
Error is "The adapter "WCF-Custom" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.AdapterException: The ResultSet returned as part of the Typed Stored Procedure or Typed Polling invocation did not match the metadata available. If this Stored Procedure or Polling Statement can return a variable number of result sets, consider using the un-typed Stored Procedure or un-typed Polling operation instead."
Can anyone please suggest what could be the root cause?
Thanks,
Sasikumar.S
Yes, you will need to restart the Host Instance of the Host configured for your WCF-SQL Handler.
Under the hood, the first time a particular stored procedure is called, the WCF-SQL adapter first executes it with the SET FMTONLY ON flag. This causes SQL Server to return just the datatypes of the expected data without executing the procedure body itself; you can see the same metadata-only behavior in SSMS by running SET FMTONLY ON; EXEC dbo.YourPollingProc; SET FMTONLY OFF; (procedure name is a placeholder). The adapter caches these datatypes for the lifetime of the host process.
If you change the shape of the data returned by the stored procedure, the cached metadata is out of sync the next time it executes, and the result set can no longer be coerced into the expected type. Hence the need to restart the Host Instance(s).
TL;DR - If you change a stored procedure, you need to restart the WCF-SQL Host Instance.
A classic producer-consumer problem.
I have x app servers which write records into a DB table (the same DB).
On each server, a service is running which polls the DB table and is supposed to read the oldest entry, process it, and delete it.
The issue is that the services get into a race condition: the service on server A starts reading, and the service on server B starts reading the same record. I'm a bit stuck on this... I have implemented producer-consumer so often, but never across server boundaries.
The servers cannot talk to each other except over the DB.
The environment is SQL Server 2005 and ASP.NET 3.5.
If you pick up work in a transactional way, only one server can pick it up:
set transaction isolation level repeatable read
update top (1) tbl
set ProcessingOnServer = HOST_NAME()
from YourWorkTable tbl
where ProcessingOnServer is null
and Done = 0
Now you can select the details, knowing the work item is safely assigned to you:
select *
from YourWorkTable tbl
where ProcessingOnServer = HOST_NAME()
and Done = 0
The function host_name() returns the client name, but if you think it's safer you can pass in the hostname from your client application.
We usually add a timestamp, so you can check for servers that took too long to process an item.
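For completeness, here is a minimal sketch of this claim-then-read pattern from an ASP.NET 3.5 service, using plain ADO.NET (the table and column names are the ones above; the connection string is a placeholder):

using System;
using System.Data.SqlClient;

static class WorkQueue
{
    const string ConnStr = @"Server=.;Database=Work;Integrated Security=SSPI;"; // placeholder

    // Atomically claims at most one unassigned item for this server;
    // returns true if an item was claimed.
    public static bool TryClaimItem()
    {
        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            @"UPDATE TOP (1) tbl
                 SET ProcessingOnServer = HOST_NAME()
                FROM YourWorkTable tbl
               WHERE ProcessingOnServer IS NULL
                 AND Done = 0;
              SELECT @@ROWCOUNT;", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}

A service would call TryClaimItem() on each poll, and only on true run the select shown above to fetch the claimed row's details.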