I'm building a Symfony application that stores a huge number of SMS messages in the database; Kannel detects these SMS and sends them (I'm using sqlbox, of course). The problem is that Kannel notifies our Symfony app about each SMS through the dlr-url, which causes a lot of Apache memory usage, because for every SMS we get about 3 HTTP requests from the DLR callback to update the SMS, so for 100k SMS we get 300k requests, and each request updates the database...
So what I'm thinking is: why not have Kannel update the SMS status in the database directly, without calling the dlr-url? Is that possible?
From my understanding, your tests are based on the following configuration:
sqlbox to send messages (through inserts into the send_sms table)
dlr-url set in your configuration to get delivery reports
no custom dlr-storage
How to keep DLRs without additional HTTP calls
There is already a built-in mechanism to store DLRs in a database automatically: that is the purpose of dlr-storage.
In the Kannel documentation, you will see that this field has several possible values:
Supported types are: internal, spool, mysql, pgsql, sdb, mssql,
sqlite3, oracle and redis. By default this is set to internal.
From my experience, when using a database dlr-storage, delivery reports (DLRs) are only kept in the table while the delivered status has not yet been received; after that they are deleted automatically.
So if you wish to keep some logs about the sent items, you need to edit some files (gw/dlr_mysql.c and gw/dlr.c) to prevent this deletion.
Configuration of the dlr-storage
Here I will provide an example with MySQL.
Sample additional configuration in the kannel.conf file:
# this line must be in the "core" group
dlr-storage = mysql
#---------------------------------------------
# DLR STORAGE
#
#
group = mysql-connection
id = mydlr
host = localhost
username = *yourMySqlUserName*
password = *yourMySqlPass*
database = *yourMySqlDatabaseWithTheDlrTable*
max-connections = 1
# Group defining where the data lives in the db (table, columns)
group = dlr-db
id = mydlr
table = dlr
field-smsc = smsc
field-timestamp = ts
field-destination = destination
field-source = source
field-service = service
field-url = url
field-mask = mask
field-status = status
field-boxc-id = boxc
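For completeness, the dlr-db group above expects a matching table to exist in that database. A minimal MySQL sketch of such a table, using the column names configured above, could look like this (the column types are assumptions; adjust them to your needs):
CREATE TABLE dlr (
    smsc        VARCHAR(40),   -- SMSC id the message was sent through
    ts          VARCHAR(65),   -- timestamp used to match the DLR
    destination VARCHAR(40),
    source      VARCHAR(40),
    service     VARCHAR(40),
    url         VARCHAR(255),  -- the dlr-url passed with the message
    mask        INT,           -- requested DLR mask
    status      INT,           -- delivery status
    boxc        VARCHAR(40)    -- box connection id
);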
Related
I'm writing some R code that queries a MongoDB database, imports records matching the query criteria into R, performs record linkage with another data source, and then pushes the updated records back into MongoDB.
The code needs to work with any instance of the MongoDB database. Some people have it installed as standalone on their own computers, while others have it installed on their organisational servers. Note that these are servers specific to individual organisations and not the public mongo server.
To test my code, I have access to both scenarios - one instance is set up on my own computer, and I have several remote server instances as well.
The MongoDB database has some APIs, but I was struggling with adapting the APIs to include the correct syntax to form my query, so I thought I would try the mongolite package instead.
I was able to create a successful connection string for the MongoDB instance on my local computer, using my user ID (which I retrieve first with an API and save as the R object myids), my password, localhost and the port number, as below:
# Load library:
library(mongolite)
# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb://",
                                     myids$userid,
                                     ":",
                                     rawToChar(password),
                                     "@localhost:3000"))
I understood from reading the mongolite user manual that to create the connection string / URI, you skip the http or https part of the address and preface it with either mongodb:// when the MongoDB database is on a local computer, or mongodb+srv:// when the MongoDB database is on a remote server.
However, when I try just changing the prefix and login details for the remote server version, the connection fails. Say the URL for my remote server is https://mydb-r21.orgname.org/, which opens a web page where you can log in to the MongoDB database and interact with it via a graphical user interface. Just swapping localhost:3000 for the web address mydb-r21.orgname.org/ and supplying the relevant login credentials for that server doesn't work:
# Load library:
library(mongolite)
# Create connection:
con <- mongolite::mongo(collection = "person",
                        db = "go-data",
                        url = paste0("mongodb+srv://",
                                     myids$userid,
                                     ":",
                                     rawToChar(password),
                                     "@mydb-r21.orgname.org/"))
When I try, this is the error I get:
Warning: [ERROR] Failed to look up SRV record "_mongodb._tcp.mydb-r21.orgname.org": DNS name does not exist.
Error: Invalid uri_string. Try mongodb://localhost
If I try changing it to mongodb:// (not localhost, because it isn't hosted locally) I get this:
Error: No suitable servers found (`serverSelectionTryOnce` set): [connection timeout calling hello on 'mydb-r21.orgname.org:27017']
Interestingly, the port that is suffixed in the error message is the correct one that I was expecting, but that still doesn't help me.
The documentation in the mongolite user manual and other examples I've found online seem to add some read/write specifications to the connection string, but as I'm not very familiar with how connection strings are constructed, I don't know whether these are specific to the databases used in those examples. I can't find any clear explanation of what the extra parts that are not part of the URL mean, e.g. as shown in this blog. All the prefixes seem to be slightly different too, so I'm not even sure what would be appropriate to try in my case.
Can anyone explain why the connection string works fine with localhost:port number for the local instance, but doesn't work with the URL for the remote server / online instance?
Also what do I need to do to make the URI for the remote server valid?
This looks like a limitation of the Microsoft Azure Mobile Client offline sync service for Android.
In my Xamarin.Forms application I have 40 Azure tables to sync with the remote backend. Whenever a particular request (_abcTable.PullAsync) involves a larger number of records, like 5K, PullAsync returns an exception saying: Error executing SQLite command: 'too many SQL variables'.
That PullAsync URL looks like this: https://abc-xyz.hds.host.com/AppHostMobile/tables/XXXXXXResponse?$filter=(updatedAt ge datetimeoffset'2017-06-20T13:26:17.8200000%2B00:00')&$orderby=updatedAt&$skip=0&$top=5000&ProjectId=2&__includeDeleted=true.
But in Postman I can see the same URL returning the 5K records, and it works fine on an iPhone device as well; it fails only on Android.
If I change the "top" parameter in the above PullAsync request from 5000 to 500 it works fine on Android, but it takes more time. Do I have any other alternatives that don't limit performance?
Package version:
Microsoft.Azure.Mobile.Client version="3.1.0"
Microsoft.Azure.Mobile.Client.SQLiteStore" version=“3.1.0”
Microsoft.Bcl version="1.1.10"
Microsoft.Bcl.Build version="1.0.21"
SQLite.Net.Core-PCL version="3.1.1"
SQLite.Net-PCL version="3.1.1"
SQLitePCLRaw.bundle_green version="1.1.2"
SQLitePCLRaw.core" version="1.1.2"
SQLitePCLRaw.lib.e_sqlite3.android" version="1.1.2"
SQLitePCLRaw.provider.e_sqlite3.android" version="1.1.2"
Please let me know if I need to provide more information. Thanks.
Error executing SQLite command: 'too many SQL variables'
Per my understanding, your SQLite may be hitting the Maximum Number Of Host Parameters In A Single SQL Statement limit, described as follows:
A host parameter is a place-holder in an SQL statement that is filled in using one of the sqlite3_bind_XXXX() interfaces. Many SQL programmers are familiar with using a question mark ("?") as a host parameter. SQLite also supports named host parameters prefaced by ":", "$", or "#" and numbered host parameters of the form "?123".
Each host parameter in an SQLite statement is assigned a number. The numbers normally begin with 1 and increase by one with each new parameter. However, when the "?123" form is used, the host parameter number is the number that follows the question mark.
SQLite allocates space to hold all host parameters between 1 and the largest host parameter number used. Hence, an SQL statement that contains a host parameter like ?1000000000 would require gigabytes of storage. This could easily overwhelm the resources of the host machine. To prevent excessive memory allocations, the maximum value of a host parameter number is SQLITE_MAX_VARIABLE_NUMBER, which defaults to 999.
The maximum host parameter number can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_VARIABLE_NUMBER,size) interface.
I referred to Debugging the Offline Cache and initialized my MobileServiceSQLiteStore as follows:
var store = new MobileServiceSQLiteStoreWithLogging("localstore.db");
I logged all the SQL commands that are executed against the SQLite store when invoking PullAsync, and found that after successfully retrieving the response from the mobile backend via the following request:
https://{your-app-name}.azurewebsites.net/tables/TodoItem?$filter=((UserId%20eq%20null)%20and%20(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00'))&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Microsoft.Azure.Mobile.Client.SQLiteStore.dll would execute the following SQL statements to update the related local table:
BEGIN TRANSACTION
INSERT OR IGNORE INTO [TodoItem] ([id]) VALUES (@p0),(@p1),(@p2),(@p3),(@p4),(@p5),(@p6),(@p7),(@p8),(@p9),(@p10),(@p11),(@p12),(@p13),(@p14),(@p15),(@p16),(@p17),(@p18),(@p19),(@p20),(@p21),(@p22),(@p23),(@p24),(@p25),(@p26),(@p27),(@p28),(@p29),(@p30),(@p31),(@p32),(@p33),(@p34),(@p35),(@p36),(@p37),(@p38),(@p39),(@p40),(@p41),(@p42),(@p43),(@p44),(@p45),(@p46),(@p47),(@p48),(@p49)
UPDATE [TodoItem] SET [Text] = @p0,[UserId] = @p1 WHERE [id] = @p2
UPDATE [TodoItem] SET [Text] = @p0,[UserId] = @p1 WHERE [id] = @p2
.
.
COMMIT TRANSACTION
Per my understanding, you could try setting MaxPageSize up to 999. Also, this limitation comes from SQLite itself, and the update processing is handled automatically by Microsoft.Azure.Mobile.Client.SQLiteStore. For now, I haven't found any way to override that processing in Microsoft.Azure.Mobile.Client.SQLiteStore.
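As an illustration only, here is a sketch of capping the page size through PullOptions when calling PullAsync (the sync table, DTO type and query id are placeholders taken from the question; check that this PullAsync overload and PullOptions.MaxPageSize are available in your client version):
using System.Threading;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices.Sync;

public async Task PullResponsesAsync(IMobileServiceSyncTable<XXXXXXResponse> _abcTable)
{
    // Smaller pages mean fewer ids per INSERT OR IGNORE batch, which keeps each
    // statement under SQLite's default limit of 999 host parameters.
    var pullOptions = new PullOptions { MaxPageSize = 500 };

    await _abcTable.PullAsync(
        "allXXXXXXResponse",        // incremental sync query id (placeholder)
        _abcTable.CreateQuery(),
        false,                      // pushOtherTables
        CancellationToken.None,
        pullOptions);
}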
I'm building an application in Symfony2 where every user gets its own database, meaning all users have their own set of database credentials. The user doesn't know those credentials, they are stored within the application.
Depending on which user is logged in, the application retrieves the user specific credentials and stores data in the user specific database.
I'm using Propel as ORM and I know I can set up multiple connections, but all the solutions I came across require knowing the connection details beforehand, and I do not know which user will register and log in.
So my question is: how can I initiate the proper database connection?
Supposing you already have a connection (if needed, to a dummy database), you can change your connection parameters by doing the following:
// Get current configuration
$config = \Propel::getConfiguration();
// Change DB configuration
$config['datasources']['default']['connection']['dsn'] = 'mysql:host=127.0.0.1;port=3306;dbname=dbname;charset=UTF8';
$config['datasources']['default']['connection']['user'] = 'username';
$config['datasources']['default']['connection']['password'] = 'password';
// Apply configuration
\Propel::setConfiguration($config);
\Propel::initialize();
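For example, a hedged sketch of wrapping this in a helper that is called once the logged-in user's credentials have been looked up (the function name and the way the credentials are retrieved are hypothetical):
// Hypothetical helper: point Propel's default datasource at the
// database that belongs to the currently logged-in user.
function switchToUserDatabase($host, $dbname, $user, $password)
{
    $config = \Propel::getConfiguration();

    $config['datasources']['default']['connection']['dsn'] =
        sprintf('mysql:host=%s;port=3306;dbname=%s;charset=UTF8', $host, $dbname);
    $config['datasources']['default']['connection']['user'] = $user;
    $config['datasources']['default']['connection']['password'] = $password;

    \Propel::setConfiguration($config);
    \Propel::initialize();
}

// Example usage after authentication, assuming the credentials are stored
// on your own user record (getter names are hypothetical):
// switchToUserDatabase('127.0.0.1', $appUser->getDbName(), $appUser->getDbUser(), $appUser->getDbPassword());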
Classic producer-consumer problem.
I have x app servers which write records in a DB table (same DB).
On each server, a service is running which polls the DB table and is supposed to read the oldest entry, process it and delete it.
The issue is that the services get into a race condition: the service on server A starts reading, and the service on server B starts reading the same record. I'm a bit stuck on this... I have implemented producer-consumer so often, but never across server boundaries.
The servers cannot talk to each other except through the DB.
Environment is SQL Server 2005 and ASP.NET 3.5.
If you pick up work in a transactional way, only one server can pick it up:
set transaction isolation level repeatable read
update top (1) tbl
set ProcessingOnServer = HOST_NAME()
from YourWorkTable tbl
where ProcessingOnServer is null
and Done = 0
Now you can select the details, knowing the work item is safely assigned to you:
select *
from YourWorkTable tbl
where ProcessingOnServer = HOST_NAME()
and Done = 0
The function host_name() returns the client name, but if you think it's safer you can pass in the hostname from your client application.
We usually add a timestamp, so you can check for servers that took too long to process an item.
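To make that concrete, here is a hedged sketch of what such a work table could look like (the Id, Payload and StartedAt column names are assumptions; ProcessingOnServer and Done are the columns used in the queries above):
create table YourWorkTable
(
    Id int identity(1, 1) primary key,      -- surrogate key (name is an assumption)
    Payload nvarchar(max) null,             -- the actual work item (placeholder column)
    ProcessingOnServer nvarchar(128) null,  -- set to HOST_NAME() by the claiming update
    StartedAt datetime null,                -- when the item was claimed, to spot slow servers
    Done bit not null default 0
)
The claiming update can then also set StartedAt = getdate() at the moment it assigns the row.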
My receive port uses the sqlBinding with typed polling. It invokes a stored procedure to fetch a record, and based on a filter condition the corresponding orchestration kicks off. The BizTalk group consists of 2 servers, thus 2 receive host instances. If both host instances are running, at some point the same request is read twice, causing a duplicate at the receiver's end. But why is the receive port reading the same record more than once? The proc which reads the record also updates it so that it won't be fetched again.
I observed this scenario while submitting 10 requests; the receive port read 11 times and 11 orchestrations started.
I tried the same (10 requests) with one host instance (as in my dev environment), and the receive count is 10 only. Any clues?
The quick answer is that you have two options to fix this problem:
Fix your stored procedure so that it behaves correctly in concurrent situations.
Place your SQL polling receive handler within a clustered BizTalk host.
Below is an explanation of what is going on, and under that I give details of implementations to fix the issue:
Explanation
This is due to the way BizTalk receive locations work when running on multiple host instances (that is, that the receive handler for the adapter specified in the receive location is running on a host that has multiple host instances).
In this situation both of the host instances will run their receive handler.
This is usually not a problem - most of the receive adapters can manage this and give you the behaviour you would expect. For example, the file adapter places a lock on files while they are being read, preventing double reads.
The main place where this is a problem is exactly what you are seeing - when a polling SQL receive location is hitting a stored procedure. In this case BizTalk has no option other than to trust the SQL procedure to give the correct results.
It is hard to tell without seeing your procedure, but the way you are querying your records is not guaranteeing unique reads.
Perhaps you have something like:
Select * From Record
Where Status = 'Unread'
Update Record
Set Status = 'Read'
Where Status = 'Unread'
The above procedure can give duplicate records because, between the select and the update, another call of the select can sneak in and select the records that have not yet been updated.
Implementing a solution
Fixing the procedure
One simple fix to the procedure is to update with a unique id first:
Update Record
Set UpdateId = @@SPID, Status = 'Reading'
Where Status = 'Unread'
Select * From Record
Where UpdateId = @@SPID
And Status = 'Reading'
Update Record
Set Status = 'Read'
Where UpdateId = @@SPID
And Status = 'Reading'
@@SPID should be unique, but if it proves not to be you could use newid()
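If you do switch to newid(), a sketch using the same table and column names might look like this (note the UpdateId column then needs to be able to hold a uniqueidentifier):
declare @claim uniqueidentifier
set @claim = newid()

Update Record
Set UpdateId = @claim, Status = 'Reading'
Where Status = 'Unread'

Select * From Record
Where UpdateId = @claim
And Status = 'Reading'

Update Record
Set Status = 'Read'
Where UpdateId = @claim
And Status = 'Reading'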
Using a clustered host
Within the BizTalk Server admin console, when creating a new host it is possible to specify that the host is clustered. Details on doing this are in this post by Kent Weare.
Essentially you create a host as normal, with host instances on each server, then right click the host and select cluster.
You then create a SQL receive handler for the polling that works under that host and use this handler in your receive location.
A BizTalk clustered host ensures that all items that are members of that host will run on one and only one host instance at a time. This will include your SQL receive location, so you will not have any chance of race conditions when calling your procedure.