Problem in Flutter with Sqflite exception - sqlite

I have a problem with my code. I wrote a simple Flutter app, which is a note app, and I included SQLite as the database. I ran the app at first via the emulator and everything went fine, but when I tried to run it on my real device (which is an Android device), the database did not respond (i.e. I could not add new notes to the database). When I went back to running the app via the emulator, it did the same thing it did on my real device, and in the console I found this error:
Error: Can't use 'SqfliteDatabaseException' because it is declared more than once.
I need help, please.

I saw your code and the problem is that the exception you get is probably related to this one:
PlatformException(sqlite_error, UNIQUE constraint failed: Notetable.id
And that's because you need to manage the uniqueness of your primary key when you insert a new row. You can have a look at this SO question for a quick reference.
So, just to quickly get your code working, I've made these changes (please take this code only as a reference and write a better version):
void createDataBase(Database db, int newVersion) async {
  await db.execute('CREATE TABLE IF NOT EXISTS $noteTable ($col_id INTEGER PRIMARY KEY, '
      '$col_title TEXT, $col_description TEXT, $col_date TEXT, $col_priority INTEGER)');
}
And
Future<int> insertData(myNote note) async {
  var mdatabase = await database;
  var _newNoteMap = note.convertToMap();
  // Clear the id so SQLite assigns a fresh auto-increment value instead of
  // colliding with an existing primary key.
  _newNoteMap['id'] = null;
  var result = await mdatabase.insert(noteTable, _newNoteMap);
  return result;
}
Pay attention that with this code you always call a DB insert, even when you are updating an existing note.
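For illustration only, here is a minimal sketch of how you could branch between insert and update instead, reusing the names from the snippets above (it assumes convertToMap() puts the note's id under the 'id' key):
Future<int> saveData(myNote note) async {
  var mdatabase = await database;
  var noteMap = note.convertToMap();
  if (noteMap['id'] == null) {
    // New note: let SQLite assign the INTEGER PRIMARY KEY automatically.
    return await mdatabase.insert(noteTable, noteMap);
  }
  // Existing note: update the matching row instead of inserting a duplicate id.
  return await mdatabase.update(noteTable, noteMap,
      where: '$col_id = ?', whereArgs: [noteMap['id']]);
}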
UPDATE: added additional modifications (not listed before)
In databaseObject.dart:
Map<String, dynamic> convertToMap() {
  var mapObject = Map<String, dynamic>();
  mapObject["id"] = _id;
  mapObject["title"] = _title;
  mapObject["description"] = _description;
  mapObject["date"] = _date;
  mapObject["priority"] = _priority;
  return mapObject;
}
In Note.dart:
if (res >= 1) {
  showAlertDialog('Status', "New note added successfully and the value of result is $res");
}

Related

Ingest from storage with persistDetails = true does not save the ingest status result

I'm implementing a program to migrate a large amount of data to ADX based on the Ingest from Storage feature of ADX, and I need to check the status of each ingestion request once the request finishes, but I'm facing an issue.
Based on the MS documentation here:
If I set persistDetails = true, for example with the command below, it should save the ingestion status, but currently this setting does not seem to work (the result is the same with or without it):
.ingest async into table MigrateTable
(
h'correct blob url link'
)
with (
jsonMappingReference = 'table_mapping',
format = 'json',
persistDetails = true
)
The above command returns an OperationId, and when I use it to check the ingestion status after the ingest task finishes, I always get this error message:
Error An admin command cannot be executed due to an invalid state: State='Operation 'DataIngestPull' does not persist its operation results' clientRequestId: KustoWebV2;
Can someone clarify the root cause of this for me? To me it seems like a bug in ADX.
Ingesting data directly against the Data Engine, by running .ingest commands, is usually not recommended, compared to using Queued Ingestion (motivation included in the link). Using Kusto's ingestion client library allows you to track the ingestion status.
Some tools/services already do that for you, and you can consider using them directly, e.g. LightIngest or Azure Data Factory.
If you don't follow option 1, you can still look for the state/status of your command using the operation ID you get when using the async keyword, by using .show operations
You can also use the client request ID to filter the result set of .show commands to view the state/status of your command.
If you're interested in looking specifically at failures, .show ingestion failures is also available for you.
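For reference, those checks look roughly like this (a sketch only; the operation ID and client request ID values are placeholders to replace with your own):
// Check the state/status of an async command by the operation ID it returned
.show operations 01234567-89ab-cdef-0123-456789abcdef
// Or filter recent commands by client request ID
.show commands
| where ClientActivityId == "<your client request id>"
| project StartedOn, CommandType, State, FailureReason
// Look specifically at ingestion failures for the target table
.show ingestion failures
| where Table == "MigrateTable"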
The persistDetails option you specified in your .ingest command actually has no effect - as mentioned in the docs:
Not all control commands persist their results, and those that do usually do so by default on asynchronous executions only (using the async keyword). Please search the documentation for the specific command and check if it does (see, for example data export).
============ Update: sample code following the suggestion from Yoni ============
It turns out another member of my team had messed up the access rights on ADX; after fixing that, everything works fine.
I just have one concern related to PartiallySucceeded that needs clarification from #yoni or someone with better knowledge of it.
try
{
    var ingestProps = new KustoQueuedIngestionProperties(model.DatabaseName, model.IngestTableName)
    {
        ReportLevel = IngestionReportLevel.FailuresAndSuccesses,
        ReportMethod = IngestionReportMethod.Table,
        FlushImmediately = true,
        JSONMappingReference = model.IngestMappingName,
        AdditionalProperties = new Dictionary<string, string>
        {
            { "jsonMappingReference", $"{model.IngestMappingName}" },
            { "format", "json" }
        }
    };

    var sourceId = Guid.NewGuid();
    var clientResult = await IngestClient.IngestFromStorageAsync(model.FileBlobUrl, ingestProps, new StorageSourceOptions
    {
        DeleteSourceOnSuccess = true,
        SourceId = sourceId
    });

    // Poll the status table until the ingestion leaves the Pending state.
    var ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    while (ingestionStatus.Status == Status.Pending)
    {
        await Task.Delay(WaitingInterval);
        ingestionStatus = clientResult.GetIngestionStatusBySourceId(sourceId);
    }

    if (ingestionStatus.Status == Status.Succeeded)
    {
        return true;
    }

    LogUtils.TraceError(_logger, $"Error when ingesting blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
catch (Exception e)
{
    // Treat any unexpected exception as a failed ingestion.
    return false;
}
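One way to handle the PartiallySucceeded status mentioned above (a sketch only; it assumes the Status enum of your Kusto.Ingest version includes a PartiallySucceeded value) is to check for it explicitly after the polling loop, rather than lumping it in with plain failures:
if (ingestionStatus.Status == Status.Succeeded)
{
    return true;
}
if (ingestionStatus.Status == Status.PartiallySucceeded)
{
    // Some records were ingested and some were not; log it and decide whether
    // re-ingesting the blob is acceptable for your scenario.
    LogUtils.TraceError(_logger, $"Ingestion partially succeeded, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
    return false;
}
LogUtils.TraceError(_logger, $"Error when ingesting blob file events, error: {ingestionStatus.ErrorCode.FastGetDescription()}");
return false;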

Xamarin Forms Azure Mobile Apps slow sync

I'm using Azure Mobile Apps with Xamarin.Forms to create an offline-capable mobile app.
My solution is based on https://adrianhall.github.io/develop-mobile-apps-with-csharp-and-azure/chapter3/client/
Here is the code that I use for offline sync:
public class AzureDataSource
{
    private async Task InitializeAsync()
    {
        // Short circuit - local database is already initialized
        if (client.SyncContext.IsInitialized)
        {
            return;
        }

        // Define the database schema
        store.DefineTable<ArrayElement>();
        store.DefineTable<InputAnswer>();
        // Same thing with 16 other tables
        ...

        // Actually create the store and update the schema
        await client.SyncContext.InitializeAsync(store, new MobileServiceSyncHandler());
    }

    public async Task SyncOfflineCacheAsync()
    {
        await InitializeAsync();

        // Check if authenticated
        if (client.CurrentUser != null)
        {
            // Push the Operations Queue to the mobile backend
            await client.SyncContext.PushAsync();

            // Pull each sync table
            var arrayTable = await GetTableAsync<ArrayElement>();
            await arrayTable.PullAsync();

            var inputAnswerInstanceTable = await GetTableAsync<InputAnswer>();
            await inputAnswerInstanceTable.PullAsync();
            // Same thing with 16 other tables
            ...
        }
    }

    public async Task<IGenericTable<T>> GetTableAsync<T>() where T : TableData
    {
        await InitializeAsync();
        return new AzureCloudTable<T>(client);
    }
}

public class AzureCloudTable<T>
{
    public AzureCloudTable(MobileServiceClient client)
    {
        this.client = client;
        this.table = client.GetSyncTable<T>();
    }

    public async Task PullAsync()
    {
        // Query name used for incremental pull
        string queryName = $"incsync_{typeof(T).Name}";
        await table.PullAsync(queryName, table.CreateQuery());
    }
}
The problem is that the syncing takes a lot of time even when there isn't anything to pull (8-9 seconds on Android devices and more than 25 seconds to pull the whole database).
I used Fiddler to see how long the Mobile Apps backend takes to respond, and it is about 50 milliseconds per request, so the problem doesn't seem to come from there.
Does anyone have the same trouble? Is there something I'm doing wrong, or any tips to improve my sync performance?
Our particular issue was linked to our database migration. Every row in the database had the same updatedAt value. We ran an SQL script to modify these so that they were all unique.
This fix was actually for some other issue we had, where not all rows were being returned for some unknown reason, but we also saw a substantial speed improvement.
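For reference, such a script can be as simple as this sketch (it assumes a table shaped like the [dbo].[TableName] used in the trigger further down, with Id and UpdatedAt columns; adjust the names to your schema):
;WITH Numbered AS
(
    SELECT UpdatedAt,
           ROW_NUMBER() OVER (ORDER BY Id) AS rn
    FROM [dbo].[TableName]
)
UPDATE Numbered
-- Spread the rows out by one millisecond each so no two share an UpdatedAt value.
SET UpdatedAt = CONVERT(DATETIMEOFFSET, DATEADD(MILLISECOND, CAST(rn AS INT), SYSUTCDATETIME()));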
Also, another weird fix that improved loading times was the following.
After we had pulled all of the data the first time (which understandably takes some time), we did an UpdateAsync() on one of the rows that were returned, and we did not push it afterwards.
We've come to understand that the way offline sync works is that it will pull anything that has a date newer than the most recent UpdatedAt. There was a small speed improvement associated with this.
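For completeness, the workaround looks roughly like this (a sketch only; it assumes an IMobileServiceSyncTable<Foo> field named fooTable, so adapt it to your own table wrapper):
// After the initial full pull, touch one cached row locally and do NOT push it.
await fooTable.PullAsync("incsync_Foo", fooTable.CreateQuery());

var firstFoo = (await fooTable.CreateQuery().Take(1).ToListAsync()).FirstOrDefault();
if (firstFoo != null)
{
    // UpdateAsync on a sync table only changes the local store; we deliberately
    // skip PushAsync afterwards.
    await fooTable.UpdateAsync(firstFoo);
}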
Finally, the last thing we did to improve speed was to not fetch the data again if a copy was already cached in the view. This may not work for your use case, though.
public List<Foo> fooList = new List<Foo>();

public async Task DisplayAllFoo()
{
    // Only hit the sync service if we don't already have a cached copy.
    if (fooList.Count == 0)
    {
        fooList = await SyncClass.GetAllFoo();
    }

    foreach (var foo in fooList)
    {
        Console.WriteLine(foo.bar);
    }
}
Edit 20th March 2019:
With these improvements in place, we are still seeing very slow sync operations, used in the same way as mentioned in the OP, also including the improvements listed in my answer here.
I encourage all to share their solutions or ideas on how this speed can be improved.
One of the reasons for a slow Pull() is when more than 10 rows get the same UpdatedAt value. This happens when you update the rows all at once, for example by running an SQL command.
One way to overcome this is to modify the default trigger on the tables. To ensure every row gets a unique UpdatedAt, we did something like this:
ALTER TRIGGER [dbo].[TR_dbo_Items_InsertUpdateDelete] ON [dbo].[TableName]
    AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @InsertedAndDeleted TABLE
    (
        Id NVARCHAR(128)
    );
    DECLARE @Count INT,
            @Id NVARCHAR(128);

    INSERT INTO @InsertedAndDeleted
    SELECT Id
    FROM inserted;

    INSERT INTO @InsertedAndDeleted
    SELECT Id
    FROM deleted
    WHERE Id NOT IN
    (
        SELECT Id
        FROM @InsertedAndDeleted
    );

    --select * from @InsertedAndDeleted;

    SELECT @Count = Count(*)
    FROM @InsertedAndDeleted;

    -- ************************ UpdatedAt ************************
    -- while loop
    WHILE @Count > 0
    BEGIN
        -- selecting
        SELECT TOP (1) @Id = Id
        FROM @InsertedAndDeleted;

        -- updating
        UPDATE [dbo].[TableName]
        SET UpdatedAt = Convert(DATETIMEOFFSET, DateAdd(MILLISECOND, @Count, SysUtcDateTime()))
        WHERE Id = @Id;

        -- deleting
        DELETE FROM @InsertedAndDeleted
        WHERE Id = @Id;

        -- counter
        SET @Count = @Count - 1;
    END;
END;

If one of multiple adds in a SaveChangesAsync fails, do the others get added?

I have this function in my application. If the insert of Phrase fails, can someone tell me whether the Audit entry still gets added? If it does, is there a way that I can package these into a single transaction that could be rolled back?
Also, if it fails, can I catch this and still have the procedure exit with an exception?
[Route("Post")]
[ValidateModel]
public async Task<IHttpActionResult> Post([FromBody]Phrase phrase)
{
    phrase.StatusId = (int)EStatus.Saved;
    UpdateHepburn(phrase);
    db.Phrases.Add(phrase);

    var audit = new Audit()
    {
        Entity = (int)EEntity.Phrase,
        Action = (int)EAudit.Insert,
        Note = phrase.English,
        UserId = userId,
        Date = DateTime.UtcNow,
        Id = phrase.PhraseId
    };
    db.Audits.Add(audit);

    await db.SaveChangesAsync();
    return Ok(phrase);
}
I have this function in my application. If the insert of Phrase fails
then can someone tell me if the Audit entry still gets added?
You have written your code correctly by calling await db.SaveChangesAsync(); only once, after making all your modifications on the DbContext.
The answer to your question is: No, the Audit will not be added if Phrase fails.
Because you are calling await db.SaveChangesAsync(); after making all your changes to your entities, Entity Framework will generate all the required SQL queries and run them in a single SQL transaction, which makes the whole set of queries an atomic operation on your database. If one of the generated queries (e.g. the Audit insert) fails, the transaction is rolled back. So every modification that was made to your database is reverted, and Entity Framework leaves your database in a coherent state.
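If you ever do need more than one SaveChangesAsync call to behave as a single unit (for example, saving the Phrase first so a database-generated key is available before building the Audit), you can wrap both calls in an explicit transaction yourself. A minimal sketch, assuming EF6's db.Database.BeginTransaction(); the surrounding code is illustrative:
using (var transaction = db.Database.BeginTransaction())
{
    try
    {
        db.Phrases.Add(phrase);
        await db.SaveChangesAsync();   // with an identity key, phrase.PhraseId is populated here

        db.Audits.Add(audit);
        await db.SaveChangesAsync();

        transaction.Commit();          // keep both saves, or neither
    }
    catch
    {
        transaction.Rollback();        // revert everything done inside the transaction
        throw;                         // let the caller see the failure
    }
}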

GitKit Client - Uploaded users cannot connect

We have an existing user database with SHA1-hashed passwords. We upload them to the Google Federated Database (through the GitKitClient Java lib), but then these uploaded users can't log in: verifyPassword always returns "Incorrect password"! The call to uploadUsers looks like gitkitClient.uploadUsers('SHA1', new byte[0], gitkitUsers).
(We must provide an empty byte array as second param (hash key), since we get NPEs if we provide a null value)
The method that creates the GitkitUsers that are in the list is as follows:
private GitkitUser createGitkitUserFromUser(User user) {
    GitkitUser gitkitUser = new GitkitUser()
    gitkitUser.email = user.email
    gitkitUser.localId = getLocalId(user)
    gitkitUser.name = user.displayName
    gitkitUser.hash = user.password?.bytes
    if (user.pictureFileName) {
        gitkitUser.photoUrl = user.getPictureUrl()
    }
    return gitkitUser
}
We see no way to investigate further. Has someone used this successfully?
Make sure that the hashKey you use in setPassword() is the same one used in uploadUsers().
I am using the PHP SDK so I can't share code with you, but when I did NOT use the same hashKey in both places, I had the same problem.

AngularFire extending the service issue

I've been looking at the documentation for Synchronized Arrays https://www.firebase.com/docs/web/libraries/angular/api.html#angularfire-extending-the-services and https://www.firebase.com/docs/web/libraries/angular/guide/extending-services.html#section-firebasearray
I'm using Firebase version 2.2.7 and AngularFire version 1.1.2
Using the code below, I'm having trouble recognizing $$removed events.
.factory("ExtendedCourseList", ["$firebaseArray", function($firebaseArray) {
// create a new service based on $firebaseArray
var ExtendedCourseList= $firebaseArray.$extend({
$$added: function(dataSnapshot, prevChild){
var course = dataSnapshot.val();
var course_key = dataSnapshot.key();
console.log("new course");
return course;
},
$$removed: function(snap){
console.log("removed");
return true;
}
});
return function(listRef) {
return new ExtendedCourseList(listRef);
}
}])
.factory("CompanyRefObj", function(CompanyRef) {
//CompanyRef is a constant containing the url string
var ref = new Firebase(CompanyRef);
return ref;
})
.factory('CourseList', function (localstorage,$rootScope,ExtendedCourseList,CompanyRefObj) {
var companyID = localstorage.get("company");
$rootScope.courseList = ExtendedCourseList(CompanyRefObj.child(companyID).child("courses"));
)
If I run this code, only the $$added events are triggered. To simulate remove events, I use the Firebase web interface to display the data, where I press the remove button and accept that the data will be deleted permanently.
Additionally, if I delete the $$removed function, the extended service still won't synchronize when a record is deleted.
If I modify my code to use the $firebaseArray instead of extending the service (as seen above) both add and remove events will be recognized.
.factory('CourseList', function(localstorage, $rootScope, $firebaseArray, CompanyRefObj) {
    var companyID = localstorage.get("company");
    $rootScope.courseList = $firebaseArray(CompanyRefObj.child(companyID).child("courses"));
})
Finally, are there any bad practices I've missed that can cause some of the extended functions to not work?
Solved
$$added: function(dataSnapshot, prevChild) {
    var course = dataSnapshot.val();
    var course_key = dataSnapshot.key();
    // Modified below
    course.$id = course_key;
    // End of modification
    console.log("new course");
    return course;
}
After posting about the issue on the firebase/angularfire GitHub, I received an answer that solved my issue. When $$added was overridden by the code provided, the $firebaseArray also lost its internal record $id.
Adding the line course.$id = course_key; before returning the course made AngularFire recognize when the record was removed from the server.
