I am using QtQuick/QML with Qt 5.2.1 on Android. I also tested this on the desktop rather than Android and I see the same problem there.
I use LocalStorage to persist application data after the application closes.
I open a database using openDatabaseSync:
var db = LocalStorage.openDatabaseSync(
    "TestDB",
    "1.0",           // version
    "Test Database",
    1000000,
    function(db) {
        createSchema(db);
        populateData(db);
    }
);
If the database does not exist, it is created and the callback function is executed; in that case I create the database schema and populate the initial dataset.
The next time the application starts, obviously I want to keep the database as-is and not recreate it.
The problem is when I restart the application I get this error:
Error: SQL: database version mismatch
If I inspect the .ini file that was created when the database was created the first time the application was run, I see this:
[General]
Description=Test Database
Driver=QSQLITE
EstimatedSize=1000000
Name=TestDB
Version=
You can clearly see a problem here is that the "Version" attribute is empty.
When the application starts up, it compares the requested version "1.0" against this empty string "" and fails.
I can fake it into working, of course, by specifying the version as "" or by hand-editing the ini file - that at least tells me the code is otherwise correct - but clearly that's not a solution.
So, did I miss something or is this a Qt bug?
You can set the database version after creating it:
var db = LocalStorage.openDatabaseSync(
    "TestDB",
    "1.0",
    "Test Database",
    1000000,
    function(db) {
        createSchema(db);
        populateData(db);
        db.changeVersion("", "1.0");
    }
);
Since the callback function will only be called if the database doesn't exist, and changeVersion will only work if the current version is "" (otherwise an exception is thrown), I believe it's safe to use it.
EDIT: Maybe this is the intended behaviour... from LocalStorage source code, line 700:
if (dbcreationCallback)
    version = QString();
So maybe you really do need to set the database version after you create your tables... before you do that in the callback, it's just an empty database and shouldn't really have a version.
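Building on that, here's a minimal sketch of an open sequence that works on both the first and subsequent runs (my own variant, not from the Qt docs; the settings table is just an example):

var db = LocalStorage.openDatabaseSync("TestDB", "", "Test Database", 1000000);

if (db.version === "") {
    // First run: the ini file still has Version=, so stamp the real
    // version and build the schema inside the upgrade transaction.
    db.changeVersion("", "1.0", function(tx) {
        tx.executeSql("CREATE TABLE IF NOT EXISTS settings(key TEXT, value TEXT)");
    });
}

Opening with the wildcard version "" succeeds whatever is on disk, so the version mismatch never triggers.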
Set the attributes like this in your code:

// db = LocalStorage.openDatabaseSync(identifier, version, description, estimated_size, callback(db))
LocalStorage.openDatabaseSync("kMusicplay", "0.1", "kMusicPlay app Ubuntu", 10000);

where 'kMusicplay' is the app name, '0.1' is the version, 'kMusicPlay app Ubuntu' is the app description, and '10000' is the estimated size of the database.
I have a need to create a copy of a Google Doc with a specific ID - not the "friendly" name like MyDocument, but the name that makes it unique in the GoogleSphere - the one like 1x_tfTiA9-b5UwAf3k2fg6y6hyZSYQIvhSNn-saaDs4c.
Here's the scenario why I would like to do this:
I have a newsletter which is in the form of a Google Doc. The newsletter is published on a website by embedding the document in a web page inside an <iframe> element. Also published in the same way is a "large print" version of the newsletter that is the same, apart from the fact that the default font size is 24pt, rather than 11pt.
I am trying to automate the production of the large print version, but in such a way that the unique ID of the large print document doesn't change, so that the embedded <iframe> for it still works.
I have experimented in the past with Google Apps Scripts routines for creating a deep copy of a document but the deep copy functions don't play nicely with images and tables, so I could never get a complete copy. If I could implement a "Save As" function, where the operand was an existing unique ID, I think this would do what I want.
Anyone know how I might do this?
I delved into this, attempting to set the id of the "large print" version of the file in a variety of ways:
via copy(): var copiedFile = Drive.Files.copy(lpFile, spFile.id, options);
which yields the error:
Generated IDs are not currently supported for copy requests
via insert(): var newFile = Drive.Files.insert(lpFile, doc.getBlob(), options);
which yields the error:
Generated IDs are not supported for Google Docs formats
via update(): Drive.Files.update(lpFile, lpFile.id, doc.getBlob(), options);
This method successfully updates the "large print" file from the small print file. This particular line, however, uses the Document#getBlob() method, which has issues with formatting and rich content from the Document. In particular, as you mention, images and tables are not preserved (among other things, like changes to the font, etc.). Compare pre with post.
It seems that - if the appropriate method of exporting formatted byte content from the document can be found - the update() method has the most promise. Note that the update() method in the Apps Script client library requires a Blob input (i.e. doc.getBlob().getBytes() will not work), so the fundamental limitation may be the (lack of) support for rich format information in the produced Blob data. With this in mind, I tried a couple methods for obtaining "formatted" Blob data from the "small print" file:
via Document#getAs(mimetype): Drive.Files.export(lpFile, lpFile.id, doc.getAs(<type>), options);
which fails for seemingly sensible types with the errors:
MimeType.GOOGLE_DOCS: We're sorry, a server error occurred. Please wait a bit and try again.
MimeType.MICROSOFT_WORD: Converting from application/vnd.google-apps.document to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
These errors do make sense, since the internal Google Docs MimeType is not exportable (you can't "download as" this filetype since the data is kept however Google wants to keep it), and the documentation for Document#getAs(mimeType) indicates that only PDF export is supported by the Document Service. Indeed, attempting to coerce the Blob from doc.getBlob() with getAs(mimeType) fails, with the error:
Converting from application/pdf to application/vnd.openxmlformats-officedocument.wordprocessingml.document is not supported.
using DriveApp to get the Blob, rather than the Document Service:
Drive.Files.update(lpFile, lpFile.id, DriveApp.getFileById(smallPrintId).getBlob(), options);
This has the same issues as doc.getBlob(), and likely uses the same internal methods.
using DriveApp#getAs has the same errors as Document#getAs
Considering the limitation of the native Apps Script implementations, I then used the advanced service to obtain the Blob data. This is a bit trickier, since the File resource returned is not actually the file, but metadata about the file. Obtaining the Blob with the REST API requires exporting the file to a desired MimeType. We know from above that the PDF-formatted Blob fails to be properly imported, since that is the format used by the above attempts. We also know that the Google Docs format is not exportable, so the only one left is MS Word's .docx.
var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);
Drive.Files.update(lpFile, lpFile.id, blob, options);
where getBlobViaURL_ implements the workaround from this SO question for the (still-broken) Drive.Files.export() Apps Script method.
This method successfully updates the existing "large print" file with the exact content from the "small print" file - at least for my test document. Given that it involves downloading content instead of using the internal, already-present data available to the export methods, it will likely fail for larger files.
Testing Script:
function copyContentFromAtoB() {
    var smallPrintId = "some id";
    var largePrintId = "some other id";

    // You must first enable the Drive "Advanced Service" before this will work.
    // Get the file metadata of the to-be-updated file.
    var lpFile = Drive.Files.get(largePrintId);

    // View available options on the relevant Drive REST API pages.
    var options = {
        updateViewedDate: false,
    };

    // Ideally this would use Drive.Files.export, but there is a bug in the Apps Script
    // client library's implementation: https://issuetracker.google.com/issues/36765129
    var blob = getBlobViaURL_(smallPrintId, MimeType.MICROSOFT_WORD);

    // Replace the contents of the large print version with that of the small print version.
    Drive.Files.update(lpFile, lpFile.id, blob, options);
}
// Below function derived from https://stackoverflow.com/a/42925916/9337071
function getBlobViaURL_(id, mimeType) {
    var url = "https://www.googleapis.com/drive/v2/files/" + id + "/export?mimeType=" + mimeType;
    var resp = UrlFetchApp.fetch(url, {
        headers: { Authorization: 'Bearer ' + ScriptApp.getOAuthToken() }
    });
    return resp.getBlob();
}
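Since the goal is to automate production of the large print version, the copy function above can also be run on a schedule with a time-driven trigger - a small sketch (the daily schedule is an arbitrary choice):

function installTrigger() {
    // Re-generate the large print version once a day.
    ScriptApp.newTrigger("copyContentFromAtoB")
        .timeBased()
        .everyDays(1)
        .create();
}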
I have a mobile app written using Apache Cordova. I am using Azure Mobile Apps to store some data.
I created Easy Tables and one Easy API. The purpose of the API is to delete or update more than one record. Below is the implementation of the API.
exports.post = function (request, response) {
    var mssql = request.service.mssql;
    var sql = "delete from cust where deptno in ( ? )";
    mssql.query(sql, [request.parameters], {
        success: function(result) { response.send(statusCodes.OK, result); },
        error: function(err) { response.send(statusCodes.BAD_REQUEST, { message: err }); }
    });
};
Is there any other way to implement it? The del() method on the table object only takes an id to delete, and I didn't find any other approach for deleting multiple rows in the table.
I am also having difficulty testing the implementation, as changes to the API code take 2-3 hours on average to get deployed. I change the code through the Azure website, and when I run it, the old code is hit rather than the latest changes.
Is there any limitation based on the plans we choose?
Update
The updated code worked.
var sql = "delete from trollsconfig where id in (" + request.body.id + ")";
mssql.query(sql, [request.parameters],{
success : function(result){ response.send(statusCodes.OK, result); },
error: function(err) { response.send(statusCodes.BAD_REQUEST, { message: err}); }
});
Let me cover the last one first. You can always restart your service to pick up the latest code. The code is probably there, but Easy API is not noticing the change. Once your site "times out" and goes to sleep, the code gets reloaded as normal. Logging onto the Azure Portal, selecting your site, and clicking Restart should solve the problem.
As to the first problem - there are a variety of ways to implement deletion, but you've pretty much got a good implementation there. I've not run it to test it, but it seems reasonable. What don't you like about it?
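One thing I would flag about the updated code in the question: concatenating request.body.id straight into the SQL string opens the door to SQL injection. A sketch of a parameterized variant, assuming request.body.id arrives as an array of ids:

exports.post = function (request, response) {
    var mssql = request.service.mssql;
    var ids = request.body.id; // assumed to be an array, e.g. [1, 2, 3]

    // One "?" placeholder per id, so every value is passed as a parameter.
    var placeholders = ids.map(function () { return "?"; }).join(",");
    var sql = "delete from trollsconfig where id in (" + placeholders + ")";

    mssql.query(sql, ids, {
        success: function (result) { response.send(statusCodes.OK, result); },
        error: function (err) { response.send(statusCodes.BAD_REQUEST, { message: err }); }
    });
};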
I'm using collectionFS and in particular the S3 storage adapter.
Most files get uploaded perfectly fine and I can download them afterwards with no problem.
Occasionally I get a file that appears to upload fine (Files.insert(blob, callback) returns to the callback with no error), but later, when I go to download it, fsFile.url() returns null.
https://github.com/CollectionFS/Meteor-CollectionFS#after-the-upload says:
If any storage adapters fail to save any of the copies in the designated store, the server will periodically retry saving them. After a configurable number of failed attempts at saving, the server will give up.
But there is no callback that I'm aware of for such a failure. Furthermore when looking at fsFile.uploadProgress() I get 100%.
The basic problem is thus that during upload everything looks fine and my app only detects a problem when trying to download the file.
Is there a way to detect an upload failure in the storage adapter?
What else would fsFile.url() returning null be symptomatic of?
Here's an example of one of these broken fsFile objects in mongodb:
{
    "_id" : "uqAYajqCv68HmEJhu",
    "original" : {
        "updatedAt" : ISODate("2015-03-16T23:04:37.200Z"),
        "size" : 699072,
        "type" : ""
    },
    "chunkSize" : 2097152,
    "chunkCount" : 0,
    "chunkSum" : 1
}
In the new version of CollectionFS, three events were integrated on the server side: stored, uploaded, and error.
Images.on('error', function (fileObj) {
    console.log("The file with the _id " + fileObj._id + " got an error while uploading");
});
There is no documentation about this yet, since raix and aldeed cleaned up the README.
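Assuming the other two events use the same fileObj-first callback signature as error (I have only verified error), a minimal sketch:

Images.on('uploaded', function (fileObj) {
    // The client finished uploading; saving to the S3 store may still be pending.
    console.log("Uploaded file " + fileObj._id);
});

Images.on('stored', function (fileObj) {
    // The storage adapter saved the copy; fsFile.url() should now resolve.
    console.log("Stored file " + fileObj._id);
});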
How do I set up the SQL Server database for Semantic Logging? Does the table for the logging information need to be created beforehand? If yes, what schema should be used?
I have the following code:
var listener = new ObservableEventListener();
string connectionString = @"Data Source=nibc2025;Initial Catalog=TreeDataBase;Integrated Security=True;User Id=sa;Password=nous#123";
listener.EnableEvents(AuditingEventSource.Log, EventLevel.LogAlways, Keywords.All);
databaseSubscription = listener.LogToSqlDatabase(
    "Test",
    connectionString,
    "Traces",
    Buffering.DefaultBufferingInterval,
    1,
    Timeout.InfiniteTimeSpan,
    500
);
// The following one line of code is not part of this function.
// It is just added here to show this is how I log my information.
// Inside LogInformation method I call the 'Write' method
AuditingEventSource.Log.LogInformation("sgsgg", "sgsg");
databaseSubscription.Sink.FlushAsync().Wait();
Well, since this thread gets the most hits on Google regarding Semantic Logging to a SQL database (or SLAB in general)...
The scripts to create the database lie here:
\packages\EnterpriseLibrary.SemanticLogging.Database.1.0.1304.0\scripts
Information on creating the EventSource and firing the blocks is given here, as a shortcut and quick-fix solution:
http://entlib.codeplex.com/discussions/540723
OK, I got it. The script was in the packages folder; I had overlooked that.
I am trying to create a two-column unique index on the underlying MongoDB in a Meteor app and having trouble. I can't find anything in the Meteor docs. I have tried from the Chrome console, I have tried from a terminal, and I have even tried pointing mongod at the /db/ dir inside .meteor. I have tried variations of:
Collection.ensureIndex({first_id: 1, another_id: 1}, {unique: true});
I want to be able to prevent duplicate entries on a meteor app mongo collection.
Wondering if anyone has figured this out?
I answered my own question (what a noob). Here's how I figured it out:
Start the Meteor server
Open a second terminal and type meteor mongo
Then create your index... for example, I did these for records in a thumbs-up/thumbs-down type system:
db.thumbsup.ensureIndex({item_id: 1, user_id: 1}, {unique: true})
db.thumbsdown.ensureIndex({item_id: 1, user_id: 1}, {unique: true})
Now I just have to figure out a bootstrap/install setup that creates these when pushed to prod instead of doing it manually.
Collection._ensureIndex(index, options)
Searching inside the Meteor source code, I found a binding to ensureIndex called _ensureIndex.
For single-key basic indexes you can follow the example of packages/accounts-base/accounts_server.js that forces unique usernames on Meteor:
Meteor.users._ensureIndex('username', {unique: 1, sparse: 1});
For multi-key "compound" indexes:
Collection._ensureIndex({first_id:1, another_id:1}, {unique: 1});
The previous code, when placed on the server side, ensures that indexes are set.
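To get the bootstrap setup the asker wants (indexes created automatically on deploy rather than by hand in the mongo shell), the calls can simply run at boot - a sketch assuming the collections from the question are named Thumbsup and Thumbsdown:

// server-only code, e.g. in server/indexes.js
Meteor.startup(function () {
    // _ensureIndex is idempotent, so re-running it on every boot is safe.
    Thumbsup._ensureIndex({item_id: 1, user_id: 1}, {unique: 1});
    Thumbsdown._ensureIndex({item_id: 1, user_id: 1}, {unique: 1});
});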
Warning
Notice _ensureIndex implementation warning:
We'll actually design an index API later. For now, we just pass
through to Mongo's, but make it synchronous.
According to the docs "Minimongo currently doesn't have indexes. This will come soon." And looking at the methods available on a Collection, there's no ensureIndex.
You can run meteor mongo for a mongo shell and enable the indexes server-side, but the Collection object still won't know about them. So the app will let you add multiple instances to the Collection cache, while on the server side the additional inserts will fail silently (errors get written to the output). When you do a hard page refresh, the app will re-sync with the server.
So your best bet for now is probably to do something like:
var count = MyCollection.find({first_id: 'foo', another_id: 'bar'}).count();
if (count === 0)
    MyCollection.insert({first_id: 'foo', another_id: 'bar'});
Which is obviously not ideal, but works ok. You could also enable indexing in mongodb on the server, so even in the case of a race condition you won't actually get duplicate records.
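With the server-side unique index in place, the client can also detect the rejected insert through the insert callback - a small sketch using the same field names as above:

MyCollection.insert({first_id: 'foo', another_id: 'bar'}, function (error, id) {
    if (error) {
        // The server refused the insert (e.g. the unique index rejected a duplicate).
        console.log("Insert failed: " + error.message);
    }
});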
The smart package aldeed:collection2 supports unique indices, as well as schema validation. Validation will occur on both the server and the client (reactively), so you can react to errors on the client.
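A rough sketch of what that looks like (assuming aldeed:simple-schema; note that the per-field unique flag covers single-key indexes only, so a two-column unique index still needs the _ensureIndex approach above):

MyCollection.attachSchema(new SimpleSchema({
    first_id: { type: String },
    another_id: { type: String, index: true, unique: true } // single-field uniqueness only
}));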
Actually, why not use an upsert on the server with a Meteor.method? You could also track it with a ts:
// Server Only
Meteor.methods({
    add_only_once: function (id1, id2) {
        SomeCollection.update(
            {first_id: id1, another_id: id2},
            {$set: {ts: Date.now()}},
            {upsert: true});
    }
});

// Client
Meteor.call('add_only_once', doc1._id, doc2._id);
// actual code running on server
if (Meteor.is_server) {
    Meteor.methods({
        register_code: function (key, monitor) {
            Codes.update({key: key}, {$set: {ts: Date.now()}}, {upsert: true});
        }
        ...