Why is the behavior described in the manual not consistent with the NebulaGraph Database?

NebulaGraph's behavior changes from time to time. Is the manual always consistent with the NebulaGraph Database?

NebulaGraph is still under development. Its behavior changes from time to time. Users can submit an issue to inform the team if the manual and the system are not consistent.

Related

Storing NMA App ID, App Code & License Key in own db instead of hardcoding values in AppDelegate

We had a horror story back in August where our Here Maps SDK License Key was mistakenly changed on us (to this day, nobody knows who did it or why). It was a nightmare, because all our users' apps froze upon launch and we had to push an update to the app store (and although I've had Apple approve an app update in the past in as little as 4 hours, that time it took them 4 days!!!).
It would make much more sense to store the values locally and then have the info populated into the app. The problem is that Here Maps requires the info in the AppDelegate, which is the top level of the app, and it's difficult to build database queries there.
Our license will be renewing soon, so I am thinking of solving this issue once and for all. Anybody had this issue before and has any ideas?
We believe this would be the correct solution to avoid major incidents. Maintaining credentials in a more controlled database is the right approach, rather than keeping them hardcoded in the AppDelegate.

Backing up MariaDB Temporal Database

Generally, I am excited by the Temporal Database feature.
However, mysqldump is not supported for database export and restore.
I can find no resource in the documentation (linked to above) that indicates which methods of backup and restore are safe to use for this type of database. Google searches do not seem to help.
Does anyone have any insights into using these MariaDB temporal databases in production environments? Or more specifically, in using them in development environments, and then transferring the database to a production environment and still keeping the history of the database intact?
I understand this is something of a dev-ops question, but it seems a pretty central issue to how to work with and around this new feature. Does anyone have any insights into moving these databases around and relying on that process in production? Just wondering how mature this technology is, given that this issue (which seems pretty central) is not covered in the documentation.
Unfortunately, as the documentation states, while mysqldump will dump these tables, the invisible temporal columns are not included - the tool will only backup the current state of the tables.
Luckily, there are a couple of options here:
You can use mariadb-enterprise-backup or mariabackup, which should support the new format of the temporal data and correctly back it up (these tools do binary backups instead of table dumps):
https://mariadb.com/docs/usage/mariadb-enterprise-backup/#mariadb-enterprise-backup
https://mariadb.com/kb/en/library/full-backup-and-restore-with-mariabackup/
Unfortunately, we have found the tool to be somewhat unreliable - especially when using the MyRocks storage engine. However, it is constantly improving.
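For reference, here is a minimal sketch of driving a full mariabackup run from a script; the target directory and credentials are placeholders, not values from any real setup:

    import subprocess

    BACKUP_DIR = "/var/backups/mariadb/full"  # placeholder target directory

    def full_binary_backup():
        # Take a full binary backup; because mariabackup copies the data files,
        # the hidden temporal columns come along with everything else.
        subprocess.run(
            ["mariabackup", "--backup",
             "--target-dir=" + BACKUP_DIR,
             "--user=backup_user", "--password=secret"],  # placeholder credentials
            check=True,
        )
        # Prepare the backup so it is consistent and ready to restore later
        # with --copy-back.
        subprocess.run(
            ["mariabackup", "--prepare", "--target-dir=" + BACKUP_DIR],
            check=True,
        )

    if __name__ == "__main__":
        full_binary_backup()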
To get around this, on our production servers we take advantage of slave replication, which keeps the temporal data (and everything else) intact across all our nodes. We then do secondary backups by taking the slave nodes down and doing a straight copy of the database data files. For more information on how to set up replication, please refer to the documentation:
https://mariadb.com/kb/en/library/setting-up-replication/
So you could potentially set up dev-copy of the database with replication and just copy the data from there. However, in your case, mariabackup might also do the trick.
Regardless of how you do it, be wary of the system clock when setting up replication or when moving these files between systems. You can get some problems when the clocks are not in sync (or if the systems are in different time zones). There is some official documentation (and mitigation) on this topic also:
https://mariadb.com/kb/en/library/temporal-data-tables/#use-in-replication-and-binary-logs
Looking at your additional comment - I am not aware of any way to get a complete image of a database as it looked at a given date (with temporal data included) directly from MariaDB itself. I don't think this information is stored in a way that makes this possible. However, there is a workaround even for this: you could potentially use the above method in combination with incremental rdiff backups. To solve it, you would:
Backup the database with any of the above methods.
Use rdiff-backup (https://www.nongnu.org/rdiff-backup/) on those backup files, running it once per day.
This would allow you to fetch an exact copy of how the database looked on any given date of your choice. rdiff-backup also fully supports ssh, allowing you to do things like:
rdiff-backup -r 10D host.net::/var/lib/mariadb /my/tmp/mariadb
This would fetch a copy of those backup files as they looked 10 days ago.
For future planning, according to https://mariadb.com/kb/en/system-versioned-tables/#limitations:
Before MariaDB 10.11, mariadb-dump did not read historical rows from versioned tables, and so historical data would not be backed up. Also, a restore of the timestamps would not be possible as they cannot be defined by an insert/a user. From MariaDB 10.11, use the -H or --dump-history options to include the history.
10.11 is still in development as of writing this answer.
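If you do end up on 10.11 or later, a minimal sketch of a history-preserving dump driven from a script might look like this; the database name, credentials, and output path are placeholders:

    import subprocess

    # --dump-history is the option mentioned in the limitations page above;
    # database name, credentials, and output file below are placeholders.
    with open("/var/backups/mydb_with_history.sql", "wb") as out:
        subprocess.run(
            ["mariadb-dump", "--dump-history",
             "--user=backup_user", "--password=secret",
             "mydb"],
            stdout=out, check=True,
        )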

Should any PeopleSoft installation require on-going daily DB scripts to resolve "issues"?

I have very little PeopleSoft experience but have been put in a position to support an install. This question could straddle Server Fault but is certainly developer oriented.
On a daily basis, we have a PeopleSoft "developer" who writes scripts to fix records/journal entries/approval status etc. To me this screams "bad install" and botched customizations. Is this normal? Is it best practice to have an employee having to write scripts daily just to keep things running?
Note: there is no fraud happening here, he has the full approval of the accounting department when doing this.
It is unlikely that it is the installation. Likely causes:
Bad customization
Missing patches
Bugs in the delivered code
If you only have one admin, though, and you have only one developer, I would be shocked to hear that there is much in the way of custom code.
Back to the question: It is not normal to need to do SQL updates regularly to fix data. Yes, it happens, but not too often. It is also possible that the end users could fix it from the application, but do not for some reason.
Ad-hoc SQL updates can be dangerous and the SQL may change on every request. It is difficult to fully test ad-hoc scripts due to the turnaround they typically require.
I assume these "fixes" are in fact making changes not implemented by the system.
It would be more sensible to either:
Build a custom page to "fix" the entries (or less sensible: modify the delivered pages).
Build and thoroughly test a parameter-driven App Engine to perform the most commonly made changes. It could potentially be run as part of the batch stream.
Watch out on your next upgrade: application tables have had a lot of changes in recent releases.

Best Practices for Self Updating Desktop Application in a network environment

I have searched through google and SO for possible answers to this question, but can only find small bits of information scattered around the place, most of which appear to be personal opinion.
I'm aware that this question could be considered subjective, but I'm not looking for personal opinion; rather, facts with reasons (e.g. past experience), or even a single link to a blog/wiki which describes best practices for this (this is what I'd prefer, to be honest). What I'm not looking for is how to make this work; I know how to create a self-updating desktop application.
I want to know about the best practices for creating a self updating desktop application. The sort of best practices I'm especially curious about are:
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
How often should you check for updates? Weekly/daily/hourly and exactly why?
Should the update be visible to the user or run behind the scenes from a UI point of view?
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
Surely there are some written rules/suggestions about this stuff? One of the most annoying things about a lot of applications is the updating, as it's hard to find a good balance between "out of date" and "in the user's face".
If it helps consider this to be written in .net C# for a single client, running on machines with constant available connectivity to the update server, all of these machines talk to each other through the application, and all also talk to a central database server.
One best practice that a lot of software overlooks: ask to update when the user is closing your application, NOT when they have just launched it.
It's incredible how many apps don't do that (Firefox, for example). You just launched the app because you want to use it now, and instead it asks whether you want to update, which of course is going to take 5 minutes and require restarting the app.
This is nonsense. Just do the update at the end.
It's hard to give a general answer. It depends on the context: criticality of the update, what kind of app it is, user preferences, number of users, network bandwidth, etc. Here are some of the options and trade-offs.
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
As a developer, your best interest is to have all deployed apps be as up to date as possible. This reduces your maintenance effort. Thus, if the user does not mind, you should update.
How often should you check for updates? Weekly/daily/hourly and exactly why?
If the updates are transparent to the user and do not require an immediate restart of the app, then I'd suggest that you do it as often as your communication bandwidth allows (considering both the update check, which is frequent but small, and the download, which is infrequent but large).
Should the update be visible to the user or run behind the scenes from a UI point of view?
Depends on the user preferences but also on the type of the update: bug fixes vs. functionality/UI changes (the user will be puzzled to see that the look and feel has changed with no prior warning).
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Same arguments as for the previous question.
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
If the app size is small, download it from scratch. This will prevent all sorts of weird bugs caused by mismatches between the different patches ("DLL hell"). However, this may require large download times or impose a heavy toll on your network.
Should you allow users to update from a central location or only allow updating through the specified application? (for closed business applications).
I think both
From practical experience, don't forget to add functionality for updating the update engine itself, which means that performing an update is usually a two-step approach (see the sketch after this list):
Check if there are updates to the update engine
Check if there are updates to the actual application
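A minimal sketch of that two-step check; the update server URL and the JSON layout are assumptions made for illustration:

    import json
    import urllib.request

    UPDATE_SERVER = "https://updates.example.com"  # assumed URL

    def latest_version(component: str) -> str:
        # Assumes the server publishes {"version": "1.2.3"} per component.
        url = f"{UPDATE_SERVER}/{component}/latest.json"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)["version"]

    def check_for_updates(engine_version: str, app_version: str):
        # Step 1: check (and update) the update engine itself first.
        if latest_version("engine") != engine_version:
            return "update-engine"
        # Step 2: only then check the actual application.
        if latest_version("app") != app_version:
            return "update-app"
        return None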
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
A common practice is to have a "ProtocolVersion" method which indicates the lowest/oldest version allowed.
The "ProtocolVersion" can either supplied by the client or the server depending on the trust level you have between the client and the server. In a low trust level it is probably better to have the client provide the "ProtocolVersion" and then deny access server side until the client is updated. In a "high trust level" scenario it will be easier to have the server supply the "ProtocolVersion" it accepts, and then all the logic for adapting to this - including updating the client application - implemented in the client only. Giving the benefit that the version check/handling code only needs to be in one place.
Do not ever try to force an update unless your lawyers demand it. Show the user an update notification she can either accept or ignore. Try not to spam the same version too much if she rejected it. To help her make the decision, include a link to release notes or a short summary of changes.
Weekly would be a good default update check interval, but let the user choose this, including completely disabling the update check from the web. Do not check too often, because she might be on an expensive mobile data plan, or she just doesn't like the idea of an application phoning home.
The update check part should be completely silent. If an update was found, display a notification for the user. During download and installation, show a progress bar.
To keep this simple, notify the user about any newer version. If you do not want to annoy them with frequent updates that include just a few minor bug fixes, do not release every minor version at the download location watched by the update checker.
Maintaining patches for all previously released versions is too much work. If the download size becomes a problem, figure out some way other than patches to make it smaller (a 7-zip compressed self-extracting exe, splitting the application into multiple MSI packages that have independent versions, etc.).
Two more things:
Do not implement the update engine as a process that is constantly running in the background even when I'm not using your application. My PC already has ~10 such processes hogging resources, which is very annoying.
When updating the update engine itself, on one hand you need to have the engine running to show the installation progress UI, but on the other hand the update process must be closed to avoid the reboot that would result from the exe file being locked. There are a number of workarounds, such as running a helper program from %TEMP%, using the Windows Installer restart manager, renaming the updater exe file before starting the installation package, etc. Keep this in mind when architecting the update engine.
Do you force an update if the client's software is out of date, but not going to break when trying to communicate with other versions of the software or the database itself? If so, how do you signify this breaking change?
Ask the user.
How often should you check for updates? Weekly/daily/hourly and exactly why?
Ask the user.
Should the update be visible to the user or run behind the scenes from a UI point of view?
Ask the user.
Should you even notify the user that there is an update available if it is not a major update? (for instance fixing a single button in a remote part of the application which only one user actually requires)
Ask the user (notice a trend here?).
Should you try to patch the application or do you re-download the entire application from scratch Macintosh style?
Typically, patch, if the application is of any significant size.
As far as the "ask the user" responses go, it doesn't mean always prompting them every single time. Instead, give them the option to set what they should be prompted for and what should just be done invisibly (and the first time a given thing occurs, ask them what should be done in the future, and remember that). This shouldn't be very difficult, and you gain a lot of goodwill from a larger portion of your user base, since it's very hard to have fixed settings suit the desires of everyone who uses your app. When in doubt, more options are better than fewer - especially when they're the kind of option that's fairly trivial to code.
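A minimal sketch of that "ask once, remember the answer" pattern; the file name and the choice values are arbitrary:

    import json
    import os

    PREFS_FILE = "update_prefs.json"  # arbitrary location for stored choices

    def decide(event: str, ask_user) -> str:
        """Return the remembered choice for this event, prompting only the first time."""
        prefs = {}
        if os.path.exists(PREFS_FILE):
            with open(PREFS_FILE) as f:
                prefs = json.load(f)
        if event not in prefs:
            prefs[event] = ask_user(event)  # e.g. returns "auto", "prompt", or "skip"
            with open(PREFS_FILE, "w") as f:
                json.dump(prefs, f)
        return prefs[event]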

How Scalable is SQLite? [closed]

I recently read this question about SQLite vs MySQL, and the answer pointed out that SQLite doesn't scale well, and the official website sort of confirms this.
How scalable is SQLite and what are its upper most limits?
Yesterday I released a small site* to track your rep that used a shared SQLite database for all visitors. Unfortunately, even with the modest load that it put on my host it ran quite slowly. This is because the entire database was locked every time someone viewed the page, because it contained updates/inserts. I soon switched to MySQL and, while I haven't had much time to test it out, it seems much more scalable than SQLite. I just remember slow page loads and occasionally getting a database locked error when trying to execute queries from the shell in sqlite. That said, I am running another site from SQLite just fine. The difference is that the site is static (i.e. I'm the only one that can change the database) and so it works just fine for concurrent reads. Moral of the story: only use SQLite for websites where updates to the database happen rarely (less often than on every page load).
edit: I just realized that I may not have been fair to SQLite - I didn't index any columns in the SQLite database when I was serving it from a web page. This partially caused the slowdown I was experiencing. However, the observation of database-locking stands - if you have particularly onerous updates, SQLite performance won't match MySQL or Postgres.
another edit: Since I posted this almost 3 months ago I've had the opportunity to closely examine the scalability of SQLite, and with a few tricks it can be quite scalable. As I mentioned in my first edit, database indexes dramatically reduce query time, but this is more of a general observation about databases than it is about SQLite. However, there is another trick you can use to speed up SQLite: transactions. Whenever you have to do multiple database writes, put them inside a transaction. Instead of writing to (and locking) the file each and every time a write query is issued, the write will only happen once when the transaction completes.
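A minimal sketch of that batching trick using Python's built-in sqlite3 module; the table and file names are made up for the example:

    import sqlite3

    conn = sqlite3.connect("reputation.db")  # made-up database file
    conn.execute("CREATE TABLE IF NOT EXISTS rep (user TEXT, score INTEGER)")

    rows = [("alice", 120), ("bob", 85), ("carol", 240)]

    # Group the writes into a single transaction: the file is locked and synced
    # once at commit time instead of once per INSERT.
    with conn:  # the connection context manager commits (or rolls back) for us
        conn.executemany("INSERT INTO rep (user, score) VALUES (?, ?)", rows)

    conn.close()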
The site that I mention I released in the first paragraph has been switched back to SQLite, and it's running quite smoothly once I tuned my code in a few places.
* the site is no longer available
SQLite is scalable in terms of single-user use; I have a multi-gigabyte database that performs very well and I haven't had many problems with it.
But it is single-user, so it depends on what kind of scaling you're talking about.
In response to the comments: note that there is nothing that prevents using an SQLite database in a multi-user environment, but every transaction (in effect, every SQL statement that modifies the database) takes a lock on the file, which will prevent other users from accessing the database at all.
So if you have lots of modifications done to the database, you're essentially going to hit scaling problems very quick. If, on the other hand, you have lots of read access compared to write access, it might not be so bad.
SQLite will of course function in a multi-user environment, but it won't perform well.
SQLite drives the sqlite.org web site and others that have lots of traffic. They suggest that if you have less than 100k hits per day, SQLite should work fine. And that was written before they delivered the "Writeahead Logging" feature.
If you want to speed things up with SQLite, do the following:
upgrade to SQLite 3.7.x
Enable write-ahead logging
Run the following pragma: "PRAGMA cache_size = Number-of-pages;" The default size (Number-of-pages) is 2000 pages, but if you raise that number, you will raise the amount of data that is served straight out of memory (see the sketch below).
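A minimal sketch of applying both settings from Python's sqlite3 module; the file name and cache value are placeholders:

    import sqlite3

    conn = sqlite3.connect("app.db")  # placeholder database file

    # Switch to write-ahead logging so readers are no longer blocked by the writer.
    conn.execute("PRAGMA journal_mode=WAL;")

    # Raise the page cache above the default; the value is a number of pages.
    conn.execute("PRAGMA cache_size = 10000;")

    conn.close()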
You may want to take a look at my video on YouTube called "Improve SQLite Performance With Writeahead Logging" which shows how to use write-ahead logging and demonstrates a 5x speed improvement for writes.
Sqlite is a desktop or in-process database. SQL Server, MySQL, Oracle, and their brethren are servers.
Desktop databases are by their nature not a good choice for any application that needs to support concurrent write access to the data store. This includes, at some level, most web sites ever created. If you even have to log in for anything, you probably need write access to the DB.
Have you read this SQLite docs - http://www.sqlite.org/whentouse.html ?
SQLite usually will work great as the database engine for low to medium traffic websites (which is to say, 99.9% of all websites). The amount of web traffic that SQLite can handle depends, of course, on how heavily the website uses its database. Generally speaking, any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.
SQLite scalability will highly depend on the data used, and their format. I've had some tough experience with extra long tables (GPS records, one record per second). Experience showed that SQLite would slow down in stages, partly due to constant rebalancing of the growing binary trees holding the indexes (and with time-stamped indexes, you just know that tree is going to get rebalanced a lot, yet it is vital to your searches). So in the end at about 1GB (very ballpark, I know), queries become sluggish in my case. Your mileage will vary.
One thing to remember, despite all the bragging, SQLite is NOT made for data warehousing. There are various uses not recommended for SQLite. The fine people behind SQLite say it themselves:
Another way to look at SQLite is this: SQLite is not designed to replace Oracle. It is designed to replace fopen().
And this leads to the main argument (not quantitative, sorry, but qualitative), SQLite is not for all uses, whereas MySQL can cover many varied uses, even if not ideally. For example, you could have MySQL store Firefox cookies (instead of SQLite), but you'd need that service running all the time. On the other hand, you could have a transactional website running on SQLite (as many people do) instead of MySQL, but expect a lot of downtime.
I think that one (in numbers: 1) web server serving hundreds of clients appears on the backend as a single connection to the database, doesn't it?
So there is no concurrent access to the database, and therefore we can say that the database is working in 'single-user mode'. It makes no sense to discuss multi-user access in such a circumstance, and so SQLite works as well as any other server-based database.
Think of it this way: SQLite will be locked every time someone writes to it (SQLite doesn't lock on reading). So if you're serving up a web page or an application that has multiple concurrent users, only one of them could write to your database at a time with SQLite. So right there is a scaling issue. If it's a one-person application, say a music library where you hold hundreds of titles, ratings, information, usage, playing, play time, then SQLite will scale beautifully, holding thousands if not millions of records (hard drive willing).
MySQL, on the other hand, works well for server apps where people all over will be using it concurrently. It doesn't lock and it is quite large in size. So for your music library MySQL would be overkill, as only one person would see it, UNLESS this is a shared music library where thousands add to or update it. Then MySQL would be the one to use.
So in theory MySQL scales better than SQLite because it can handle multiple users, but it is overkill for a single-user app.
SQLite's website (the part that you referenced) indicates that it can be used for a variety of multi-user situations.
I would say that it can handle quite a bit. In my experience it has always been very fast. Of course, you need to index your tables and, when coding against it, make sure you use parameterized queries and the like. Basically the same stuff you would do with any database to improve performance.

Resources