How to make sure that the DB is up to date - azerothcore

When installing/updating AzerothCore sometimes one encounters errors such as:
[ERROR]: In mysql_stmt_prepare() id: 3, sql:
[ERROR]: Unknown column 'entry' in 'field list'
[ERROR]: Unknown column 'dmg_multiplier' in 'field list'
[ERROR]: Table 'acore_world.graveyard_zone' doesn't exist
[ERROR]: Unknown column 'mindmg' in 'field list'
ERROR: Your database structure is not up to date. Please make sure you've executed all queries in the sql/updates folders.
This usually means the database structure is not up to date.
More specifically, the local DB version is not aligned with the local core version.
This leads to the following questions:
How to check whether the DB is up to date or not?
How to understand which DB SQL updates are missing for each database?

UPDATE 2021:
The latest AzerothCore now has an automated DB updater integrated into the core.
You just need to enable it in worldserver.conf by setting:
Updates.EnableDatabases = 7
(7 is a bitmask enabling all three databases: 1 = auth, 2 = characters, 4 = world.)
Then the worldserver process will automatically update all DBs for you.
You need the latest AC to get this feature.
Original answer and explanation:
AzerothCore has three databases: auth, characters and world. All of them need to be properly up to date in order to start the server application.
Each database has a table named version_db_xxxx whose last column name holds the database version.
auth DB has the version_db_auth table
characters DB has the version_db_characters table
world DB has the version_db_world table
The database version will be expressed in the format of YYYY_MM_DD_XX which is basically a date followed by a number (XX).
This value will be the name of the last column of such tables and it corresponds to the name of the last SQL update file that has been applied to that database.
The SQL update files can be found in the azerothcore-wotlk/data/sql/updates/db_xxxx/ directory (where xxxx is the database name):
https://github.com/azerothcore/azerothcore-wotlk/tree/master/data/sql/updates/db_auth
https://github.com/azerothcore/azerothcore-wotlk/tree/master/data/sql/updates/db_characters
https://github.com/azerothcore/azerothcore-wotlk/tree/master/data/sql/updates/db_world
To make sure a database is up to date, one should compare (for each database):
the last column name of the version_db_xxxx table
the most recent sql file name contained in data/sql/updates/db_xxxx
(most recent in terms of date; if the dates are equal, the file with the highest trailing number (XX) is the most recent)
If the values are the same, then the DB is up to date. Otherwise, the DB needs to be updated by importing all missing SQL update files in order.
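The comparison rule above can be sketched in a few lines of Python. This is a hypothetical helper, assuming update files follow the zero-padded YYYY_MM_DD_XX.sql naming shown above; the example file names and DB version are made up:

```python
# Sketch: find the most recent update file per the rule above
# (most recent date first; ties broken by the trailing number).
# Assumes zero-padded YYYY_MM_DD_XX.sql names, so plain string
# comparison orders them correctly.

def most_recent_update(filenames):
    stems = [f.rsplit(".", 1)[0] for f in filenames]
    # "2021_05_10_01" < "2021_05_10_02" < "2021_06_01_00"
    return max(stems)

files = ["2021_05_10_01.sql", "2021_05_10_02.sql", "2021_04_30_07.sql"]
db_version = "2021_05_10_01"   # last column name of version_db_world

latest = most_recent_update(files)
if db_version == latest:
    print("DB is up to date")
else:
    print(f"DB is behind: have {db_version}, latest update is {latest}")
```

Because the format is zero-padded, no date parsing is needed; lexicographic order is already chronological order.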

Related

aws codedeploy - running sql scripts

I run SQL scripts that insert data into the DB as part of my CodeDeploy lifecycle event on an Auto Scaling group. The Auto Scaling group has 2 instances; the SQL scripts run fine on the 1st instance and the deployment succeeds on that instance.
On the 2nd instance, since the DB already has the data inserted, the SQL script fails with the error message below:
[stderr]ERROR 1062 (23000) at line 32: Duplicate entry
Any workaround or solution would be of great help.
Thanks
It suggests that the DB already has an entry you're trying to insert, hence the error. You may want to first check whether the DB has that entry or not.
To identify which part of the script is giving you this error, try running subsets of the script to isolate the actual cause.
This is typically the issue when some record(s) already exist and the DB / table / schema does not allow duplicate entries.
Assuming your deployment group uses the OneAtATime deployment type, your lifecycle hook should check for the entry before running the insert.
That way, only the first deployed instance will apply the change. The other deployments will detect the entry and skip the insert phase.
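The check-before-insert idea from the answer can be sketched as follows, using an in-memory SQLite database as a stand-in for MySQL (the table and column names are made up for illustration). In MySQL itself, an alternative is to make the script idempotent with INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE:

```python
import sqlite3

# Sketch of "check before insert", with SQLite standing in for MySQL.
# Table/column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE seed (id INTEGER PRIMARY KEY, name TEXT)")

def seed_once(conn, row_id, name):
    # Only insert if the row is not there yet, so re-running the
    # deployment on a second instance does not hit a duplicate error.
    exists = conn.execute(
        "SELECT 1 FROM seed WHERE id = ?", (row_id,)
    ).fetchone()
    if exists is None:
        conn.execute("INSERT INTO seed (id, name) VALUES (?, ?)",
                     (row_id, name))
        return True
    return False

print(seed_once(conn, 1, "alpha"))  # first instance inserts
print(seed_once(conn, 1, "alpha"))  # second instance skips
```

Note that check-then-insert has a race window if two instances run concurrently; with OneAtATime deployments that is not a concern.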

Why are output value lengths getting reduced when using DB links in Oracle and ASP.NET

We retrieve output from a table over a DB link by executing a stored procedure with input parameters. This previously worked and we got the output in the ASP.NET application. But now we've noticed that outputs coming through the DB link are getting trimmed: if the status is 'TRUE', we get 'TRU', etc. Why are the output values getting trimmed? The only recent change was switching the type of one input parameter from number to varchar on the remote receiving side, but I don't think that is the issue. When we execute the stored procedure directly on the remote table it gives the proper output, but through the DB link the outputs are trimmed. Does anyone have any idea about this issue?
My Oracle client was having issues; only my system was affected, so I decided to reinstall it, and after the reinstall it worked fine.

How do I prevent flyway from creating the schema during init

I'm trying to start using Flyway v2.3 on an existing Oracle 11g schema that does not contain the schema_history table.
In my flyway.properties I've set flyway.user to the schema owner and flyway.schemas to the same value.
When running init from the command line, I expected Flyway to only create the schema_history table, but it fails with this message:
$ ./flyway.cmd init
Flyway (Command-line Tool) v.2.3
Creating schema "myschema" ...
ERROR: Unable to create schema "myschema"
ERROR: Caused by: java.sql.SQLSyntaxErrorException: ORA-01031: insufficient privileges
Why is flyway attempting to create the schema? I only want it to create the schema_history table in the schema I configured
The command is correct. Please note that flyway.schemas is case-sensitive and is automatically filled with the user's default schema if left empty.
I suspect the value you put in flyway.schemas is in the wrong case. Just leave it empty and you should be OK.
So be sure you want to work on the schema that belongs to the user you log in with. If you want to work in a different schema, you have to specify it in flyway.properties.
I've just had the same problem with the Flyway Maven plugin 3.1.
It turned out that I had created my user with a lowercase name:
CREATE USER myuser ...
And I gave Flyway
flyway.user=myuser
But while connecting, the user name was cast to uppercase, so Flyway reported that a user named MYUSER did not exist.
Solution: Create and use Oracle DB user with uppercase name.
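Putting the two answers together, a minimal flyway.properties along these lines avoids the case problem by leaving flyway.schemas empty. This is only a sketch; the URL, user, and password are placeholders:

```properties
# Hypothetical flyway.properties sketch -- url/user/password are placeholders.
flyway.url=jdbc:oracle:thin:@//dbhost:1521/ORCL
flyway.user=MYUSER
flyway.password=secret
# Leave flyway.schemas empty (or omit it): Flyway then defaults to the
# connecting user's own schema, with the correct case, and init only
# needs to create the schema_history table, not the schema itself.
#flyway.schemas=
```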

"Error: unable to open database file" for GROUP BY query

I have a Python script which creates an SQLite database out of some external data. This works fine. But every time I execute a GROUP BY query on this database, I get an "Error: unable to open database file". Normal SELECT queries work.
This is an issue both for Python's sqlite3 library and for the sqlite3 CLI binary:
sqlite> SELECT count(*) FROM REC;
count(*)
----------
528489
sqlite> SELECT count(*) FROM REC GROUP BY VERSION;
Error: unable to open database file
sqlite>
I know that these errors are typically permission errors (I have read all the questions I could find on this topic on Stack Overflow), but I'm quite sure that's not the case here:
It's an already-created database and these are read requests
I checked the permissions: Both the file and its containing folders have write permissions set
I can even write to the database: Creating a new table is no problem.
The device is not full, it got plenty of space.
Ensure that your process has access to the TEMP directory.
From SQLite's Use Of Temporary Disk Files documentation:
SQLite may make use of transient indices to implement SQL language
features such as:
An ORDER BY or GROUP BY clause
The DISTINCT keyword in an aggregate query
Compound SELECT statements joined by UNION, EXCEPT, or INTERSECT
Each transient index is stored in its own temporary file. The
temporary file for a transient index is automatically deleted at the
end of the statement that uses it.
You can probably verify whether temporary storage is the problem by setting the temp_store pragma to MEMORY:
PRAGMA temp_store = MEMORY;
to tell SQLite to keep the transient index for the GROUP BY clause in memory.
Alternatively, create an explicit index on the column you are grouping by to prevent the transient index from being created.
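The pragma advice above can be demonstrated with Python's sqlite3 module. This uses an in-memory stand-in table with made-up names, not the asker's actual REC table:

```python
import sqlite3

# Sketch: apply PRAGMA temp_store = MEMORY before a GROUP BY query.
# Hypothetical stand-in table; the asker's real table is on disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rec (version TEXT)")
conn.executemany("INSERT INTO rec VALUES (?)",
                 [("v1",), ("v1",), ("v2",)])

# Keep transient indices (used by GROUP BY / ORDER BY / DISTINCT)
# in memory instead of temporary files, sidestepping TEMP-directory
# permission problems.
conn.execute("PRAGMA temp_store = MEMORY")

rows = conn.execute(
    "SELECT version, count(*) FROM rec GROUP BY version ORDER BY version"
).fetchall()
print(rows)  # [('v1', 2), ('v2', 1)]
```

For a file-backed database the connect() call would point at the file; the pragma applies per connection, so it must be issued before the GROUP BY query runs.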

SQLite INSERT or UPDATE with a custom condition

I know there are a lot of questions already asked and answered on this subject, but they don't seem to fit my situation.
I have a distant SQLite database (DB server) and a local one (DB local) containing photo album entries. DB local updates whenever needed from DB server. DB server has a primary key called identifier, which is stored in DB local to prevent duplicates, but DB local also has its own primary key column called id.
If I need to create a new album on my phone, I insert an entry in DB local with identifier set to -1, and when DB server is reachable I ask for a proper identifier.
My issue is: I do a lot of refreshes and don't want to increment my primary key each time.
When I refresh DB local from DB server I would like to INSERT new albums and UPDATE existing ones.
I read about the INSERT OR REPLACE statement, but it would require the identifier column in DB local to be unique. Unfortunately I cannot do that, since I can have multiple rows with identifier set to -1.
Is there any way to perform an INSERT or UPDATE conditionally in a single request?
Thanks!
EDIT: the update is done this way: DB local is updated from DB server; DB local data is never pushed to DB server. The only way to add a new item is to call an API on the server, which creates an empty entry on DB server and returns its identifier. But since the server is not always reachable (EDGE/3G), some entries in DB local have identifier set to -1. Once the API call has returned the corresponding identifier, we store it instead of -1 for that entry in DB local.
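One common approach for this situation, sketched here with made-up table and column names rather than taken from any answer to the question, is to UPDATE first and INSERT only when no row matched. Local drafts with identifier = -1 are excluded from the match, so they are never overwritten and the local autoincrement id is not consumed on every refresh:

```python
import sqlite3

# Sketch of a conditional insert-or-update without INSERT OR REPLACE.
# Hypothetical schema: `identifier` is the server key, `id` the local one.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE album (
    id INTEGER PRIMARY KEY,
    identifier INTEGER NOT NULL,
    title TEXT)""")

def upsert_album(conn, identifier, title):
    # Try to update an existing server-known album first; rows with
    # identifier = -1 (local drafts) are deliberately never matched.
    cur = conn.execute(
        "UPDATE album SET title = ? WHERE identifier = ? AND identifier <> -1",
        (title, identifier))
    if cur.rowcount == 0:
        # Only insert when nothing matched, so refreshing repeatedly
        # does not burn a new local id each time.
        conn.execute("INSERT INTO album (identifier, title) VALUES (?, ?)",
                     (identifier, title))

upsert_album(conn, 42, "Holidays")
upsert_album(conn, 42, "Holidays 2023")   # updates in place, same local id
print(conn.execute("SELECT identifier, title FROM album").fetchall())
```

This is two statements rather than the single request the asker hoped for, but it avoids the unique constraint on identifier that INSERT OR REPLACE would require.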
