AzerothCore Raspberry Pi 4B: Modules not loading, error saying "Table `command` contains data for non-existant command"

I have set up AzerothCore on my Raspberry Pi 4B running Ubuntu through the Bash setup, and it is working fine. However, I can't seem to get modules to work. I put the files into azerothcore-wotlk/modules, run CMake in the terminal via cmake azerothcore-wotlk, and run the SQL queries in the database. The modules still refuse to load, and I receive errors upon starting the server that say:
Table `command` contains data for non-existant command 'xp'. Skipped.
Table `command` contains data for non-existant command 'xp enable'. Skipped.
Table `command` contains data for non-existant command 'xp disable'. Skipped.
Table `command` contains data for non-existant command 'xp set'. Skipped.
Table `command` contains data for non-existant command 'xp view'. Skipped.
Table `command` contains data for non-existant command 'xp default'. Skipped.
Table `command` contains data for non-existant command 'chata'. Skipped.
Table `command` contains data for non-existant command 'chath'. Skipped.
Table `command` contains data for non-existant command 'chat'. Skipped.

Try turning on the autoupdater in worldserver.conf: Updates.EnableDatabases = 7
Then clear the version_db_world table
and restart the server.
It should apply all pending updates to your database when the server starts.
If the errors repeat after that, or the updater does not work, run db_assembler with parameter 5. It takes a little longer, but it is guaranteed to fill everything you need into the database.
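
As a concrete illustration, the steps above might look like this from a shell. This is only a sketch: the config path and the acore_world database name are assumptions based on a default bash-setup install.

# in env/dist/etc/worldserver.conf (path may differ on your install), set:
#   Updates.EnableDatabases = 7
# then clear the world updater's version table so pending updates re-apply:
mysql -u root -p acore_world -e "TRUNCATE TABLE version_db_world;"
# restart the worldserver and watch the startup log for applied updates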

Nevermind, it turns out I was trying to run cmake and the other commands separately, and ended up breaking things. I started from scratch, updated the server properly after adding a module, and everything is working now.
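
For anyone following along, the sequence that ends up working is roughly this (a sketch, assuming the standard acore.sh dashboard that ships with the bash setup):

cd azerothcore-wotlk
# re-run the full cmake + make cycle through the dashboard instead of
# invoking cmake and make by hand:
./acore.sh compiler build
# then start the worldserver; with the autoupdater enabled it applies
# the module's SQL on startup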

Related

How can I quit sqlite from a command batch file?

I am trying to create a self-contained command for my build pipeline which inserts data and quits.
So far I have created my data files
things-to-import-001.sql, things-to-import-002.sql, etc., which contain all the INSERT statements I'd like to run, with one file per table.
I have created a command file to run them:
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.quit
However, when I run my command
sqlite3 -init ./import-all.sql ./database.sqlite
...the data is inserted, but the program remains running and shows the sqlite> prompt, despite the .quit command. I have also tried using .exit 0.
From the sqlite3 --help
-init FILENAME read/process named file
Docs: https://www.sqlite.org/cli.html#reading_sql_from_a_file
How can I tell sqlite to exit once my inserts have finished?
I have managed to find a dirty workaround for this issue.
I updated my import file to include a bad command, and executed with -bail so that sqlite3 quits on the first error.
-- import-all.sql
.read ./things-to-import-001.sql
.read ./things-to-import-002.sql
.fakeErrorToQuitWithBail
Then you can execute with
sqlite3 -init import-all.sql -bail ./database.sqlite
and it should quit with
Error: unknown command or invalid arguments: "fakeErrorToQuitWithBail". Enter ".help" for help
Try using ".exit" in place of ".quit". For some reason SQLite doesn't document these commands well.
https://www.tutorialspoint.com/sqlite/sqlite_commands.htm
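
For completeness, the sqlite> prompt seems to appear because the -init file is processed before sqlite3 decides whether to go interactive. Feeding the script on stdin, or passing a command as an argument, avoids the workaround entirely, since sqlite3 exits on its own in both cases. A sketch using the same file names as above:

sqlite3 ./database.sqlite < ./import-all.sql
# or, equivalently, run the import file as a single command argument:
sqlite3 ./database.sqlite ".read ./import-all.sql"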

How to migrate data to new server? (Sequence generator issue)

What is the right way to back up and restore a MariaDB database that has sequence generation enabled (i.e. NOT autoincrement)? (This includes migrating to a new server.)
Is it possible to instruct the sequence generator to resume at a specific ID value? How?
Steps I take to reproduce my issue
I wish to transfer an application to a new server:
Backup data on source server:
mysqldump --skip-opt --no-create-db --no-create-info --hex-blob [database-name] [...list of tables...] > data-backup.sql
On target server, create new empty database (same name)
Build/run JHipster Spring application on target server: java -jar myapp.jar (Running this application recreates/configures a new instance of the database on the target server.)
Restore data:
mysql [database-name] < data-backup.sql
All the above steps produce no errors (so far).
Problem
When I follow these steps, the database is restored (apparently perfectly). I can log in to the application and access all information. BUT when I attempt to create new entities (i.e. save something to the database), I get an ID 'Duplicate entry' error in the server logs:
2022-03-24 12:54:43.775 ERROR 11277 --- [ XNIO-1 task-1] o.h.e.jdbc.batch.internal.BatchingBatch : HHH000315: Exception executing batch [java.sql.BatchUpdateException: (conn=33) Duplicate entry '1001' for key 'PRIMARY'], SQL: insert into product (name, id) values (?, ?)
2022-03-24 12:54:43.776 WARN 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 1062, SQLState: 23000
2022-03-24 12:54:43.776 ERROR 11277 --- [ XNIO-1 task-1] o.h.engine.jdbc.spi.SqlExceptionHelper : (conn=33) Duplicate entry '1001' for key 'PRIMARY'
2022-03-24 12:54:43.779 ERROR 11277 --- [ XNIO-1 task-1] o.z.problem.spring.common.AdviceTraits : Internal Server Error
org.springframework.dao.DataIntegrityViolationException: could not execute batch; SQL [insert into product (name, id) values (?, ?)]; constraint [PRIMARY]; nested exception is org.hibernate.exception.ConstraintViolationException: could not execute batch
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:276)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:233)
at org.springframework.orm.jpa.JpaTransactionManager.doCommit(JpaTransactionManager.java:566)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransactionManager.java:743)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:711)
at org.springframework.transaction.interceptor.TransactionAspectSupport.commitTransactionAfterReturning(TransactionAspectSupport.java:654)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:407)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:119)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:753)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:698)
at com.mycompany.app.web.rest.ProductResource$$EnhancerBySpringCGLIB$$84c14d6d.createProduct(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
...
Clearly my backup/restore process is not accounting properly for the sequence generator, which generates ID values that conflict with the existing data.
What am I doing wrong? What is the right process for backing up/restoring such a database?
Environment: JHipster 7.7.0 (Angular, monolithic), MariaDB 10.4, OpenJDK 16.0.2_7, OS Windows 10 Pro and openSUSE 15.2, Firefox 98.0.2 and Chrome 99.0.4844.84.
PS: I previously reported this issue here, aimed at the JHipster community, but got limited response. I think I need a MySQL/MariaDB expert opinion on this.
(Apologies in advance: I'm not a database expert. The technique I outline above has served me well for years, but previously I was dealing with AUTO_INCREMENT. This sequence generator has me baffled.)
Ok! I have solutions.
[For the sake of these notes, let's call the database: mydata. Also, in JHipster, the MariaDB sequence generator is called: sequence_generator]
Let's consider two situations:
(1) Simple migration
If you are merely migrating the application to a new server, the process is straightforward:
Step 1: On the original server backup and secure your database: mysqldump -u root -p mydata > mydata.sql
Step 2: Transfer the SQL file to the new server, along with the JHipster JAR file
Step 3: On the new server, create an empty database with the same name, and restore the data: mysql -u root -p mydata < mydata.sql
Step 4: Now launch your JHipster application, and everything should work
(2) Model modification
The assumption is that you have modified your model in some way (e.g. added properties to one or more entities). This solution is fiddly, but it works (for me).
Step 1: Backup your database, and secure it (in case something goes wrong): mysqldump -u root -p mydata > mydata.sql
Step 2: Backup and secure the original JHipster JAR that works with the original database
Step 3: Duplicate your database (schema and data) in a new database: mydata_bk
Step 4: Drop your original database, and create a new empty database
Step 5: Launch your new JHipster JAR, and give it time to create the new database schema, then stop the application
Step 6: Use a tool (DataGrip, sqlYog, etc) to compare the old (mydata_bk) and new schema (mydata), and modify the old schema to match the new schema
Step 7: Restore/copy all data from mydata_bk to mydata, EXCEPT for the tables DATABASECHANGELOG, DATABASECHANGELOGLOCK and the special sequence_generator table
Step 8: Open the mydata.sql SQL file, and at the top, after initial comments, one of the first instructions will read:
--
-- Sequence structure for `sequence_generator`
--
DROP SEQUENCE IF EXISTS `sequence_generator`;
CREATE SEQUENCE `sequence_generator` start with 2000 minvalue 1 maxvalue 9223372036854775806 increment by 50 cache 1000 nocycle ENGINE=InnoDB;
SELECT SETVAL(`sequence_generator`, 201050, 0);
The specific numbers may vary, but the broad details will be similar. In a MariaDB SQL console, execute each of those SQL statements: DROP SEQUENCE ...;, CREATE SEQUENCE ...;, and SELECT SETVAL(...);
Step 9: Launch your JHipster application.
Hope this helps others that run into similar issues. Let me know if you have a better approach!
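
One way to sanity-check the result is to compare where the sequence will resume against the highest ID already in a table. A hypothetical check from the shell (mydata, product, and sequence_generator are the names used above; the SETVAL value is only an example and should sit safely above your real MAX(id)):

# where will the sequence resume?
mysql -u root -p mydata -e "SELECT next_not_cached_value FROM sequence_generator;"
# what is the highest ID already in use?
mysql -u root -p mydata -e "SELECT MAX(id) FROM product;"
# if the sequence is behind, advance it past the existing IDs:
mysql -u root -p mydata -e "SELECT SETVAL(sequence_generator, 201050, 0);"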

Is there a way to check existence of file in MVS before attempting copy to Unix server

A file is to be copied from an MVS mainframe to a Unix server using Connect:Direct. Below is a sample script which works fine. Before copying, is there a way to verify that the file exists on MVS?
submit FILE_COPY process
SNODE=${SENDING_NODE} SNODEID=(${USERNAME},${PASSWORD})
&INDSN="$INPUT_FILE"
&OUTDSN="$OUTPUT_DIR$OUTPUT_FILE"
COPYSTEP COPY FROM (FILE="&INDSN")
TO
(FILE="&OUTDSN"
UNIT=SYSDA
SYSOPTS=":datatype=text:"
DISP=RPL
SPACE=(TRK,(100,50),RLSE)
DCB=(RECFM=FBA,LRECL=216,BLKSIZE=0)
pnode)
PEND;
EOF
Yes, if you are running in batch. Simply have a step before your C:D step execute IDCAMS and print the first line of the dataset. If the dataset is not found, a non-zero return code is set; just check that in the EXEC statement of your C:D step.
(If you had tagged this with Mainframe I would have seen this 2 weeks ago.)
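
A hypothetical JCL sketch of that idea (the dataset and step names are made up; IDCAMS sets a non-zero return code, typically 12, if INDATASET cannot be allocated, and the COND parameter then bypasses the Connect:Direct step):

//CHKFILE  EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  PRINT INDATASET('MY.INPUT.DATASET') COUNT(1)
/*
//* run the Connect:Direct step only if the PRINT ended with RC 0
//CDSTEP   EXEC PGM=...,COND=(0,LT,CHKFILE)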

Why am I getting: database is locked, in an SQLite3 script?

I'm getting an error when running an SQLite script.
--drop use table before replacing it
DROP TABLE IF EXISTS db.use;
--Create the use table in the saved database
CREATE TABLE db.use AS SELECT * FROM use2; -- this is the line that generates the error: Error: near line 145: database is locked
Are these two statements run asynchronously or something? I don't understand what's causing the error, but I'm wondering if it has to do with that.
Might there be a way to run the script in a lock-step manner, i.e. non-asynchronously?
This is how I was running the command: sqlite3 --init script_name.sql dbname.db, and elsewhere in the script I had an ATTACH statement reading the same database dbname.db. Essentially reading the same file twice.
The way I solved this was by executing the script in the sqlite3 shell:
sqlite3> .read script_name.sql
Have you tried adding a COMMIT statement after the DROP statement?
I think that would make sure the CREATE TABLE statement runs only after the DROP statement is completely done.
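
If the script does run inside an explicit transaction, that suggestion would look something like this (a sketch based on the fragment above; it only applies when a transaction is actually open, since COMMIT fails with "cannot commit - no transaction is active" otherwise):

--drop use table before replacing it
DROP TABLE IF EXISTS db.use;
COMMIT; -- assumption: an explicit BEGIN appeared earlier in the script
BEGIN;
--Create the use table in the saved database
CREATE TABLE db.use AS SELECT * FROM use2;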

fast export unexplained failure

I have roughly 14 million records that I am attempting to export from a Teradata table to file using a fast export connection object.
There is no size limit for fast export files on our Linux system, and there is 1.2 TB of available space in the target directory.
The session fails, and gives the following errors:
READER_2_1_1 FEXP_87011 Process [16022] exited with status [12]
SDKS_38200 Partition-level [SOURCE_TABLE_NAME]: Plug-in #305400 failed in deinit()
I googled the error message, and found this post:
Here
I followed the recommendations in the post: delete the .out file in the temp directory, delete the files that were partially written in the target directory, drop the error table, and delete the log file. This did not fix the issue, and the session still fails with the same error messages.
Try using the TPT Export plug-in instead. You can also try executing the export with BTEQ scripts directly in your Unix environment.
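
A minimal BTEQ export sketch along those lines (every name and path here is hypothetical; BTEQ exports are slower than FastExport for 14 million rows, but they avoid the FastExport plug-in entirely):

.LOGON tdpid/your_user,your_password;
.EXPORT DATA FILE = /target/dir/source_table.dat;
SELECT * FROM source_table;
.EXPORT RESET;
.LOGOFF;
.QUIT;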
