[ERROR]: Unknown column 'DamageModifier' in 'field list' - azerothcore

I installed the DB with updates and I got this error:
Opening DatabasePool 'wotlk_world'. Asynchronous connections: 1, synchronous connections: 1.
MySQL client library: 5.6.42
MySQL server ver: 5.6.42
MySQL client library: 5.6.42
MySQL server ver: 5.6.42
[ERROR]: In mysql_stmt_prepare() id: 4, sql: "SELECT entryorguid, source_type, id, link, event_type, event_phase_mask, event_chance, event_flags, event_param1, event_param2, event_param3, event_param4, event_param5, action_type, action_param1, action_param2, action_param3, action_param4, action_param5, action_param6, target_type, target_param1, target_param2, target_param3, target_param4, target_x, target_y, target_z, target_o FROM smart_scripts ORDER BY entryorguid, source_type, id, link"
[ERROR]: Unknown column 'event_param5' in 'field list'
[ERROR]: In mysql_stmt_prepare() id: 54, sql: "SELECT difficulty_entry_1, difficulty_entry_2, difficulty_entry_3, KillCredit1, KillCredit2, modelid1, modelid2, modelid3, modelid4, name, subname, IconName, gossip_menu_id, minlevel, maxlevel, exp, faction, npcflag, speed_walk, speed_run, scale, rank, mindmg, maxdmg, dmgschool, attackpower, DamageModifier, BaseAttackTime, RangeAttackTime, unit_class, unit_flags, unit_flags2, dynamicflags, family, trainer_type, trainer_spell, trainer_class, trainer_race, minrangedmg, maxrangedmg, rangedattackpower, type, type_flags, lootid, pickpocketloot, skinloot, resistance1, resistance2, resistance3, resistance4, resistance5, resistance6, spell1, spell2, spell3, spell4, spell5, spell6, spell7, spell8, PetSpellDataId, VehicleId, mingold, maxgold, AIName, MovementType, InhabitType, HoverHeight, HealthModifier, ManaModifier, ArmorModifier, RacialLeader, movementId, RegenHealth, mechanic_immune_mask, flags_extra, ScriptName FROM creature_template WHERE entry = ?"
[ERROR]: Unknown column 'DamageModifier' in 'field list'
DatabasePool wotlk_world NOT opened. There were errors opening the MySQL connections. Check your SQLDriverLogFile for specific errors.
Cannot connect to world database 127.0.0.1;3306;root;ascent;wotlk_world

The problem is:
[ERROR]: Unknown column 'DamageModifier' in 'field list'
It looks like your world DB is not up to date, so you need to update it properly. To do that you can either use the DB assembler script (bin/acore-db-asm) or manually import the missing sql files from data/sql/updates/db_world.
To make sure your DB is up to date, check the name of the last column of the table version_db_world in your world database. It should match the name of the most recent sql file in the directory data/sql/updates/db_world.
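A quick way to run that check from the MySQL client (a sketch; it assumes your world database is named wotlk_world, as in the log above):
-- List the columns of version_db_world; the name of the last column
-- should match the newest file in data/sql/updates/db_world.
SHOW COLUMNS FROM wotlk_world.version_db_world;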
I recommend reading:
How to make sure that the DB is up to date

Related

Can't add a field to a model in Symfony, bin/console crashes

I'm working with the Sylius framework, following the guide to customize models.
I am trying to add a field notice to the model Taxon, which is already overridden in my project. For that, I added the field notice to the Taxon.orm.yml of the model:
MyProject\Bundle\ShopBundle\Entity\Taxon:
    type: entity
    table: sylius_taxon
    # {Relationships code...}
    fields:
        # {Some existing fields...}
        notice:
            type: text
            nullable: true
I also added a field, a getter and a setter to the overriding Taxon class.
Then I'm trying to run bin/console doctrine:migrations:diff, but when I run bin/console even without any arguments, it crashes with the following exception:
[Doctrine\DBAL\Exception\InvalidFieldNameException]
An exception occurred while executing 'SELECT s0_.code AS code_0, s0_.tree_left AS tree_left_1, s0_.tree_right AS tree_right_2, s0_.tree_level AS tree_level_3, s0_.position AS position_4, s0_.id AS id_5, s0_.created_at AS created_at_6, s0_.updated_at AS updated_at_7, s0_.enabled AS enabled_8, s0_.default_markup AS default_markup_9, s0_.notice AS notice_10, s0_.tree_root AS tree_root_11, s0_.parent_id AS parent_id_12 FROM sylius_taxon s0_ WHERE s0_.parent_id IS NULL ORDER BY s0_.tree_left ASC':
SQLSTATE[42S22]: Column not found: 1054 Unknown column 's0_.notice' in 'field list'
[Doctrine\DBAL\Driver\PDOException]
SQLSTATE[42S22]: Column not found: 1054 Unknown column 's0_.notice' in 'field list'
[PDOException]
SQLSTATE[42S22]: Column not found: 1054 Unknown column 's0_.notice' in 'field list'
If I remove the changes to Taxon.orm.yml then bin/console works again. What is missing in my changes?
One of my bundles' configuration contained that model's repository; that was it. I temporarily deleted the bundle's configuration from config.yml, and bin/console worked.
When you add a new field, you should run bin/console doctrine:schema:update (or generate and run a migration) so the database schema matches the mapping.
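For reference, the schema update for this mapping boils down to a single DDL statement like the following (a sketch; the exact SQL Doctrine generates depends on your platform and may differ):
-- Hypothetical equivalent of doctrine:schema:update for the new
-- nullable text field on sylius_taxon (MySQL dialect assumed).
ALTER TABLE sylius_taxon ADD notice LONGTEXT DEFAULT NULL;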

Flyway migration blocked by null version_rank

I'm using PostgreSQL 9.5 and Flyway 5.0.7.
Everything worked fine for the previous 6 migrations, but now it fails on the latest one.
I get the following error:
22:27:45.230 [INFO ] o.f.c.i.u.l.slf4j.Slf4jLog - Flyway Community Edition 5.0.7 by Boxfuse
22:27:45.408 [INFO ] o.f.c.i.u.l.slf4j.Slf4jLog - Database: jdbc:postgresql://localhost:32767/my_db (PostgreSQL 9.5)
22:27:45.566 [INFO ] o.f.c.i.u.l.slf4j.Slf4jLog - Successfully validated 7 migrations (execution time 00:00.061s)
22:27:45.658 [INFO ] o.f.c.i.u.l.slf4j.Slf4jLog - Current version of schema "public": 6
22:27:45.733 [INFO ] o.f.c.i.u.l.slf4j.Slf4jLog - Migrating schema "public" to version 7 - update
Exception in thread "main" org.flywaydb.core.internal.exception.FlywaySqlException:
Unable to insert row for version '7' in Schema History table "public"."flyway_schema_history"
---------------------------------------------------------------------------------------------
SQL State : 23502
Error Code : 0
Message : ERROR: null value in column "version_rank" violates not-null constraint
Detail: Failing row contains (null, 7, 7, update, SQL, V7__update.sql, -1303600795, postgres, 2018-02-25 22:28:00.536556, 158, t).
at org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory.doAddAppliedMigration(JdbcTableSchemaHistory.java:171)
at org.flywaydb.core.internal.schemahistory.SchemaHistory.addAppliedMigration(SchemaHistory.java:146)
at org.flywaydb.core.internal.command.DbMigrate.doMigrateGroup(DbMigrate.java:378)
at org.flywaydb.core.internal.command.DbMigrate.access$400(DbMigrate.java:52)
at org.flywaydb.core.internal.command.DbMigrate$5.call(DbMigrate.java:297)
at org.flywaydb.core.internal.util.jdbc.TransactionTemplate.execute(TransactionTemplate.java:75)
at org.flywaydb.core.internal.command.DbMigrate.applyMigrations(DbMigrate.java:294)
at org.flywaydb.core.internal.command.DbMigrate.migrateGroup(DbMigrate.java:259)
at org.flywaydb.core.internal.command.DbMigrate.access$300(DbMigrate.java:52)
at org.flywaydb.core.internal.command.DbMigrate$4.call(DbMigrate.java:179)
at org.flywaydb.core.internal.command.DbMigrate$4.call(DbMigrate.java:176)
at org.flywaydb.core.internal.database.postgresql.PostgreSQLAdvisoryLockTemplate.execute(PostgreSQLAdvisoryLockTemplate.java:71)
at org.flywaydb.core.internal.database.postgresql.PostgreSQLConnection.lock(PostgreSQLConnection.java:110)
at org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory.lock(JdbcTableSchemaHistory.java:148)
at org.flywaydb.core.internal.command.DbMigrate.migrateAll(DbMigrate.java:176)
at org.flywaydb.core.internal.command.DbMigrate.migrate(DbMigrate.java:145)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:1206)
at org.flywaydb.core.Flyway$1.execute(Flyway.java:1168)
at org.flywaydb.core.Flyway.execute(Flyway.java:1655)
at org.flywaydb.core.Flyway.migrate(Flyway.java:1168)
at com.test.MyApplication.main(MainApplication.java:47)
Caused by: org.postgresql.util.PSQLException: ERROR: null value in column "version_rank" violates not-null constraint
Detail: Failing row contains (null, 7, 7, update, SQL, V7__update.sql, -1303600795, postgres, 2018-02-25 22:28:00.536556, 158, t).
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2422)
at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2167)
at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:306)
at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:441)
at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:365)
at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:155)
at org.postgresql.jdbc.PgPreparedStatement.executeUpdate(PgPreparedStatement.java:132)
at org.flywaydb.core.internal.util.jdbc.JdbcTemplate.update(JdbcTemplate.java:334)
at org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory.doAddAppliedMigration(JdbcTableSchemaHistory.java:165)
... 20 more
Any idea why this column "version_rank" is not generated or initialized?
Thanks in advance for your help.
You upgraded from Flyway 3.x to 5.x, skipping 4.x. This is not supported, as explained in the release notes: https://flywaydb.org/documentation/releaseNotes#5.0.0
Upgrade to 4.2.0 first before upgrading to 5.x and everything will work as expected.
Also please take a minute to check the release notes next time you upgrade a major version.
Here's the commit with the metadata table changes if you need to apply the changes manually. I've linked to the version for postgres - search for the version of upgradeMetaDataTable.sql in the folder matching the required dialect.
Fortunately you can apply the changes as a standard flyway migration, as the metadata tables are not used until the end of each script.
For example, create a migration V999.00__FlywayFix.sql to correct a Flyway version table called flyway_table as follows:
DROP INDEX "flyway_table_vr_idx";
DROP INDEX "flyway_table_ir_idx";
ALTER TABLE "flyway_table" DROP COLUMN "version_rank";
ALTER TABLE "flyway_table" DROP CONSTRAINT "flyway_table_pk";
ALTER TABLE "flyway_table" ALTER COLUMN "version" DROP NOT NULL;
ALTER TABLE "flyway_table" ADD CONSTRAINT "flyway_table_pk" PRIMARY KEY ("installed_rank");
UPDATE "flyway_table" SET "type"='BASELINE' WHERE "type"='INIT';
This worked for me on Postgres:
CREATE TABLE flyway_schema_history (
    installed_rank INTEGER NOT NULL,
    version varchar(50) DEFAULT NULL,
    description varchar(200) NOT NULL,
    type varchar(20) NOT NULL,
    script varchar(1000) NOT NULL,
    checksum INTEGER DEFAULT NULL,
    installed_by varchar(100) NOT NULL,
    installed_on timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    execution_time INTEGER NOT NULL,
    success BOOLEAN NOT NULL,
    PRIMARY KEY (installed_rank)
);
INSERT INTO flyway_schema_history (installed_rank, version, description, type, script, checksum, installed_by, installed_on, execution_time, success)
SELECT installed_rank, version, description, type, script, checksum, installed_by, installed_on, execution_time, success
FROM schema_version;
ALTER TABLE schema_version RENAME TO bak_schema_version;
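After the copy it is worth sanity-checking the new history table before running the next migration (a generic check, not part of the original answer):
-- Verify that every applied migration was carried over in order.
SELECT installed_rank, version, script, success
FROM flyway_schema_history
ORDER BY installed_rank;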

Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR when use openstacksdk to create_server

When I create the OpenStack server, I get the exception below:
Resource 7bed8adc-9ed9-49dc-b15e-6660e2fc3285 transitioned to failure state ERROR
My code is below:
server_args = {
    "name": server_name,
    "image_id": image_id,
    "flavor_id": flavor_id,
    "networks": [{"uuid": network.id}],
    "admin_password": admin_password,
}
try:
    server = user_conn.conn.compute.create_server(**server_args)
    server = user_conn.conn.compute.wait_for_server(server)
except Exception as e:  # here I catch the exception
    raise e
When calling create_server, my server_args data is as below:
{'flavor_id': 'd4424892-4165-494e-bedc-71dc97a73202', 'networks': [{'uuid': 'da4e3433-2b21-42bb-befa-6e1e26808a99'}], 'admin_password': '123456', 'name': '133456', 'image_id': '60f4005e-5daf-4aef-a018-4c6b2ff06b40'}
My openstacksdk version is 0.9.18.
In the end, I found that the flavor was too big for the OpenStack compute node, so I changed it to a smaller flavor and the server was created successfully.

I can't restore the dump file

I am using Oracle 11g Express and I used this command to restore a dump file:
impdp SCHEMAS=datamining DIRECTORY=data_pump_dir DUMPFILE=dm.dmp remap_tablespace=system:users
Then it gave me the error below:
.
.
.
x number;
y varchar2(200);
l_input utl_file.file_type;
begin
dbms_output.enable;
dbms_output.put_line(year_);
dbms_output.put_line(cycle);
select count(0) into x f
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
ALTER FUNCTION "DATAMINING"."AUTO_CORR" COMPILE PLSQL_OPTIMIZE_LEVEL= 2
PLSQL_CODE_TYPE= INTERPRETED PLSQL_DEBUG= FALSE PLSCOPE_SETTINGS= 'ID
ENTIFIERS:NONE' REUSE SETTINGS TIMESTAMP '2015-10-02 10:04:21'
ORA-39083: Object type ALTER_FUNCTION failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
.
.
.
.
CREATE FORCE VIEW "DATAMINING"."B_CLEANING_1" ("FILE_", "CUSTOMER_ID", "PRE_DEB
T", "PRICE", "DATE_", "CYCLE_", "YEAR_") AS select b."FILE_",b."CUSTOMER_ID",b."
PRE_DEBT",b."PRICE",b."DATE_",b."CYCLE_",b."YEAR_"
from bills b
where b.c
ORA-39083: Object type VIEW failed to create with error:
ORA-31625: Schema DATAMINING is needed to import this object, but is unaccessible
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 6720
ORA-01435: user does not exist
Failing sql is:
CREATE FORCE VIEW "DATAMINING"."MT" ("FILE_", "CUSTOMER_ID", "SPECIAL_TYPE", "U
SAGE1", "USAGE2", "USAGE3", "CYCLE_", "YEAR_", "END_", "LENGTH_DAY", "CONTRACT_P
OWER", "PRE_DEBT", "PRICE", "C_CYCLE", "B_DATE", "B_CYCLLE") AS select m.file
Processing object type SCHEMA_EXPORT/TYPE/TYPE_BODY
ORA-39083: Object type TYPE_BODY failed to create with error:
ORA-01435: user does not exist
Failing sql is:
CREATE TYPE BODY "DATAMINING"."AUTO_CORRIMPL" IS
STATIC FUNCTION ODCIAggregateInitialize(sctx IN OUT auto_corrImpl)
RETURN NUMBER IS
-- initialize the variables
BEGIN
sctx := auto_corrImpl(0, sys.odcinumberlist());
RETURN ODCIConst.Success;
END;
MEMBER FUNCTION ODCIAggregateIterate(self IN OUT auto_corrImpl,
VALUE IN NUMBER) RETURN NU
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_01" completed with 69 error(s) at 06:29:50
Comment 1: we used remap_tablespace because we had wrongly put the schema on the system tablespace.
Comment 2: we ran this on one computer and it worked, but it is not working on the other computer.
Please guide me...
It seems the datamining user does not exist. Please create it and retry the import.
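A minimal sketch of creating the user before re-running impdp (the password, default tablespace, and grants are placeholders; adjust them to your environment):
-- Create the missing schema owner, then retry the import.
CREATE USER datamining IDENTIFIED BY your_password
    DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
GRANT CONNECT, RESOURCE TO datamining;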
Thanks
Sabiha

SQLITE_ERROR: Connection is closed when connecting from Spark via JDBC to SQLite database

I am using Apache Spark 1.5.1 and trying to connect to a local SQLite database named clinton.db. Creating a data frame from a table of the database works fine, but when I perform some operations on the created object, I get the error below, which says "SQL error or missing database (Connection is closed)". The funny thing is that I get the result of the operation nevertheless. Any idea what I can do to solve the problem, i.e., avoid the error?
Start command for spark-shell:
../spark/bin/spark-shell --master local[8] --jars ../libraries/sqlite-jdbc-3.8.11.1.jar --classpath ../libraries/sqlite-jdbc-3.8.11.1.jar
Reading from the database:
val emails = sqlContext.read.format("jdbc").options(Map("url" -> "jdbc:sqlite:../data/clinton.sqlite", "dbtable" -> "Emails")).load()
Simple count (fails):
emails.count
Error:
15/09/30 09:06:39 WARN JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1$$anonfun$8.apply(JDBCRDD.scala:358)
at org.apache.spark.TaskContextImpl$$anon$1.onTaskCompletion(TaskContextImpl.scala:60)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:79)
at org.apache.spark.TaskContextImpl$$anonfun$markTaskCompleted$1.apply(TaskContextImpl.scala:77)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.TaskContextImpl.markTaskCompleted(TaskContextImpl.scala:77)
at org.apache.spark.scheduler.Task.run(Task.scala:90)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
res1: Long = 7945
I got the same error today, and the important line is just before the exception:
15/11/30 12:13:02 INFO jdbc.JDBCRDD: closed connection
15/11/30 12:13:02 WARN jdbc.JDBCRDD: Exception closing statement
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (Connection is closed)
at org.sqlite.core.DB.newSQLException(DB.java:890)
at org.sqlite.core.CoreStatement.internalClose(CoreStatement.java:109)
at org.sqlite.jdbc3.JDBC3Statement.close(JDBC3Statement.java:35)
at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$$anon$1.org$apache$spark$sql$execution$datasources$jdbc$JDBCRDD$$anon$$close(JDBCRDD.scala:454)
So Spark succeeded in closing the JDBC connection, and then it failed to close the JDBC statement.
Looking at the source, close() is called twice:
Line 358 (org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD, Spark 1.5.1)
context.addTaskCompletionListener{ context => close() }
Line 469
override def hasNext: Boolean = {
  if (!finished) {
    if (!gotNext) {
      nextValue = getNext()
      if (finished) {
        close()
      }
      gotNext = true
    }
  }
  !finished
}
If you look at the close() method (line 443)
def close() {
  if (closed) return
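  // guard against a double close; note that closed is never set to true anywhere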
you can see that it checks the variable closed, but that value is never set to true.
If I see it correctly, this bug is still in master. I have filed a bug report.
Source: JDBCRDD.scala (line numbers differ slightly)
